ENCOR FAQs & Tips

November 16th, 2023 digitaltut 218 comments

In this article, I will try to summarize the most Frequently Asked Questions about the ENCOR 350-401 v1.1 exam. I hope it saves you some time searching the Internet and asking your friends & teachers.

1. Please tell me how many questions are in the real ENCOR exam, and how much time do I have to answer them?

You have 120 minutes to answer 102 questions, including multiple-choice and drag-and-drop questions. The exam currently also contains a few lab sims (3 to 4). If your native language is not English, Cisco allows you a 30-minute exam time extension, but there are a few requirements to get this extension, so it is best to ask your teacher or mentor before taking the exam.

2. How much does the ENCOR 350-401 cost? And how many points do I need to pass the exam?

This exam costs $400. You need at least 825/1000 points to pass. You will no longer see your exact exam score after the test; you will only see whether you passed or failed, along with your performance in each section (in percent). Sometimes the status remains “Score Pending” and you have to wait a few days (up to 72 business hours) for the Pearson VUE portal to reflect your actual result (“Pass” or “Fail”).

3. I passed the ENCOR exam, will I get a CCNP certificate for it?

No, ENCOR is only the core exam of the CCNP Enterprise certification. In order to get the CCNP Enterprise certification, you need to pass the ENCOR exam and one of the following concentration exams:
– 300-410 ENARSI: Implementing Cisco Enterprise Advanced Routing and Services (ENARSI)
– 300-415 ENSDWI: Implementing Cisco SD-WAN Solutions (ENSDWI)
– 300-420 ENSLD: Designing Cisco Enterprise Networks (ENSLD)
– 300-425 ENWLSD: Designing Cisco Enterprise Wireless Networks (ENWLSD)
– 300-430 ENWLSI: Implementing Cisco Enterprise Wireless Networks (ENWLSI)
– 300-435 ENAUTO: Implementing Automation for Cisco Enterprise Solutions (ENAUTO)

For example, you need to pass the ENCOR and ENARSI exams to get the CCNP Enterprise certificate.

4. So which concentration exam should I choose to complete my CCNP Enterprise cert?

First, you should understand what each of the concentration exams above covers:

– In the ENARSI exam you will learn more about routing (EIGRP, OSPF, BGP, VPN & VRF-Lite) and services (DHCP, AAA, SNMP, uRPF, IP SLA, NetFlow), mainly how to troubleshoot them.
– In the ENSDWI exam you will learn mainly about the Cisco SD-WAN architecture (vBond, vSmart, vManage and vEdge) and how these components work. If your company uses them or you have a special reason to know about them, then this is the exam to take.
– In the ENSLD exam you will learn how to design networks around popular routing protocols and WAN technologies, and how to describe SD-Access and SD-WAN.
– The two exams ENWLSD and ENWLSI teach you about wireless in detail.
– The ENAUTO exam teaches you how to program and automate your network with APIs (JSON, XML, YANG; NETCONF and RESTCONF) using Python. The “network” here includes IOS XE devices, Cisco DNA Center, Cisco SD-WAN and Cisco Meraki.

Among the above exams, only the ENARSI exam teaches you about “traditional” networking. If you don’t have any special reason to take one of the other exams, then it is the most suitable exam for you. If you already know how to program/code, then ENAUTO is also a recommended exam to take.

If you are still in doubt about any exam, we recommend finding the syllabus of that exam and having a closer look yourself before deciding; just google it (keyword: “syllabus” + the exam name). We don’t post direct links here because the topics of these exams may change in the future, and we want you to find the latest syllabus for each exam.

5. In the real exam, I clicked “Next” after choosing my answer. Can I go back to review it?

No, you can’t go back, so you can’t re-check your answers after clicking the “Next” button.

6. What are your recommended materials for ENCOR?

There are many options you can choose, but below are materials used and recommended by many candidates:

Recommended Books

  • CCNP and CCIE Enterprise Core ENCOR 350-401 Official Cert Guide

Video training

  • CBT Nuggets
  • INE

Simulator (all are free)

  • GNS3 – the best simulator for learning ROUTE
  • Packet Tracer
  • EVE-NG

7. Are the exam questions the same in all the geographical locations?

Yes, the exam questions are the same in all geographical locations. But note that Cisco maintains a pool of questions, and each time you take the exam a random subset of questions shows up, so you will not see exactly the same questions as on a previous attempt.

8. I passed the ENCOR exam. Do you have any similar sites for the other CCNP Enterprise exams?

We have certprepare.com for CCNP Enterprise ENSDWI (a concentration exam of CCNP Enterprise certification) and networktut.com for ENARSI (another concentration exam of CCNP Enterprise certification).

We also have other sites (but only for sharing experience) like voicetut.com for Voice/Collaboration track, securitytut.com for Security track, dctut.com for Data Center track, sptut.com for Service Provider track, wirelesstut.com for Wireless track, opstut.com for DevNet track. Hope you enjoy these sites and find useful information too!

9. How many CCNP tracks does Cisco support now?

Cisco supports 7 CCNP tracks, which are:

1. CCNP Enterprise
2. CCNP Security
3. CCNP Service Provider
4. CCNP Collaboration
5. CCNP Data Center
6. Cisco Certified DevNet Professional
7. Cisco Certified CyberOps Professional

In each track, you need to pass a dedicated core exam then pass one concentration exam of that track. Please check the picture below for more detail:

Cisco_Next_Level_Certification_Path.jpg

Note: With these new tracks, CCNA is no longer a prerequisite for CCNP. You can go directly for CCNP certs, but CCNA-level knowledge is highly recommended if you want to reach CCNP.

10. I passed the (old) CCNP but my CCNP cert is going to expire and I want to recertify. Which exam should I take?

According to the Cisco Recertification Policy page, you need to complete one of the following:

– Pass one technology core exam
– Pass any two professional concentration exams
– Pass one CCIE lab exam

Therefore, if you only want to take one exam to recertify, you must pass the ENCOR 350-401 exam or any technology core exam of another track (for example the DCCOR 350-601 of the Data Center track or the SCOR 350-701 of the Security track).

Alternatively, earning 80 Continuing Education (CE) credits also recertifies your CCNP Enterprise cert.

Is there anything you want to ask? Just ask! All of us will help you.

Point to Point Protocol (PPP) Tutorial

digitaltut No comments

Note: The Point-to-Point Protocol is not a topic in CCNA 200-301 so if you are preparing for this exam you can ignore this tutorial.

Point-to-Point Protocol (PPP) is an open standard protocol that is mostly used to provide connections over point-to-point serial links. The main purpose of PPP is to transport Layer 3 packets over a Data Link layer point-to-point link. PPP can be configured on:
+ Asynchronous serial connections, such as plain old telephone service (POTS) dial-up
+ Synchronous serial connections, such as Integrated Services Digital Network (ISDN) or point-to-point leased lines.

PPP consists of two sub-protocols:
+ Link Control Protocol (LCP): sets up and negotiates control options on the Data Link Layer (OSI Layer 2). After the link is established, PPP hands off to NCP.
+ Network Control Protocol (NCP): negotiates optional configuration parameters for the Network Layer (OSI Layer 3). In other words, it makes sure IP and other protocols can operate correctly over the PPP link.
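
As a minimal sketch (the interface, addresses and CHAP credentials below are just examples), enabling PPP with CHAP authentication on a serial link could look like this; LCP negotiates the authentication option while the link comes up:

R1(config)#username R2 password cisco123 //CHAP credentials for the peer (example values)
R1(config)#interface Serial0/0
R1(config-if)#ip address 10.0.0.1 255.255.255.252
R1(config-if)#encapsulation ppp
R1(config-if)#ppp authentication chap //LCP negotiates the CHAP option during link setup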

PPP_NCP_LCP.jpg

Read more…

Border Gateway Protocol BGP Tutorial

digitaltut No comments

Basic understanding about BGP

We really want to show you why we need BGP first, but it is very difficult to explain without knowing a little about BGP, so we will cover some basic BGP knowledge first.

First we need to understand the difference between Interior Gateway Protocol and Exterior Gateway Protocol, which is shown below:

IGP_EGP.jpg

Interior Gateway Protocol (IGP): a routing protocol operating within an Autonomous System (AS), such as OSPF or EIGRP. Usually routers running an IGP are under the same administration (of a company, corporation, or individual)
Exterior Gateway Protocol (EGP): a routing protocol operating between different autonomous systems. BGP is the only EGP in use nowadays
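
For illustration only (the AS numbers, addresses and prefix below are made up), a basic eBGP peering between routers in two different autonomous systems could be configured like this:

R1(config)#router bgp 65001
R1(config-router)#neighbor 10.0.0.2 remote-as 65002 //peer in a different AS, so this is an eBGP session
R1(config-router)#network 192.168.1.0 mask 255.255.255.0 //advertise a local prefix into BGP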

Read more…

GRE Tunnel Tutorial

digitaltut No comments

GRE stands for Generic Routing Encapsulation, which is a very simple form of tunneling. With GRE we can easily create a virtual link between routers and allow them to be directly connected, even if they physically aren’t. Let’s have a look at the topology below:

GRE_Tunnel.jpg

Suppose R1 and R2 are routers at two far ends of our company. They are connected to two computers that want to communicate. Although R1 and R2 are not physically connected to each other, with a GRE tunnel they appear to be! This is great when you have multiple endpoints and don’t care about the path between them. The routing tables of the two routers show that they are directly connected via the GRE tunnel.
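
A minimal GRE sketch on R1 might look like the following (all addresses are examples only); R2 mirrors the configuration with the tunnel source and destination reversed:

R1(config)#interface Tunnel0
R1(config-if)#ip address 172.16.0.1 255.255.255.252 //overlay address used inside the tunnel
R1(config-if)#tunnel source 203.0.113.1 //R1's physical WAN address
R1(config-if)#tunnel destination 198.51.100.1 //R2's physical WAN address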

Read more…

GLBP Configuration Sim

June 11th, 2022 digitaltut 24 comments

Tasks

Implement GLBP between DISTRO-SW1 and DISTRO-SW2 on VLAN 100 for hosts connected to ACCESS-SW1 to achieve these goals:
1. Configure group 1 using the virtual IP address of 192.168.1.254
2. Configure DISTRO-SW1 as the AVG using a priority value of 110
3. If DISTRO-SW1 suffers a failure and recovers, ensure that it automatically resumes the AVG role after waiting for a minimum of 15 seconds.

Topology

VRRP_topology.jpg

Solution

You can practice this sim with our own online simulator or EVE-NG at:
+ GLBP Configuration online simulator
+ EVE-NG file (Note: Sometimes the VLAN information is missing when you import the EVE-NG lab so you may need to create the VLAN 100 again:
Switch(config)#vlan 100
Switch(config-vlan)#exit //Must exit to apply the VLAN config)

Note: The virtual IP address, priority, delay value … may be different so please check the question carefully.

DISTRO-SW1(config)#interface vlan 100
DISTRO-SW1(config-if)#glbp 1 ip 192.168.1.254
DISTRO-SW1(config-if)#glbp 1 priority 110
DISTRO-SW1(config-if)#glbp 1 preempt delay minimum 15 //Allows this switch to take back the AVG role when it has the higher priority, after waiting a minimum of 15 seconds

DISTRO-SW2(config)#interface vlan 100
DISTRO-SW2(config-if)#glbp 1 ip 192.168.1.254
DISTRO-SW2(config-if)#glbp 1 preempt

Don’t forget to save the configs

DISTRO-SW1#copy running-config startup-config
DISTRO-SW2#copy running-config startup-config
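
To verify (output not shown here), a quick check on DISTRO-SW1 confirms it holds the AVG role for group 1:

DISTRO-SW1#show glbp brief //DISTRO-SW1 should be listed as Active for group 1 with priority 110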

Embedded Event Manager (EEM) Tutorial

digitaltut 16 comments

EEM (Embedded Event Manager) is a software component of Cisco IOS that allows network administrators to automate many tasks. EEM works like an “if {condition} then {action}” statement: if the condition is met, the corresponding actions are performed automatically on the device.

An EEM applet consists of two major components:
+ Event: Defines the event to be monitored
+ Action: Defines action to be taken when the event is triggered

There are three steps to creating an EEM applet.
Step 1. Create the applet and give it a name with the command “event manager applet applet-name”
Step 2 (Optional). Tell the applet what to look out for (optional because some applets do not need to watch for anything), usually with the “event cli pattern” command
Step 3. Define the action to be taken when the event in step 2 is triggered, usually with the “action” or “set” command.
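
As a small example sketch (the applet name and message text are arbitrary), the following applet logs a syslog message whenever someone saves the configuration from the CLI:

R1(config)#event manager applet CONFIG-SAVED //step 1: create and name the applet
R1(config-applet)#event cli pattern "write memory" sync no skip no //step 2: watch for this CLI command
R1(config-applet)#action 1.0 syslog msg "Running configuration was saved via the CLI" //step 3: the action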

Read more…

Architecture Questions

February 7th, 2021 digitaltut 41 comments

Question 1

Explanation

For campus designs requiring simplified configuration, common end-to-end troubleshooting tools, and the fastest convergence, a design using Layer 3 switches in the access layer (routed access) in combination with Layer 3 switching at the distribution layer and core layers provides the most rapid convergence of data and control plane traffic flows.

Routed_access.jpg

Reference: https://www.cisco.com/c/en/us/td/docs/solutions/CVD/Campus/cisco-sda-design-guide.html#Layer_3_Routed_Access_Introduction

Campus fabric runs over arbitrary topologies:
+ Traditional 3-tier hierarchical network
+ Collapsed core/aggregation designs
+ Routed access
+ U-topology

The ideal design is routed access, which allows the fabric to extend to the very edge of the campus network.

Reference: https://www.ciscolive.com/c/dam/r/ciscolive/us/docs/2017/pdf/BRKCRS-2812.pdf

From the above references, we see that the campus infrastructure does not include a two-tier topology.

Question 2

Question 3

Explanation

The difference between on-premises and cloud is essentially where this hardware and software resides. On-premises means that a company keeps all of its IT environment onsite, managed either by itself or by a third party. Cloud means that it is housed offsite, with someone else responsible for monitoring and maintaining it.

Question 4

Question 5

Explanation

Stateful Switchover (SSO) provides protection for network edge devices with dual Route Processors (RPs) that represent a single point of failure in the network design, and where an outage might result in loss of service for customers.

Reference: https://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst6500/ios/12-2SY/configuration/guide/sy_swcg/stateful_switchover.html

Etherchannel Questions

February 7th, 2021 digitaltut 36 comments

Quick overview of EtherChannel:

EtherChannel bundles physical links into one logical link with their combined bandwidth, and it is awesome! STP sees this bundle as a single link, so STP will not block any of the member links. EtherChannel also load-balances traffic among the links in the channel automatically. If a link within the EtherChannel bundle fails, traffic previously carried over the failed link is carried over the remaining links; as long as at least one member link is up, the logical (EtherChannel) link remains up. EtherChannel also works well for router connections:

EtherChannel_router.jpg

When an EtherChannel is created, a logical interface is created on the switches or routers representing that EtherChannel. You can configure this logical interface the way you want, for example, assign access/trunk mode on switches or assign an IP address to the logical interface on routers…

Note: A maximum of 8 Fast Ethernet or 8 Gigabit Ethernet ports can be grouped together when forming an EtherChannel.

There are three mechanisms you can choose from to configure an EtherChannel:
+ Link Aggregation Control Protocol (LACP)
+ Port Aggregation Protocol (PAgP)
+ Static (“On”)

The Port Aggregation Protocol (PAgP) and Link Aggregation Control Protocol (LACP) facilitate the automatic creation of EtherChannels by exchanging packets between Ethernet interfaces. The Port Aggregation Protocol (PAgP) is a Cisco-proprietary solution, and the Link Aggregation Control Protocol (LACP) is standards based.

LACP modes:

LACP dynamically negotiates the formation of a channel. There are two LACP modes:

+ passive: the switch does not initiate the channel but does understand incoming LACP packets
+ active: the switch sends LACP packets and is willing to form a port-channel (a configuration sketch follows the table below)

The table below lists if an EtherChannel will be formed or not for LACP:

LACP Active Passive
Active Yes Yes
Passive Yes No
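
As a configuration sketch only (the interface numbers are examples), bundling two ports into an LACP EtherChannel in active mode could look like this on each switch:

SW1(config)#interface range GigabitEthernet0/1 - 2
SW1(config-if-range)#channel-group 1 mode active //LACP active; the far end can be active or passive
SW1(config-if-range)#exit
SW1(config)#interface Port-channel1
SW1(config-if)#switchport mode trunk //configure the logical interface, not the individual members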

PAgP modes:

+ desirable: the switch sends PAgP packets and is willing to form a port-channel
+ auto: the switch does not start PAgP negotiation but responds to PAgP packets it receives

The table below lists if an EtherChannel will be formed or not for PAgP:

PAgP Desirable Auto
Desirable Yes Yes
Auto Yes No

An EtherChannel in Cisco can be defined as a Layer 2 EtherChannel or a Layer 3 EtherChannel.
+ For Layer 2 EtherChannel, physical ports are placed into an EtherChannel group. A logical port-channel interface will be created automatically. An example of configuring Layer 2 EtherChannel can be found in Question 1 in this article.

+ For Layer 3 EtherChannel, a Layer 3 Switch Virtual Interface (SVI) is created and then the physical ports are bound into this Layer 3 SVI.

Static (“On”)

In this mode, no negotiation is needed. The interfaces become members of the EtherChannel immediately. When using this mode, make sure the other end uses this mode too, because the switches will not check whether port parameters match. Otherwise the EtherChannel may not come up and can cause problems (such as a loop…).

Note: All interfaces in an EtherChannel must be configured identically to form an EtherChannel. Specific settings that must be identical include:
+ Speed settings
+ Duplex settings
+ STP settings
+ VLAN membership (for access ports)
+ Native VLAN (for trunk ports)
+ Allowed VLANs (for trunk ports)
+ Trunking Encapsulation (ISL or 802.1Q, for trunk ports)

Note: EtherChannels will not form if either dynamic VLANs or port security are enabled on the participating EtherChannel interfaces.

For more information about EtherChannel, please read our EtherChannel tutorial.

Question 1

Explanation

There are two PAgP modes:

+ Auto: Responds to PAgP messages but does not aggressively negotiate a PAgP EtherChannel. A channel is formed only if the port on the other end is set to Desirable. This is the default mode.
+ Desirable: The port actively negotiates channeling status with the interface on the other end of the link. A channel is formed if the other side is Auto or Desirable.

The table below lists if an EtherChannel will be formed or not for PAgP:

PAgP Desirable Auto
Desirable Yes Yes
Auto Yes No

Question 2

Explanation

The Cisco switch was configured with PAgP, which is a Cisco-proprietary protocol, so a non-Cisco switch cannot negotiate the channel.

Question 3

Explanation

In the exhibit, no interfaces are shown in the “Ports” field. Interfaces are listed under the “Ports” field even when they are shut down. For example:

Etherchannel_interface_down.jpg

So the most likely cause of this problem is that no port members have been defined for Po1 on SW1.

Trunking Questions

February 6th, 2021 digitaltut 30 comments

Question 1

Explanation

SW3 does not have VLAN 60, so it should not receive traffic for this VLAN (sent from SW2). Therefore we should enable VTP pruning so that SW2 does not forward VLAN 60 traffic to SW3. Also notice that we need to configure pruning on SW1 (the VTP Server), not on SW2 or SW3, because pruning is enabled on the server and propagated through the VTP domain (see the sketch below).
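
A minimal sketch of that step (assuming SW1 is the VTP server, as stated above):

SW1(config)#vtp pruning //enable pruning on the VTP server; it propagates to the clients in the domain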

Question 2

Explanation

From the “show vlan brief” output we learn that Finance belongs to VLAN 110 and that all VLANs (1 to 1005) are allowed to traverse the trunk (Port-channel 1). Therefore we have to remove VLAN 110 from the allowed VLAN list with the “switchport trunk allowed vlan remove 110” command, as sketched below. The pruning feature cannot do this job because the Finance VLAN is active.
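
A hedged sketch of that change (the switch name is illustrative; the Port-channel number comes from the question):

SW1(config)#interface Port-channel1
SW1(config-if)#switchport trunk allowed vlan remove 110 //stop carrying the Finance VLAN on this trunk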

Question 3

SD-WAN & SD-Access Solutions

February 6th, 2021 digitaltut 38 comments

SD-Access Quick summary

There are five basic device roles in the fabric overlay:
+ Control plane node: This node contains the settings, protocols, and mapping tables to provide the endpoint-to-location (EID-to-RLOC) mapping system for the fabric overlay.
+ Fabric border node: This fabric device (for example, core layer device) connects external Layer 3 networks to the SDA fabric.
+ Fabric edge node: This fabric device (for example, access or distribution layer device) connects wired endpoints to the SDA fabric.
+ Fabric WLAN controller (WLC): This fabric device connects APs and wireless endpoints to the SDA fabric.
+ Intermediate nodes: These are intermediate routers or extended switches that do not provide any sort of SD-Access fabric role other than underlay services.

SD_Access_Fabric.jpg

There are three major building blocks that make up SDA: the control plane, the data plane and the policy plane.

+ Control-Plane based on LISP
+ Data-Plane based on VXLAN
+ Policy-Plane based on TrustSec

SD-WAN Quick Summary

The primary components for the Cisco SD-WAN solution consist of the vManage network management system (management plane), the vSmart controller (control plane), the vBond orchestrator (orchestration plane), and the vEdge router (data plane).

+ vManage – This centralized network management system provides a GUI interface to easily monitor, configure, and maintain all Cisco SD-WAN devices and links in the underlay and overlay network.

+ vSmart controller – This software-based component is responsible for the centralized control plane of the SD-WAN network. It establishes a secure connection to each vEdge router and distributes routes and policy information via the Overlay Management Protocol (OMP), acting as a route reflector. It also orchestrates the secure data plane connectivity between the vEdge routers by distributing crypto key information, allowing for a very scalable, IKE-less architecture.

+ vBond orchestrator – This software-based component performs the initial authentication of vEdge devices and orchestrates vSmart and vEdge connectivity. It also has an important role in enabling the communication of devices that sit behind Network Address Translation (NAT).

+ vEdge router – This device, available as either a hardware appliance or software-based router, sits at a physical site or in the cloud and provides secure data plane connectivity among the sites over one or more WAN transports. It is responsible for traffic forwarding, security, encryption, Quality of Service (QoS), routing protocols such as Border Gateway Protocol (BGP) and Open Shortest Path First (OSPF), and more.

SD_WAN_Physical_Architecture.jpg

Cisco SD-WAN uses Overlay Management Protocol (OMP) which manages the overlay network. OMP runs between the vSmart controllers and WAN Edge routers (and among vSmarts themselves) where control plane information, such as the routing, policy, and management information, is exchanged over a secure connection.

VPNs in SD-WAN

In the SD-WAN overlay, virtual private networks (VPNs) provide segmentation. Each VPN is equivalent to a VRF; VPNs are isolated from one another and have their own forwarding tables. An interface or subinterface is explicitly configured under a single VPN and cannot be part of more than one VPN. Devices attached to an interface in one VPN cannot communicate with devices in another VPN unless a policy is put in place to allow it. The VPN ID ranges from 0 to 65535, but several VPNs are reserved for internal use.

The Transport & Management VPNs

There are two implicitly configured VPNs in the WAN Edge devices and controllers: VPN 0 and VPN 512.

VPN 0 is the transport VPN. It contains all the interfaces that connect to the WAN links. Secure DTLS/TLS connections to the controllers are initiated from this VPN. Static or default routes or a dynamic routing protocol needs to be configured inside this VPN in order to get appropriate next-hop information so the control plane can be established and IPsec tunnel traffic can reach remote sites.

VPN 0 connects the WAN Edge to the WAN transport and creates control plane and data plane connections. The WAN Edge device can connect to multiple WAN transport(s) on different interfaces on the same VPN 0 transport segment. At least one interface needs to be configured to initially reach the SD-WAN controllers for onboarding.

VPN 512 is the management VPN. It carries the out-of-band management traffic to and from the Cisco SD-WAN devices. This VPN is ignored by OMP and not carried across the overlay network.

SDWAN_VPNs.jpg

Question 1

Explanation

There are five basic device roles in the fabric overlay:
+ Control plane node: This node contains the settings, protocols, and mapping tables to provide the endpoint-to-location (EID-to-RLOC) mapping system for the fabric overlay.
+ Fabric border node: This fabric device (for example, core layer device) connects external Layer 3 networks to the SDA fabric.
+ Fabric edge node: This fabric device (for example, access or distribution layer device) connects wired endpoints to the SDA fabric.
+ Fabric WLAN controller (WLC): This fabric device connects APs and wireless endpoints to the SDA fabric.
+ Intermediate nodes: These are intermediate routers or extended switches that do not provide any sort of SD-Access fabric role other than underlay services.

SD_Access_Fabric.jpg

Reference: CCNP and CCIE Enterprise Core ENCOR 350-401 Official Cert Guide

Question 2

Explanation

+ Orchestration plane (vBond) assists in securely onboarding the SD-WAN WAN Edge routers into the SD-WAN overlay (-> Therefore answer A mentioned about vBond). The vBond controller, or orchestrator, authenticates and authorizes the SD-WAN components onto the network. The vBond orchestrator takes an added responsibility to distribute the list of vSmart and vManage controller information to the WAN Edge routers. vBond is the only device in SD-WAN that requires a public IP address as it is the first point of contact and authentication for all SD-WAN components to join the SD-WAN fabric. All other components need to know the vBond IP or DNS information.

+ Management plane (vManage) is responsible for central configuration and monitoring. The vManage controller is the centralized network management system that provides a single pane of glass GUI interface to easily deploy, configure, monitor and troubleshoot all Cisco SD-WAN components in the network. (-> Answer C and answer D are about vManage)

+ Control plane (vSmart) builds and maintains the network topology and makes decisions on the traffic flows. The vSmart controller disseminates control plane information between WAN Edge devices, implements control plane policies and distributes data plane policies to network devices for enforcement (-> Answer B is about vSmart)

Question 3

Explanation

The southbound protocol used by APIC is OpFlex, which is pushed by Cisco as the protocol for policy enablement across physical and virtual switches.

Southbound interfaces are implemented with something called the Service Abstraction Layer (SAL), which talks to the network elements via SNMP and CLI.

Note: Cisco OpFlex is a southbound protocol in a software-defined network (SDN).

Question 4

Explanation

Today the Dynamic Network Architecture Software Defined Access (DNA-SDA) solution requires a fusion router to perform VRF route leaking between user VRFs and Shared-Services, which may be in the Global routing table (GRT) or another VRF. Shared Services may consist of DHCP, Domain Name System (DNS), Network Time Protocol (NTP), Wireless LAN Controller (WLC), Identity Services Engine (ISE), DNAC components which must be made available to other virtual networks (VN’s) in the Campus.

Reference: https://www.cisco.com/c/en/us/support/docs/cloud-systems-management/dna-center/213525-sda-steps-to-configure-fusion-router.html

Question 5

Explanation

Fabric mode APs continue to support the same wireless media services that traditional APs support; apply AVC, quality of service (QoS), and other wireless policies; and establish the CAPWAP control plane to the fabric WLC. Fabric APs join as local-mode APs and must be directly connected to the fabric edge node switch to enable fabric registration events, including RLOC assignment via the fabric WLC. The fabric edge nodes use CDP to recognize APs as special wired hosts, applying special port configurations and assigning the APs to a unique overlay network within a common EID space across a fabric. The assignment allows management simplification by using a single subnet to cover the AP infrastructure at a fabric site.

Reference: https://www.cisco.com/c/en/us/td/docs/solutions/CVD/Campus/sda-sdg-2019oct.html

Question 6

Explanation

The tunneling technology used for the fabric data plane is based on Virtual Extensible LAN (VXLAN). VXLAN encapsulation is UDP based, meaning that it can be forwarded by any IP-based network (legacy or third party) and creates the overlay network for the SD-Access fabric. Although LISP is the control plane for the SD-Access fabric, it does not use LISP data encapsulation for the data plane; instead, it uses VXLAN encapsulation because it is capable of encapsulating the original Ethernet header to perform MAC-in-IP encapsulation, while LISP does not. Using VXLAN allows the SD-Access fabric to support Layer 2 and Layer 3 virtual topologies (overlays) and the ability to operate over any IP-based network with built-in network segmentation (VRF instance/VN) and built-in group-based policy.

Reference: CCNP and CCIE Enterprise Core ENCOR 350-401 Official Cert Guide

Question 7

Explanation

Access Points
+ AP is directly connected to FE (or to an extended node switch)
+ AP is part of Fabric overlay

Reference: https://www.ciscolive.com/c/dam/r/ciscolive/us/docs/2018/pdf/BRKEWN-2020.pdf

Question 8

Explanation

The primary components for the Cisco SD-WAN solution consist of the vManage network management system (management plane), the vSmart controller (control plane), the vBond orchestrator (orchestration plane), and the vEdge router (data plane).

+ vManage – This centralized network management system provides a GUI interface to easily monitor, configure, and maintain all Cisco SD-WAN devices and links in the underlay and overlay network.

+ vSmart controller – This software-based component is responsible for the centralized control plane of the SD-WAN network. It establishes a secure connection to each vEdge router and distributes routes and policy information via the Overlay Management Protocol (OMP), acting as a route reflector. It also orchestrates the secure data plane connectivity between the vEdge routers by distributing crypto key information, allowing for a very scalable, IKE-less architecture.

+ vBond orchestrator – This software-based component performs the initial authentication of vEdge devices and orchestrates vSmart and vEdge connectivity. It also has an important role in enabling the communication of devices that sit behind Network Address Translation (NAT).

+ vEdge router – This device, available as either a hardware appliance or software-based router, sits at a physical site or in the cloud and provides secure data plane connectivity among the sites over one or more WAN transports. It is responsible for traffic forwarding, security, encryption, Quality of Service (QoS), routing protocols such as Border Gateway Protocol (BGP) and Open Shortest Path First (OSPF), and more.

Reference: https://www.cisco.com/c/dam/en/us/td/docs/solutions/CVD/SDWAN/CVD-SD-WAN-Design-2018OCT.pdf

Question 9

Question 10

Explanation

There are five basic device roles in the fabric overlay:
+ Control plane node: This node contains the settings, protocols, and mapping tables to provide the endpoint-to-location (EID-to-RLOC) mapping system for the fabric overlay.
+ Fabric border node: This fabric device (for example, core layer device) connects external Layer 3 networks to the SDA fabric.
+ Fabric edge node: This fabric device (for example, access or distribution layer device) connects wired endpoints to the SDA fabric.
+ Fabric WLAN controller (WLC): This fabric device connects APs and wireless endpoints to the SDA fabric.
+ Intermediate nodes: These are intermediate routers or extended switches that do not provide any sort of SD-Access fabric role other than underlay services.

SD_Access_Fabric.jpg

Reference: CCNP and CCIE Enterprise Core ENCOR 350-401 Official Cert Guide

SD-WAN & SD-Access Solutions 2

February 6th, 2021 digitaltut 5 comments

Question 1

Explanation

+ Orchestration plane (vBond) assists in securely onboarding the SD-WAN WAN Edge routers into the SD-WAN overlay. The vBond controller, or orchestrator, authenticates and authorizes the SD-WAN components onto the network. The vBond orchestrator takes an added responsibility to distribute the list of vSmart and vManage controller information to the WAN Edge routers. vBond is the only device in SD-WAN that requires a public IP address as it is the first point of contact and authentication for all SD-WAN components to join the SD-WAN fabric. All other components need to know the vBond IP or DNS information.

Question 2

Explanation

+ Fabric edge node: This fabric device (for example, access or distribution layer device) connects wired endpoints to the SDA fabric.

SD_Access_Fabric.jpg

Question 3

Explanation

+ Control plane (vSmart) builds and maintains the network topology and makes decisions on the traffic flows. The vSmart controller disseminates control plane information between WAN Edge devices, implements control plane policies and distributes data plane policies to network devices for enforcement.

Question 4

Explanation

BFD (Bidirectional Forwarding Detection) is a protocol that detects link failures as part of the Cisco SD-WAN (Viptela) high availability solution. It is enabled by default on all vEdge routers and cannot be disabled.

Question 5

Explanation

An overlay network creates a logical topology used to virtually connect devices that are built over an arbitrary physical underlay topology.

An overlay network is created on top of the underlay network through virtualization (virtual networks). The data plane traffic and control plane signaling are contained within each virtualized network, maintaining isolation among the networks and an independence from the underlay network.

SD-Access allows for the extension of Layer 2 and Layer 3 connectivity across the overlay through the services provided by LISP.

Reference: https://www.cisco.com/c/en/us/td/docs/solutions/CVD/Campus/cisco-sda-design-guide.html

Question 6

Explanation

Control plane (vSmart) builds and maintains the network topology and makes decisions on the traffic flows. The vSmart controller disseminates control plane information between WAN Edge devices, implements control plane policies and distributes data plane policies to network devices for enforcement.

Question 7

Question 8

Explanation

Access Points
+ AP is directly connected to FE (or to an extended node switch)
+ AP is part of Fabric overlay

Reference: https://www.ciscolive.com/c/dam/r/ciscolive/us/docs/2018/pdf/BRKEWN-2020.pdf

Question 9

Explanation

Fabric control plane node (C): One or more network elements that implement the LISP Map-Server (MS) and Map-Resolver (MR) functionality. The control plane node’s host tracking database keeps track of all endpoints in a fabric site and associates the endpoints to fabric nodes in what is known as an EID-to-RLOC binding in LISP.

Reference: https://www.cisco.com/c/en/us/td/docs/solutions/CVD/Campus/cisco-sda-macro-segmentation-deploy-guide.html

Question 10

Explanation

SD-Access Wireless Architecture Control Plane Node – A Closer Look

Fabric Control-Plane Node is based on a LISP Map Server / Resolver

Runs the LISP Endpoint ID Database to provide overlay reachability information
+ A simple host database that tracks Endpoint ID to Edge Node bindings (RLOCs)
+ Host Database supports multiple types of Endpoint ID (EID), such as IPv4 /32, IPv6 /128* or MAC/48
+ Receives prefix registrations from Edge Nodes for wired clients, and from Fabric mode WLCs for wireless clients
+ Resolves lookup requests from FE to locate Endpoints
+ Updates Fabric Edge nodes, Border nodes with wireless client mobility and RLOC information

Reference: https://www.ciscolive.com/c/dam/r/ciscolive/latam/docs/2018/pdf/BRKEWN-2020.pdf

SD-WAN & SD-Access Solutions 3

February 6th, 2021 digitaltut 8 comments

Question 1

Explanation

Cisco SD-WAN uses Overlay Management Protocol (OMP) which manages the overlay network. OMP runs between the vSmart controllers and WAN Edge routers (and among vSmarts themselves) where control plane information, such as the routing, policy, and management information, is exchanged over a secure connection.

Question 2

Explanation

SDA supports two additional types of roaming: Intra-xTR and Inter-xTR. In SDA, xTR stands for an access switch that is a fabric edge node. It serves both as an ingress tunnel router and as an egress tunnel router.

When a client on a fabric-enabled WLAN roams from one access point to another access point on the same access switch, it is called Intra-xTR. Here, the local client database and client history table are updated with the information of the newly associated access point.

When a client on a fabric-enabled WLAN roams from one access point to another access point on a different access switch, it is called Inter-xTR. Here, the map server is also updated with the client location (RLOC) information. Also, the local client database is updated with the information of the newly associated access point.

Reference: https://www.cisco.com/c/en/us/td/docs/wireless/controller/9800/config-guide/b_wl_16_10_cg/mobility.html

Question 3

Question 4

Explanation

The tunneling technology used for the fabric data plane is based on Virtual Extensible LAN (VXLAN). VXLAN encapsulation is UDP based, meaning that it can be forwarded by any IP-based network (legacy or third party) and creates the overlay network for the SD-Access fabric. Although LISP is the control plane for the SD-Access fabric, it does not use LISP data encapsulation for the data plane; instead, it uses VXLAN encapsulation because it is capable of encapsulating the original Ethernet header to perform MAC-in-IP encapsulation, while LISP does not. Using VXLAN allows the SD-Access fabric to support Layer 2 and Layer 3 virtual topologies (overlays) and the ability to operate over any IP-based network with built-in network segmentation (VRF instance/VN) and built-in group-based policy.

Reference: CCNP and CCIE Enterprise Core ENCOR 350-401 Official Cert Guide

QoS Questions

February 5th, 2021 digitaltut 7 comments

QoS quick summary:

1. Network factors:
+ Bandwidth: the speed of the link (or the capacity available on the link), usually measured in bits per second (bps)
+ Delay (or latency): how long a packet takes to get from the sender to the receiver. The greater the delay, the slower the network. Delay is usually measured in milliseconds (ms)
+ Jitter: a measure of the variation in delay between packets. For example, if one packet needs 50 ms to reach B from A while another packet takes 40 ms, the jitter is 10 ms
+ Loss: as packets travel to the destination, some of them may be lost.

2. QoS Models:
+ Best Effort: No QoS policies applied
+ Integrated Services (IntServ): Resource Reservation Protocol (RSVP) is used to reserve bandwidth
+ Differentiated Services (DiffServ): Packets are classified and marked individually; policy decisions are made independently by each node in a path.

3. QoS Markings:

Marking is a method that you use to modify the QoS fields of the incoming and outgoing packets. The QoS fields that you can mark are CoS in Layer 2 (both access and trunk traffic), and IP precedence and Differentiated Service Code Point (DSCP) in Layer 3.

Layer 2 marking is of two types:
+ Traffic on a trunk link with an 802.1Q header: the 802.1Q tag adds 4 bytes to the Layer 2 header to carry the VLAN ID. Within those 4 bytes, 3 bits are called the Priority (PRI) bits. These bits provide 8 combinations (2^3) and are termed CoS (Class of Service) values.
+ Traffic on an access link without 802.1Q: we need to use 802.1p marking instead. The 802.1p tag is similar to the 802.1Q tag (4 bytes); it sets the PRI value according to the CoS marking and leaves the VLAN ID as all zeros.

Layer-3 marking is accomplished using the 8-bit Type of Service (ToS) field, part of the IP header. A mark in this field will remain unchanged as it travels from hop-to-hop, unless a Layer-3 device is explicitly configured to overwrite this field. There are two marking methods that use the ToS field:
+ IP Precedence: uses the first three bits of the ToS field.
+ Differentiated Service Code Point (DSCP): uses the first six bits of the ToS field. When using DSCP, the ToS field is often referred to as the Differentiated Services (DS) field.

TOS.png

4. QoS terms:

+ Classification: This involves categorizing network traffic into different groups based on specific criteria like IP address, protocol, port, or application type.
+ Marking: allows you to mark (set or change) a value (attribute) for the traffic belonging to a specific class
+ Queuing: entails holding packets in a queue and scheduling their transmission based on priority.
+ Policing: used to control the rate of traffic flowing across an interface. When traffic exceeds the maximum configured rate, the excess traffic is generally dropped or remarked. The result of traffic policing is an output rate that appears as a saw-tooth with crests and troughs. Traffic policing can be applied to inbound and outbound interfaces. Unlike traffic shaping, QoS policing avoids delays due to queuing. Policing is configured in bytes. (A configuration sketch contrasting policing and shaping follows at the end of this section.)
+ Congestion: occurs when network bandwidth is insufficient to accommodate all traffic.
+ Shaping: retains excess packets in a queue and then schedules the excess for later transmission over increments of time. When traffic reaches the maximum configured rate, additional packets are queued instead of being dropped and are sent later. Traffic shaping is applicable only on outbound interfaces, because buffering and queuing happen only on outbound interfaces. Shaping is configured in bits per second.

The primary reasons you would use traffic shaping are to control access to available bandwidth, to ensure that traffic conforms to the policies established for it, and to regulate the flow of traffic in order to avoid congestion that can occur when the sent traffic exceeds the access speed of its remote, target interface.

traffic_policing_vs_shaping.jpg

+ Tail drop: When the queue is full, the packet is dropped. This is the default behavior.
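
To make the policing/shaping contrast concrete, here is a rough MQC sketch (the class name, rates and ACL number are placeholders): the BULK class is policed to 1 Mbps with the excess dropped, while the default class is shaped to 5 Mbps so the excess is buffered and sent later.

R1(config)#class-map match-all BULK
R1(config-cmap)#match access-group 101 //ACL 101 is assumed to match the bulk traffic
R1(config-cmap)#exit
R1(config)#policy-map WAN-OUT
R1(config-pmap)#class BULK
R1(config-pmap-c)#police 1000000 conform-action transmit exceed-action drop //policing: excess is dropped
R1(config-pmap-c)#exit
R1(config-pmap)#class class-default
R1(config-pmap-c)#shape average 5000000 //shaping: excess is queued and sent later
R1(config-pmap-c)#exit
R1(config-pmap)#exit
R1(config)#interface GigabitEthernet0/0
R1(config-if)#service-policy output WAN-OUT //shaping and queuing apply in the outbound direction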

5. Congestion Management (types of queuing): uses the marking on each packet to determine which queue to place packets in

First-in, first-out (FIFO): FIFO entails no concept of priority or classes of traffic. With FIFO, transmission of packets out the interface occurs in the order the packets arrive, which means no QoS.
Priority Queuing (PQ): This type of queuing places traffic into one of four queues. Each queue has a different level of priority, and higher-priority queues must be emptied before packets are emptied from lower-priority queues. This behavior can “starve out” lower-priority traffic.
Custom Queuing (CQ): provides specific traffic a guaranteed bandwidth at a potential congestion point, assuring the traffic a fixed portion of available bandwidth and leaving the remaining bandwidth to other traffic.
Weighted fair queueing (WFQ): allocates bandwidths to flows based on the weight. In addition, to allocate bandwidths fairly to flows, WFQ schedules packets in bits (not bytes). This prevents long packets from preempting bandwidths of short packets and reduces the delay and jitter when both short and long packets wait to be forwarded.

Class-based weighted fair queueing (CBWFQ) extends the standard WFQ functionality to provide support for user-defined traffic classes. For CBWFQ, you define traffic classes based on match criteria including protocols, access control lists (ACLs), and input interfaces. Packets satisfying the match criteria for a class constitute the traffic for that class. A queue is reserved for each class, and traffic belonging to a class is directed to the queue for that class.

Once a class has been defined according to its match criteria, you can assign it characteristics. To characterize a class, you assign it bandwidth, weight, and maximum packet limit. The bandwidth assigned to a class is the guaranteed bandwidth delivered to the class during congestion.

CBWFQ.jpg

Low latency queueing (LLQ), sometimes referred to as PQ-CBWFQ: brings strict priority queuing (PQ) to CBWFQ. Strict PQ allows delay-sensitive packets such as voice to be sent before packets in other queues. LLQ reduces jitter in voice conversations.


Low_Latency_Queuing.jpg
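
A minimal CBWFQ + LLQ sketch (class names and rates are examples): traffic marked DSCP EF goes into the strict-priority queue, while the default class receives a guaranteed bandwidth during congestion.

R1(config)#class-map match-all VOICE
R1(config-cmap)#match dscp ef
R1(config-cmap)#exit
R1(config)#policy-map EDGE-OUT
R1(config-pmap)#class VOICE
R1(config-pmap-c)#priority 512 //LLQ: strict-priority queue, limited to 512 kbps under congestion
R1(config-pmap-c)#exit
R1(config-pmap)#class class-default
R1(config-pmap-c)#bandwidth 1024 //CBWFQ: guaranteed 1024 kbps during congestion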

The Resource Reservation Protocol (RSVP) allows applications to reserve bandwidth for their data flows. It is used by a host, on behalf of an application data flow, to request a specific amount of bandwidth from the network. RSVP is also used by routers to forward bandwidth reservation requests.

Question 1

Question 2

Explanation

Weighted Random Early Detection (WRED) is just a congestion avoidance mechanism. WRED drops packets selectively based on IP precedence. Edge routers assign IP precedences to packets as they enter the network. When a packet arrives, the following events occur:

1. The average queue size is calculated.
2. If the average is less than the minimum queue threshold, the arriving packet is queued.
3. If the average is between the minimum queue threshold for that type of traffic and the maximum threshold for the interface, the packet is either dropped or queued, depending on the packet drop probability for that type of traffic.
4. If the average queue size is greater than the maximum threshold, the packet is dropped.

WRED reduces the chances of tail drop (when the queue is full, the packet is dropped) by selectively dropping packets when the output interface begins to show signs of congestion (thus it can mitigate congestion by preventing the queue from filling up). By dropping some packets early rather than waiting until the queue is full, WRED avoids dropping large numbers of packets at once and minimizes the chances of global synchronization. Thus, WRED allows the transmission line to be used fully at all times.

WRED generally drops packets selectively based on IP precedence. Packets with a higher IP precedence are less likely to be dropped than packets with a lower precedence. Thus, the higher the priority of a packet, the higher the probability that the packet will be delivered.

Reference: https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/qos_conavd/configuration/15-mt/qos-conavd-15-mt-book/qos-conavd-cfg-wred.html

WRED is only useful when the bulk of the traffic is TCP/IP traffic. With TCP, dropped packets indicate congestion, so the packet source will reduce its transmission rate. With other protocols, packet sources may not respond or may resend dropped packets at the same rate. Thus, dropping packets does not decrease congestion.

Reference: https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/qos_conavd/configuration/xe-16/qos-conavd-xe-16-book/qos-conavd-oview.html

Note: Global synchronization occurs when multiple TCP hosts reduce their transmission rates in response to congestion. When congestion is relieved, the TCP hosts all try to increase their transmission rates again at roughly the same time (the slow-start algorithm), which causes congestion again. Global synchronization produces this graph:

TCP_Global_Synchronization.jpg

Question 3

Explanation

QoS Packet Marking refers to changing a field within a packet either at Layer 2 (802.1Q/p CoS, MPLS EXP) or Layer 3 (IP Precedence, DSCP and/or IP ECN).

Reference: https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/qos_mqc/configuration/xe-16/qos-mqc-xe-16-book/qos-mrkg.html

Question 4

Explanation

Cisco routers allow you to mark two internal values (qos-group and discard-class) that travel with the packet within the router but do not modify the packet’s contents.

Reference: https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/qos_mqc/configuration/xe-16-6/qos-mqc-xe-16-6-book/qos-mrkg.html

Question 5

Explanation

Traffic shaping retains excess packets in a queue and then schedules the excess for later transmission over increments of time. The result of traffic shaping is a smoothed packet output rate.

traffic_policing_vs_shaping.jpg

Question 6

Explanation

Traffic policing propagates bursts. When the traffic rate reaches the configured maximum rate (or committed information rate), excess traffic is dropped (or remarked). The result is an output rate that appears as a saw-tooth with crests and troughs.

Unlike traffic shaping, traffic policing does not cause delay.

Classification (which includes traffic policing, traffic shaping and queuing techniques) should take place at the network edge. It is recommended that classification occur as close to the source of the traffic as possible.

A Cisco document also recommends “policing traffic as close to the source as possible”.

Question 7

Explanation

CoS value 5 is commonly used for VoIP, and CoS 5 should be mapped to DSCP 46. DSCP 46 is defined as EF (Expedited Forwarding) and is the value usually assigned to all interactive voice and video traffic. This keeps the marking uniform end to end: DSCP EF (mostly voice RTP) corresponds to CoS 5.

Note:

+ CoS is a Layer 2 marking contained within an 802.1Q tag. The values for CoS are 0 – 7
+ DSCP is a Layer 3 marking and has values 0 – 63
+ The default CoS-to-DSCP mapping for CoS 5 is DSCP 40
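
On Catalyst platforms that use the MLS QoS model (a platform-dependent sketch; the values shown are examples), the default CoS-to-DSCP map can be overridden so that CoS 5 maps to DSCP 46 (EF) instead of DSCP 40:

Switch(config)#mls qos //enable QoS globally on the switch
Switch(config)#mls qos map cos-dscp 0 8 16 24 32 46 48 56 //eight DSCP values for CoS 0-7; CoS 5 now maps to DSCP 46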

Question 8

Explanation

First-in, first-out (FIFO): FIFO entails no concept of priority or classes of traffic. With FIFO, transmission of packets out the interface occurs in the order the packets arrive, which means no QoS.

Switching Mechanism Questions

February 4th, 2021 digitaltut 13 comments

Packet Switching

Packet Switching is the method of moving a packet from a router’s input interface to an output interface. There are three main packet switching methods: Process Switching, Fast Switching and Cisco Express Forwarding (CEF).

Process switching is the oldest and slowest switching method because, for every single packet, the router must examine the routing table, determine which interface the packet should be switched to, and then switch the packet. The router CPU is responsible for choosing the appropriate process to handle the packet and for scheduling when that process runs. With this kind of switching, it was clear that the router could not handle packets fast enough to attain the speeds needed as traffic flows increased at a rapid pace across networks. So an idea was born to solve this problem: why not cache the results of the IP next-hop and Layer 2 lookups and use them to switch the next packet towards a given destination? This concept gave birth to fast switching.

Fast switching relies on the idea of caching the routing decision of the first packet (made via process switching) and then applying it to subsequent packets without recalculating. The first packet is copied to packet memory, and if the destination network is found in the fast-switching cache, the frame is rewritten and sent to the outgoing interface. If the destination address is not present in the fast-switching cache, the packet is returned to the process-switching path, where the processor attempts to build a cache entry that can be used to forward packets to the destination.

CEF switching is a Cisco-proprietary, advanced Layer 3 IP switching mechanism that was designed to tackle the deficiencies associated with fast switching. CEF optimizes performance, scalability, and resiliency for large and complex networks with dynamic traffic patterns. CEF’s retrieval and packet forwarding technique is less CPU intensive than process or fast switching. This results in higher throughput when CEF is enabled.

CEF Quick summary

Cisco Express Forwarding (CEF) provides the ability to switch packets through a device in a very quick and efficient way while also keeping the load on the router’s processor low. CEF is made up of two different main components: the Forwarding Information Base (FIB) and the Adjacency Table. These are automatically updated at the same time as the routing table.

The adjacency table is tasked with maintaining the layer 2 next-hop information for the FIB.

RIB vs FIB

Each routing protocol, such as OSPF or EIGRP, has its own Routing Information Base (RIB), and each selects its best candidates to try to install into the global RIB so that they can then be used for forwarding. In order to view the RIB of each protocol, use the command “show ip ospf database” for OSPF, “show ip eigrp topology” for EIGRP or “show ip bgp” for BGP. To view the Forwarding Information Base (FIB), use the “show ip cef” command. The RIB is in the control plane while the FIB is in the data plane.

The Forwarding Information Base (FIB) contains destination reachability information as well as next hop information. This information is then used by the router to make forwarding decisions. The FIB allows for very efficient and easy lookups. Below is an example of the FIB table:

show_ip_cef.jpg

The FIB maintains next-hop address information based on the information in the IP routing table (RIB). In other words, FIB is a mirror copy of RIB.

RIB is in Control plane (and it is not used for forwarding) while FIB is in Data plane (and it is used for forwarding).
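
A few verification commands tie this together (sample output omitted):

R1#show ip route //global RIB (control plane)
R1#show ip cef //FIB (data plane), built from the RIB
R1#show adjacency detail //CEF adjacency table with the Layer 2 rewrite information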

Question 1

Explanation

Cisco Express Forwarding (CEF) provides the ability to switch packets through a device in a very quick and efficient way while also keeping the load on the router’s processor low. CEF is made up of two different main components: the Forwarding Information Base (FIB) and the Adjacency Table. These are automatically updated at the same time as the routing table.

The Forwarding Information Base (FIB) contains destination reachability information as well as next hop information. This information is then used by the router to make forwarding decisions. The FIB allows for very efficient and easy lookups. Below is an example of the FIB table:

show_ip_cef.jpg

The adjacency table is tasked with maintaining the layer 2 next-hop information for the FIB. An example of the adjacency table is shown below:

show_adjacency.jpg

Note: A fast cache is only used when fast switching is enabled while CEF is disabled.

Question 2

Explanation

Cisco IOS software basically supports two modes of CEF load balancing: per-destination or per-packet.

For per-destination load balancing, a hash is computed from the source and destination IP addresses (-> Answer E is correct). This hash points to exactly one of the adjacency entries in the adjacency table (-> Answer D is correct), ensuring that the same path is used for all packets with this source/destination address pair. If per-packet load balancing is used, the packets are distributed round-robin over the available paths. In either case the information in the FIB and adjacency tables provides all the necessary forwarding information, just as for non-load-balancing operation.

The number of paths used is limited by the number of entries the routing protocol puts in the routing table; the default in IOS is 4 entries for most IP routing protocols, with the exception of BGP, where it is one entry. The maximum number that can be configured is 6 different paths -> Answer A is not correct.

Reference: https://www.cisco.com/en/US/products/hw/modules/ps2033/prod_technical_reference09186a00800afeb7.html

Question 3

Explanation

The Forwarding Information Base (FIB) table – CEF uses a FIB to make IP destination prefix-based switching decisions. The FIB is conceptually similar to a routing table or information base. It maintains a mirror image of the forwarding information contained in the IP routing table. When routing or topology changes occur in the network, the IP routing table is updated, and these changes are reflected in the FIB. The FIB maintains next-hop address information based on the information in the IP routing table.

Reference: https://www.cisco.com/c/en/us/support/docs/routers/12000-series-routers/47321-ciscoef.html

Question 4

Explanation

CEF uses a Forwarding Information Base (FIB) to make IP destination prefix-based switching decisions. The FIB is conceptually similar to a routing table or information base. It maintains a mirror image of the forwarding information contained in the IP routing table. When routing or topology changes occur in the network, the IP routing table is updated, and those changes are reflected in the FIB. The FIB maintains next-hop address information based on the information in the IP routing table. Because there is a one-to-one correlation between FIB entries and routing table entries, the FIB contains all known routes and eliminates the need for route cache maintenance that is associated with earlier switching paths such as fast switching and optimum switching.

Note: In order to view the Routing information base (RIB) table, use the “show ip route” command. To view the Forwarding Information Base (FIB), use the “show ip cef” command. RIB is in Control plane while FIB is in Data plane.

Question 5

Explanation

Both answer A and answer C in this question are correct. It is hard to say which correct answer is better.

Question 6

Explanation

“Punt” is often used to describe the action of moving a packet from the fast path (CEF) to the route processor for handling.

Cisco Express Forwarding (CEF) provides the ability to switch packets through a device in a very quick and efficient way while also keeping the load on the router’s processor low. CEF is made up of two different main components: the Forwarding Information Base (FIB) and the Adjacency Table.

Process switching is the slowest switching method (compared to fast switching and Cisco Express Forwarding) because it must find a destination in the routing table. Process switching must also construct a new Layer 2 frame header for every packet. With process switching, when a packet comes in, the scheduler calls a process that examines the routing table, determines which interface the packet should be switched to, and then switches the packet. The problem is that this happens for every packet.

Reference: http://www.cisco.com/web/about/security/intelligence/acl-logging.html

Question 7

Explanation

The Forwarding Information Base (FIB) contains destination reachability information as well as next hop information. This information is then used by the router to make forwarding decisions. The FIB allows for very efficient and easy lookups. Below is an example of the FIB table:

show_ip_cef.jpg

The FIB maintains next-hop address information based on the information in the IP routing table (RIB).

Note: In order to view the Routing information base (RIB) table, use the “show ip route” command. To view the Forwarding Information Base (FIB), use the “show ip cef” command. RIB is in Control plane while FIB is in Data plane.

Virtualization Questions

February 3rd, 2021 digitaltut 34 comments

Virtualization Quick Summary

A virtual machine (VM) is a software emulation of a physical server with an operating system. From an application’s point of view, the VM provides the look and feel of a real physical server, including all its components, such as CPU, memory, and network interface cards (NICs).

A hypervisor, also known as a virtual machine monitor, is a software that creates and manages virtual machines. A hypervisor allows one physical server to support multiple guest VMs by virtually sharing its resources, such as memory and processing.

There are two types of hypervisors: type 1 and type 2 hypervisor.

In a type 1 hypervisor (or native hypervisor), the hypervisor is installed directly on the physical server. Then instances of an operating system (OS) are installed on the hypervisor. A type 1 hypervisor has direct access to the hardware resources, so it is more efficient than hosted architectures. Some examples of type 1 hypervisors are VMware vSphere/ESXi, Oracle VM Server, KVM and Microsoft Hyper-V.

In contrast to type 1 hypervisor, a type 2 hypervisor (or hosted hypervisor) runs on top of an operating system and not the physical hardware directly. A big advantage of Type 2 hypervisors is that management console software is not required. Examples of type 2 hypervisor are VMware Workstation (which can run on Windows, Mac and Linux) or Microsoft Virtual PC (only runs on Windows).

Type1_Type2_Hypervisors_detail.jpg

Comparison Type 1 and Type 2 hypervisors

+ Other name: bare-metal hypervisor (Type 1); hosted hypervisor (Type 2)
+ Runs on: underlying physical host machine hardware (Type 1); underlying operating system, i.e. the host OS (Type 2)
+ Best suited for: large, resource-intensive, or fixed-use workloads (Type 1); desktop and development environments (Type 2)
+ Can negotiate dedicated resources: yes (Type 1); no (Type 2)
+ Knowledge required: system administrator-level knowledge (Type 1); basic user knowledge (Type 2)
+ Examples: VMware ESXi, Microsoft Hyper-V, KVM (Type 1); Oracle VM VirtualBox, VMware Workstation, Microsoft Virtual PC (Type 2)

Structure of virtualization in a hypervisor

Hypervisors provide virtual switch (vSwitch) that Virtual Machines (VMs) use to communicate with other VMs on the same host. The vSwitch may also be connected to the host’s physical NIC to allow VMs to get layer 2 access to the outside world.

Each VM is provided with a virtual NIC (vNIC) that is connected to the virtual switch. Multiple vNICs can connect to a single vSwitch, allowing VMs on a physical host to communicate with one another at layer 2 without having to go out to a physical switch.

 

Virtual_machine_structure.jpg

Although the vSwitch does not run Spanning Tree Protocol, it implements other loop prevention mechanisms. For example, a frame that enters from one VMNIC will not go out of the physical host through a different VMNIC card.

Benefits of Virtualizing

Server virtualization and the use of virtual machines is profoundly changing data center dynamics. Most organizations are struggling with the cost and complexity of hosting multiple physical servers in their data centers. The expansion of the data center, a result of both scale-out server architectures and traditional “one application, one server” sprawl, has created problems in housing, powering, and cooling large numbers of underutilized servers. In addition, IT organizations continue to deal with the traditional cost and operational challenges of matching server resources to organizational needs that seem fickle and ever changing.

Virtual machines can significantly mitigate many of these challenges by enabling multiple application and operating system environments to be hosted on a single physical server while maintaining complete isolation between the guest operating systems and their respective applications. Hence, server virtualization facilitates server consolidation by enabling organizations to exchange a number of underutilized servers for a single highly utilized server running multiple virtual machines.

By consolidating multiple physical servers, organizations can gain several benefits:
+ Underutilized servers can be retired or redeployed.
+ Rack space can be reclaimed.
+ Power and cooling loads can be reduced.
+ New virtual servers can be rapidly deployed.
+ CapEx (higher utilization means fewer servers need to be purchased) and OpEx (fewer servers means a simpler environment and lower maintenance costs) can be reduced.

Para-virtualization

Para-virtualization is an enhancement of virtualization technology in which a guest operating system (guest OS) is modified prior to installation inside a virtual machine. This allows all guest OS within the system to share resources and successfully collaborate, rather than attempt to emulate an entire hardware environment. The modification also decreases the execution time required to complete operations that can be problematic in virtual environments.

Paravirtualization.jpg

By granting the guest OS access to the underlying hardware, Para-virtualization enables communication between the guest OS and the hypervisor (using API calls), thus improving performance and efficiency within the system. This is the main difference between Para-virtualization and (traditional) full-virtualization.

Question 1

Explanation

There is nothing special about the configuration of Gi0/0 on R1. Only the Gi0/0 interface on R2 is assigned to VRF VPN_A. The default VRF here is similar to the global routing table concept in Cisco IOS.
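For reference, assigning an interface to a VRF in VRF-Lite generally looks like the sketch below; the VRF name VPN_A comes from the question, while the RD value and IP address are assumptions for illustration only:

R2(config)#ip vrf VPN_A
R2(config-vrf)#rd 100:1
R2(config-vrf)#exit
R2(config)#interface GigabitEthernet0/0
R2(config-if)#ip vrf forwarding VPN_A
R2(config-if)#ip address 192.168.1.1 255.255.255.0

Note that “ip vrf forwarding” removes any IP address already configured on the interface, which is why the address is (re)applied afterwards.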

Question 2

Explanation

Answer C and answer D are not correct as only route distinguisher (RD) identifies the customer routing table and “allows customers to be assigned overlapping addresses”.

Answer A is not correct as “When BGP is configured, route targets are transmitted as BGP extended communities”

Question 3

Explanation

In VRF-Lite, Route distinguisher (RD) identifies the customer routing table and allows customers to be assigned overlapping addresses. Therefore it can support multiple customers with overlapping addresses -> Answer E is correct.

VRFs are commonly used for MPLS deployments, when we use VRFs without MPLS then we call it VRF lite -> Answer C is not correct.

– VRF-lite does not support IGRP and ISIS. ( -> Answer B is not correct)
– The capability vrf-lite subcommand under router ospf should be used when configuring OSPF as the routing protocol between the PE and the CE.
– VRF-lite does not affect the packet switching rate. (-> Answer A is not correct)

Reference: https://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst4500/12-2/25ew/configuration/guide/conf/vrf.html#wp1045190

Question 4

Explanation

There are two types of hypervisors: type 1 and type 2 hypervisor.

In type 1 hypervisor (or native hypervisor), the hypervisor is installed directly on the physical server. Then instances of an operating system (OS) are installed on the hypervisor. Type 1 hypervisor has direct access to the hardware resources. Therefore they are more efficient than hosted architectures. Some examples of type 1 hypervisor are VMware vSphere/ESXi, Oracle VM Server, KVM and Microsoft Hyper-V.

In contrast to type 1 hypervisor, a type 2 hypervisor (or hosted hypervisor) runs on top of an operating system and not the physical hardware directly. A big advantage of Type 2 hypervisors is that management console software is not required. Examples of type 2 hypervisor are VMware Workstation (which can run on Windows, Mac and Linux) or Microsoft Virtual PC (only runs on Windows).

Type1_Type2_Hypervisors.jpg

Question 5

Explanation

Server virtualization and the use of virtual machines is profoundly changing data center dynamics. Most organizations are struggling with the cost and complexity of hosting multiple physical servers in their data centers. The expansion of the data center, a result of both scale-out server architectures and traditional “one application, one server” sprawl, has created problems in housing, powering, and cooling large numbers of underutilized servers. In addition, IT organizations continue to deal with the traditional cost and operational challenges of matching server resources to organizational needs that seem fickle and ever changing.

Virtual machines can significantly mitigate many of these challenges by enabling multiple application and operating system environments to be hosted on a single physical server while maintaining complete isolation between the guest operating systems and their respective applications. Hence, server virtualization facilitates server consolidation by enabling organizations to exchange a number of underutilized servers for a single highly utilized server running multiple virtual machines.

By consolidating multiple physical servers, organizations can gain several benefits:
+ Underutilized servers can be retired or redeployed.
+ Rack space can be reclaimed.
+ Power and cooling loads can be reduced.
+ New virtual servers can be rapidly deployed.
+ CapEx (higher utilization means fewer servers need to be purchased) and OpEx (fewer servers means a simpler environment and lower maintenance costs) can be reduced.

Reference: https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/net_implementation_white_paper0900aecd806a9c05.html

Question 6

Explanation

A virtual machine (VM) is a software emulation of a physical server with an operating system. From an application’s point of view, the VM provides the look
and feel of a real physical server, including all its components, such as CPU, memory, and network interface cards (NICs).

The virtualization software that creates VMs and performs the hardware abstraction that allows multiple VMs to run concurrently is known as a hypervisor.

There are two types of hypervisors: type 1 and type 2 hypervisor.

In type 1 hypervisor (or native hypervisor), the hypervisor is installed directly on the physical server. Then instances of an operating system (OS) are installed on the hypervisor. Type 1 hypervisor has direct access to the hardware resources. Therefore they are more efficient than hosted architectures. Some examples of type 1 hypervisor are VMware vSphere/ESXi, Oracle VM Server, KVM and Microsoft Hyper-V.

In contrast to type 1 hypervisor, a type 2 hypervisor (or hosted hypervisor) runs on top of an operating system and not the physical hardware directly. A big advantage of Type 2 hypervisors is that management console software is not required. Examples of type 2 hypervisor are VMware Workstation (which can run on Windows, Mac and Linux) or Microsoft Virtual PC (only runs on Windows).

Type1_Type2_Hypervisors.jpg

Question 7

Question 8

Explanation

Because some PE routers might receive routing information they do not require, a basic requirement is to be able to filter the MP-iBGP updates at the ingress to the PE router so that the router does not need to keep this information in memory.

The Automatic Route Filtering feature fulfills this filtering requirement. This feature is available by default on all PE routers, and no additional configuration is necessary to enable it. Its function is to filter automatically VPN-IPv4 routes that contain a route target extended community that does not match any of the PE’s configured VRFs. This effectively discards any unwanted VPN-IPv4 routes silently, thus reducing the amount of information that the PE has to store in memory -> Answer D is correct.

Reference: MPLS and VPN Architectures Book, Volume 1

The reason PE1 dropped the route is that there is no “route-target import 999:999” command on PE1 (which is why we see “DENIED due to:extended community not supported” in the debug), so we need to add this command to accept the route -> Answer E is correct.
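A minimal sketch of the fix described above; the VRF name is an assumption, while the route-target value 999:999 is taken from the explanation:

PE1(config)#ip vrf CUSTOMER_A
PE1(config-vrf)#route-target import 999:999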

Question 9

Explanation

Broadcast radiation refers to the processing that is required every time a broadcast is received on a host. Although IP is very efficient from a broadcast perspective when compared to traditional protocols such as Novell Internetwork Packet Exchange (IPX) Service Advertising Protocol (SAP), virtual machines and the vswitch implementation require special consideration. Because the vswitch is software based, as broadcasts are received the vswitch must interrupt the server CPU to change contexts to enable the vswitch to process the packet. After the vswitch has determined that the packet is a broadcast, it copies the packet to all the VMNICs, which then pass the broadcast packet up the stack to process. This processing overhead can have a tangible effect on overall server performance if a single domain is hosting a large number of virtual machines.

Note: This overhead effect is not a limitation of the vswitch implementation. It is a result of the software-based nature of the vswitch embedded in the ESX hypervisor.

Reference: https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/net_implementation_white_paper0900aecd806a9c05.html

—————————————————————-

Note about the structure of virtualization in a hypervisor:

Hypervisors provide virtual switch (vSwitch) that Virtual Machines (VMs) use to communicate with other VMs on the same host. The vSwitch may also be connected to the host’s physical NIC to allow VMs to get layer 2 access to the outside world.

Each VM is provided with a virtual NIC (vNIC) that is connected to the virtual switch. Multiple vNICs can connect to a single vSwitch, allowing VMs on a physical host to communicate with one another at layer 2 without having to go out to a physical switch.

 

Virtual_machine_structure.jpg

Although the vSwitch does not run Spanning Tree Protocol, it implements other loop prevention mechanisms. For example, a frame that enters from one VMNIC will not go out of the physical host through a different VMNIC card.

Question 10

Explanation

A bare-metal hypervisor (Type 1) is a layer of software we install directly on top of a physical server and its underlying hardware. There is no software or any operating system in between, hence the name bare-metal hypervisor. A Type 1 hypervisor is proven in providing excellent performance and stability since it does not run inside Windows or any other operating system. These are the most common type 1 hypervisors:

+ VMware vSphere with ESX/ESXi
+ KVM (Kernel-Based Virtual Machine)
+ Microsoft Hyper-V
+ Oracle VM
+ Citrix Hypervisor (formerly known as Xen Server)

Virtualization Questions 2

February 3rd, 2021 digitaltut 14 comments

Question 1

Explanation

In contrast to type 1 hypervisor, a type 2 hypervisor (or hosted hypervisor) runs on top of an operating system and not the physical hardware directly. A big advantage of Type 2 hypervisors is that management console software is not required. Examples of type 2 hypervisor are VMware Workstation (which can run on Windows, Mac and Linux) or Microsoft Virtual PC (only runs on Windows).

Type1_Type2_Hypervisors.jpg

Question 2

Explanation

Hypervisors provide virtual switch (vSwitch) that Virtual Machines (VMs) use to communicate with other VMs on the same host. The vSwitch may also be connected to the host’s physical NIC to allow VMs to get layer 2 access to the outside world.

Each VM is provided with a virtual NIC (vNIC) that is connected to the virtual switch. Multiple vNICs can connect to a single vSwitch, allowing VMs on a physical host to communicate with one another at layer 2 without having to go out to a physical switch.

 

Virtual_machine_structure.jpg

-> Therefore answer B is correct.

Answer E is not correct as besides the virtual switch running as a separate virtual machine, we also need to set up a trunk link so that VMs can communicate.

Answer C is not correct as it is too complex when we want to connect two VMs on the same hypervisor. VXLAN should only be used between two VMs on different physical servers.

Answer D is not correct as it uses Layer 3 network (routed link).

Therefore only answer A is left. We can connect two VMs via a trunk link with an external Layer2 switch.

Question 3

Explanation

There are two types of hypervisors: type 1 and type 2 hypervisor.

In type 1 hypervisor (or native hypervisor), the hypervisor is installed directly on the physical server. Then instances of an operating system (OS) are installed on the hypervisor. Type 1 hypervisor has direct access to the hardware resources. Therefore they are more efficient than hosted architectures. Some examples of type 1 hypervisor are VMware vSphere/ESXi, Oracle VM Server, KVM and Microsoft Hyper-V.

In contrast to type 1 hypervisor, a type 2 hypervisor (or hosted hypervisor) runs on top of an operating system and not the physical hardware directly. A big advantage of Type 2 hypervisors is that management console software is not required. Examples of type 2 hypervisor are VMware Workstation (which can run on Windows, Mac and Linux) or Microsoft Virtual PC (only runs on Windows).

Type1_Type2_Hypervisors.jpg

Question 4

Explanation

Each VM is provided with a virtual NIC (vNIC) that is connected to the virtual switch. Multiple vNICs can connect to a single vSwitch, allowing VMs on a physical host to communicate with one another at layer 2 without having to go out to a physical switch.

 

Virtual_machine_structure.jpg

Question 5

Explanation

There are two types of hypervisors: type 1 and type 2 hypervisor.

In type 1 hypervisor (or native hypervisor), the hypervisor is installed directly on the physical server. Then instances of an operating system (OS) are installed on the hypervisor. Type 1 hypervisor has direct access to the hardware resources. Therefore they are more efficient than hosted architectures. Some examples of type 1 hypervisor are VMware vSphere/ESXi, Oracle VM Server, KVM and Microsoft Hyper-V.

In contrast to type 1 hypervisor, a type 2 hypervisor (or hosted hypervisor) runs on top of an operating system and not the physical hardware directly. A big advantage of Type 2 hypervisors is that management console software is not required. Examples of type 2 hypervisor are VMware Workstation (which can run on Windows, Mac and Linux) or Microsoft Virtual PC (only runs on Windows).

Type 1 is more efficient and better performing; it is also more secure than Type 2 because the flaws and vulnerabilities that are endemic to operating systems are often absent from Type 1 (bare-metal) hypervisors. Type 1 has better performance, scalability and stability, but is supported on a more limited range of hardware.

Type1_Type2_Hypervisors.jpg

Question 6

Explanation

Static routes directly between VRFs are not supported so we cannot configure a direct static route between two VRFs.

The command “ip route vrf Customer1 172.16.1.0 255.255.255.0 172.16.1.1 global” means that in VRF Customer1, in order to reach destination 172.16.1.0/24, we use the next hop IP address 172.16.1.1 in the global routing table. And the command “ip route 192.168.1.0 255.255.255.0 Vlan10” tells the router: “to reach 192.168.1.0/24, send the traffic out of interface Vlan10”.
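Putting the two commands quoted above into configuration context (all values are the ones quoted in the explanation):

Router(config)#ip route vrf Customer1 172.16.1.0 255.255.255.0 172.16.1.1 global
Router(config)#ip route 192.168.1.0 255.255.255.0 Vlan10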

Question 7

Explanation

This question is a bit unclear, but it mentions “dedicated operating systems that provide the virtualization platform”, which means the hypervisor. “Type 1 hypervisor” is therefore the best choice here, because a type 1 hypervisor does not require an underlying operating system.

Note: Hosted virtualization is type 2 hypervisor. In contrast to type 1 hypervisor, a type 2 hypervisor (or hosted hypervisor) runs on top of an operating system and not the physical hardware directly. A big advantage of Type 2 hypervisors is that management console software is not required. Examples of type 2 hypervisor are VMware Workstation (which can run on Windows, Mac and Linux) or Microsoft Virtual PC (only runs on Windows).

Type1_Type2_Hypervisors.jpg

LISP & VXLAN Questions

February 2nd, 2021 digitaltut 22 comments

Note: If you are not sure about LISP or VXLAN, please read our LISP Tutorial and VXLAN tutorial.

Question 1

Explanation

An Egress Tunnel Router (ETR) connects a site to the LISP-capable part of a core network (such as the Internet), publishes EID-to-RLOC mappings for the site, responds to Map-Request messages, and decapsulates and delivers LISP-encapsulated user data to end systems at the site.

Reference: https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/iproute_lisp/configuration/xe-3s/irl-xe-3s-book/irl-overview.html

Question 2

Explanation

Proxy ingress tunnel router (PITR): A PITR is an infrastructure LISP network entity that receives packets from non-LISP sites and encapsulates the packets to LISP sites or natively forwards them to non-LISP sites.

Reference: https://www.ciscopress.com/articles/article.asp?p=2992605

Note: The proxy egress tunnel router (PETR) allows the communication from the LISP sites to the non-LISP sites. The PETR receives LISP encapsulated traffic from ITR.

Question 3

Explanation

Locator ID Separation Protocol (LISP) is a network architecture and protocol that implements the use of two namespaces instead of a single IP address:
+ Endpoint identifiers (EIDs)—assigned to end hosts.
+ Routing locators (RLOCs)—assigned to devices (primarily routers) that make up the global routing system.

Reference: https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/iproute_lisp/configuration/xe-3s/irl-xe-3s-book/irl-overview.html

Question 4

Explanation

Locator ID Separation Protocol (LISP) is a network architecture and protocol that implements the use of two namespaces instead of a single IP address:
+ Endpoint identifiers (EIDs) – assigned to end hosts.
+ Routing locators (RLOCs) – assigned to devices (primarily routers) that make up the global routing system.

Reference: https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/iproute_lisp/configuration/xe-3s/irl-xe-3s-book/irl-overview.html

Question 5

Explanation

802.1Q VLAN identifier space is only 12 bits. The VXLAN identifier space is 24 bits. This doubling in size allows the VXLAN ID space to support 16 million Layer 2 segments -> Answer B is not correct.

VXLAN is a MAC-in-UDP encapsulation method that is used in order to extend a Layer 2 or Layer 3 overlay network over a Layer 3 infrastructure that already exists.

Reference: https://www.cisco.com/c/en/us/support/docs/lan-switching/vlan/212682-virtual-extensible-lan-and-ethernet-virt.html

Question 6

Explanation

Locator ID Separation Protocol (LISP) is a network architecture and protocol that implements the use of two namespaces instead of a single IP address:
+ Endpoint identifiers (EIDs)—assigned to end hosts.
+ Routing locators (RLOCs)—assigned to devices (primarily routers) that make up the global routing system.

Reference: https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/iproute_lisp/configuration/xe-3s/irl-xe-3s-book/irl-overview.html

Question 7

Explanation

VTEPs connect the Overlay and Underlay networks and are responsible for encapsulating frames into VXLAN packets to send across the IP network (Underlay), then decapsulating them when the packets leave the VXLAN tunnel.

VXLAN_VTEP.jpg

Question 8

Question 9

Explanation

In this question we suppose that we only need to send packets from the LISP site to the non-LISP site successfully. We don’t care about the way back (if we also cared about the way back then PETR, PITR, MS & MR would all be needed).

Proxy Egress Tunnel Router (PETR): A LISP device that de-encapsulates packets from LISP sites to deliver them to non-LISP sites.

LISP_PxTR.jpg

When the xTR in LISP Site 1 wants to send traffic to the non-LISP site, the ITR (not the PETR) needs a Map Resolver (MR) to send the Map-Request to. When the ITR (the xTR in LISP Site 1 in the figure above) receives a negative Map-Reply packet from the MR, it caches that prefix and maps it to the PETR.

Good reference: https://netmindblog.com/2019/12/04/lisp-locator-id-separation-protocol-part-ii-pxtr/

Question 10

Explanation

Locator ID Separation Protocol (LISP) solves this issue by separating the location and identity of a device through the Routing locator (RLOC) and Endpoint identifier (EID):

+ Endpoint identifiers (EIDs) – assigned to end hosts.
+ Routing locators (RLOCs) – assigned to devices (primarily routers) that make up the global routing system.

Question 11

Explanation

VXLAN uses an 8-byte VXLAN header that consists of a 24-bit VNID and a few reserved bits. The VXLAN header together with the original Ethernet frame goes in the UDP payload. The 24-bit VNID is used to identify Layer 2 segments and to maintain Layer 2 isolation between the segments.

Reference: https://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/7-x/vxlan/configuration/guide/b_Cisco_Nexus_9000_Series_NX-OS_VXLAN_Configuration_Guide_7x/b_Cisco_Nexus_9000_Series_NX-OS_VXLAN_Configuration_Guide_7x_chapter_010.html

Let’s see the structure of a VXLAN packet to understand how (note: VNI = VNID)

VXLAN_Packet_Structure.jpg

The key fields for the VXLAN packet in each of the protocol headers are:

+ Outer MAC header (14 bytes with 4 bytes optional) – Contains the MAC address of the source VTEP and the MAC address of the next-hop router. Each router along the packet’s path rewrites this header so that the source address is the router’s MAC address and the destination address is the next-hop router’s MAC address.

+ Outer IP header (20 bytes)- Contains the IP addresses of the source and destination VTEPs.
+ (Outer) UDP header (8 bytes)- Contains source and destination UDP ports:
– Source UDP port: The VXLAN protocol repurposes this standard field in a UDP packet header. Instead of using this field for the source UDP port, the protocol uses it as a numeric identifier for the particular flow between VTEPs. The VXLAN standard does not define how this number is derived, but the source VTEP usually calculates it from a hash of some combination of fields from the inner Layer 2 packet and the Layer 3 or Layer 4 headers of the original frame.
– Destination UDP port: The VXLAN UDP port. The Internet Assigned Numbers Authority (IANA) allocates port 4789 to VXLAN.

+ VXLAN header (8 bytes)- Contains the 24-bit VNI (or VNID)
+ Original Ethernet/L2 Frame – Contains the original Layer 2 Ethernet frame.
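Adding up the headers listed above gives the total VXLAN encapsulation overhead: 14 (outer MAC) + 20 (outer IP) + 8 (UDP) + 8 (VXLAN) = 50 bytes added to every original Ethernet frame (54 bytes if the optional 4-byte 802.1Q tag is present), which is why the underlay MTU is normally increased to accommodate it.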

EIGRP & OSPF Questions

February 1st, 2021 digitaltut 25 comments

Quick OSPF Overview

OSPF router ID selection:

OSPF uses the following criteria to select the router ID:
1. Manual configuration of the router ID (via the “router-id x.x.x.x” command under OSPF router configuration mode).
2. Highest IP address on a loopback interface.
3. Highest IP address on a non-loopback and active (no shutdown) interface.

OSPF forms neighbor relationships with other OSPF routers on the same segment by exchanging hello packets. The hello packets contain various parameters, and some of them must match between neighboring routers. These include:

+ Hello and Dead intervals
+ Area ID
+ Authentication type and password
+ Stub Area flag
+ Subnet ID and Subnet mask

When OSPF neighbor relationship is formed, a router goes through several state changes before it becomes fully adjacent with its neighbor. The states are Down -> Attempt (optional) -> Init -> 2-Way -> Exstart -> Exchange -> Loading -> Full. Short descriptions about these states are listed below:

Down: no information (hellos) has been received from this neighbor

Attempt: only valid for manually configured neighbors in an NBMA environment. In Attempt state, the router sends unicast hello packets every poll interval to the neighbor, from which hellos have not been received within the dead interval

Init: specifies that the router has received a hello packet from its neighbor, but the receiving router’s ID was not included in the hello packet

2-Way: indicates bi-directional communication has been established between two routers

Exstart: Once the DR and BDR are elected, the actual process of exchanging link state information can start between the routers and their DR and BDR

Exchange: OSPF routers exchange and compare database descriptor (DBD) packets

Loading: In this state, the actual exchange of link state information occurs. Outdated or missing entries are also requested to be resent

Full: routers are fully adjacent with each other

When OSPF is run on a network, two important events happen before routing information is exchanged:
+ Neighbors are discovered using multicast hello packets.
+ DR and BDR are elected for every multi-access network to optimize the adjacency building process. All the routers in that segment should be able to communicate directly with the DR and BDR for proper adjacency (in the case of a point-to-point network, DR and BDR are not necessary since there are only two routers in the segment, and hence the election does not take place).
For a successful neighbor discovery on a segment, the network must allow broadcasts or multicast packets to be sent.

In an NBMA network topology, which is inherently nonbroadcast, neighbors are not discovered automatically. OSPF tries to elect a DR and a BDR due to the multi-access nature of the network, but the election fails since neighbors are not discovered. Neighbors must be configured manually to overcome these problems

Each OSPF area only allows some specific LSAs to pass through. Below is a summarization of which LSAs are allowed in each OSPF area:

+ Normal (Backbone & Non-Backbone): no Type 7 LSAs allowed
+ Stub: no Type 5 AS-external LSAs allowed
+ Totally Stub: no Type 3, 4 or 5 LSAs allowed except the default summary route
+ NSSA: no Type 5 AS-external LSAs allowed, but Type 7 LSAs that convert to Type 5 at the NSSA ABR can traverse
+ NSSA Totally Stub: no Type 3, 4 or 5 LSAs except the default summary route, but Type 7 LSAs that convert to Type 5 at the NSSA ABR are allowed

Or this table will help you grasp it:

OSPF_LSAs_Area_types.jpg

OSPF Summarization
OSPF offers two methods of route summarization:
1) Summarization of internal routes performed on the ABRs
2) Summarization of external routes performed on the ASBRs

1) To summarize routes at the area boundary (ABRs), use the command:
area area-id range ip-address mask [advertise | not-advertise] [cost cost]

An internal summary route is generated if at least one subnet within the area falls in the summary address range and the summarized route metric is equal to the lowest cost of all the subnets within the summary address range. Interarea summarization can only be done for the intra-area routes of connected areas, and the ABR creates a route to Null0 to avoid loops in the absence of more specific routes.

2) To summarize external routes on the domain boundary (ASBRs), use the command:
summary-address {{ip-address mask} | {prefix mask}} [not-advertise] [tag tag]
The ASBR will summarize external routes before injecting them into the OSPF domain as type 5 external LSAs.

Note: An exception to using the “summary-address” command is at the boundary of an NSSA area.

In both methods of route summarization described above, a summarized route is only generated if at least one subnet in the routing table falls in the summary address range.
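A quick sketch of both commands above; the process ID, area number and prefixes are only for illustration:

ABR(config)#router ospf 1
ABR(config-router)#area 1 range 172.16.0.0 255.255.252.0

ASBR(config)#router ospf 1
ASBR(config-router)#summary-address 192.168.0.0 255.255.0.0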

OSPF point-to-point network type

Setting OSPF to point-to-point mode results in advertised routes containing the actual subnet mask instead of the default behavior of advertising /32 for a loopback interface.
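For example, on a loopback interface (the interface number is only for illustration):

R1(config)#interface Loopback0
R1(config-if)#ip ospf network point-to-point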

Summarization in EIGRP and OSPF

Unlike OSPF where we can summarize only on ABR or ASBR, in EIGRP we can summarize anywhere.

Manual summarization can be applied anywhere in the EIGRP domain, on every router and on every interface, via the ip summary-address eigrp as-number address mask [administrative-distance] command (for example: ip summary-address eigrp 1 192.168.16.0 255.255.248.0). The summary route will exist in the routing table as long as at least one more specific route exists; if the last specific route disappears, the summary route is removed as well. The metric used by an EIGRP manual summary route is the minimum metric of the specific routes. The example below shows how to configure EIGRP manual summarization:

Manual_summarization_EIGRP.jpg

R1(config)#interface fa1/0
R1(config-if)#ip summary-address eigrp 10 172.16.0.0 255.255.254.0

If you are not sure about OSPF LSA Types, please read our OSPF LSA Types Tutorial.

OSPF area filtering

The command “area area-number filter-list prefix prefix-list-name in”: prevents the specified prefixes from entering this area (the in keyword here means “into”).
The command “area area-number filter-list prefix prefix-list-name out”: prevents the other areas that the ABR is connected to from receiving those prefixes.
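A minimal sketch of inbound area filtering; the prefix-list name, the filtered prefix and the area number are assumptions for illustration:

R1(config)#ip prefix-list FILTER-10 deny 10.10.0.0/16
R1(config)#ip prefix-list FILTER-10 permit 0.0.0.0/0 le 32
R1(config)#router ospf 1
R1(config-router)#area 1 filter-list prefix FILTER-10 in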

Question 1

Explanation

The following different OSPF types are compatible with each other:

+ Broadcast and Non-Broadcast (adjust hello/dead timers)
+ Point-to-Point and Point-to-Multipoint (adjust hello/dead timers)

Broadcast and Non-Broadcast networks elect DR/BDR so they are compatible. Point-to-point/multipoint do not elect DR/BDR so they are compatible.

Question 2

Explanation

On Ethernet interfaces the OSPF hello interval is 10 seconds by default, so in this case there would be a Hello interval mismatch -> the OSPF adjacency would not be established.

Question 3

Explanation

This combination of commands is known as “conditional debug” and will filter the debug output based on your conditions. Each added condition behaves like an ‘AND’ operator in Boolean logic. Some examples of the “debug ip ospf hello” output are shown below:

*Oct 12 14:03:32.595: OSPF: Send hello to 224.0.0.5 area 0 on FastEthernet1/0 from 192.168.12.2
*Oct 12 14:03:33.227: OSPF: Rcv hello from 1.1.1.1 area 0 on FastEthernet1/0 from 192.168.12.1
*Oct 12 14:03:33.227: OSPF: Mismatched hello parameters from 192.168.12.1
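The combination referred to above is typically something like the following (the interface is an assumption):

R2#debug condition interface FastEthernet1/0
R2#debug ip ospf hello

With the condition in place, only OSPF hello debugging for FastEthernet1/0 is printed.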

Question 4

Explanation

If we configure an EIGRP stub router, it only advertises connected and summary routes. If we also want an exception to this rule, we can configure a leak-map. For example:

R4(config)#router eigrp 1
R4(config-router)#eigrp stub
R4(config)#ip access-list standard R4_L0opback0
R4(config-std-nacl)#permit host 4.4.4.4
R4(config)#route-map R4_L0opback0_LEAKMAP
R4(config-route-map)#match ip address R4_L0opback0
R4(config)#router eigrp 1
R4(config-router)#eigrp stub leak-map R4_L0opback0_LEAKMAP

As we can see, the leak-map feature goes along with the ‘eigrp stub’ command.

Question 5

Explanation

EIGRP provides a mechanism to load balance over unequal cost paths (also called unequal-cost load balancing) through the “variance” command. In other words, EIGRP will install all paths with metric < variance * best_metric into the local routing table, provided that they meet the feasibility condition to prevent routing loops. A path that meets this requirement is called a feasible successor. If a path is not a feasible successor, it is not used in load balancing.

Note: The feasibility condition states that, the Advertised Distance (AD) of a route must be lower than the feasible distance of the current successor route.
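A minimal sketch of enabling unequal-cost load balancing; the AS number and the variance multiplier are assumptions:

R1(config)#router eigrp 1
R1(config-router)#variance 2

With variance 2, any feasible successor whose metric is less than twice the best metric is installed in the routing table.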

Question 6

Explanation

OTP leverages existing LISP encapsulation which:
+ Allows dynamic multi-point tunneling (-> Answer A is correct)
+ Provides instance ID field to optionally support virtualization across WAN (see EVN WAN Extension section)
OTP does NOT use LISP control plane (map server/resolver, etc.) (-> Therefore answer B is not correct) instead it uses EIGRP to exchange routes and provide the next-hop (-> answer C and answer D are not correct), which LISP encapsulation uses to reach remote prefixes.

Reference: https://www.cisco.com/c/en/us/products/collateral/ios-nx-os-software/ip-routing/whitepaper_C11-730404.html

Question 7

Explanation

When OSPF adjacency is formed, a router goes through several state changes before it becomes fully adjacent with its neighbor. The states are Down -> Attempt (optional) -> Init -> 2-Way -> Exstart -> Exchange -> Loading -> Full. Short descriptions about these states are listed below:

Down: no information (hellos) has been received from this neighbor.

Attempt: only valid for manually configured neighbors in an NBMA environment. In Attempt state, the router sends unicast hello packets every poll interval to the neighbor, from which hellos have not been received within the dead interval.

Init: specifies that the router has received a hello packet from its neighbor, but the receiving router’s ID was not included in the hello packet
2-Way: indicates bi-directional communication has been established between two routers.

Exstart: Once the DR and BDR are elected, the actual process of exchanging link state information can start between the routers and their DR and BDR.

Exchange: OSPF routers exchange database descriptor (DBD) packets

Loading: In this state, the actual exchange of link state information occurs

Full: routers are fully adjacent with each other

(Reference: http://www.cisco.com/en/US/tech/tk365/technologies_tech_note09186a0080093f0e.shtml)

Neighbors Stuck in Exstart/Exchange State
The problem occurs most frequently when attempting to run OSPF between a Cisco router and another vendor’s router. The problem occurs when the maximum transmission unit (MTU) settings for neighboring router interfaces don’t match. If the router with the higher MTU sends a packet larger than the MTU set on the neighboring router, the neighboring router ignores the packet.

Question 8

Explanation

EIGRP supports unequal-cost load balancing via the “variance” command, while OSPF only supports equal-cost load balancing.

Question 9

Explanation

The Broadcast network type is the default for an OSPF enabled ethernet interface (while Point-to-Point is the default OSPF network type for Serial interface with HDLC and PPP encapsulation).

Reference: https://www.oreilly.com/library/view/cisco-ios-cookbook/0596527225/ch08s15.html

Question 10

Explanation

Summary ASBR LSA (Type 4) – generated by the ABR to describe an ASBR to routers in other areas, so that routers in other areas know how to reach external routes through that ASBR. For example, suppose R8 is redistributing external routes (EIGRP, RIP…) to R3. This makes R3 an Autonomous System Boundary Router (ASBR). When R2 (which is an ABR) receives this LSA Type 1 update, R2 will create an LSA Type 4 and flood it into Area 0 to inform the routers there how to reach R3. When R5 receives this LSA it also floods it into Area 2.

In the above example, the only ASBR belongs to area 1 so the two ABRs (R2 & R5) send LSA Type 4 to area 0 & area 2 (not vice versa). This is an indication of the existence of the ASBR in area 1.

OSPF_LSAs_Types_4.jpg

Note:
+ Type 4 LSAs contain the router ID of the ASBR.
+ There are no LSA Type 4 injected into Area 1 because every router inside area 1 knows how to reach R3. R3 only uses LSA Type 1 to inform R2 about R8 and inform R2 that R3 is an ASBR.

EIGRP & OSPF Questions 2

February 1st, 2021 digitaltut 10 comments

Question 1

Explanation

Broadcast and Non-Broadcast networks elect DR/BDR while Point-to-point/multipoint do not elect DR/BDR. Therefore we have to set the two Gi0/0 interfaces to point-to-point or point-to-multipoint network to ensure that a DR/BDR election does not occur.

Question 2

Question 3

Question 4

Explanation

The “network 20.0.0.0 0.0.0.255 area 0” command on R2 did not cover the IP address of Fa1/1 interface of R2 so OSPF did not run on this interface. Therefore we have to use the command “network 20.1.1.2 0.0.255.255 area 0” to turn on OSPF on this interface.

Note: The command “network 20.1.1.2 0.0.255.255 area 0” can be used too so this answer is also correct but answer C is the best answer here.

The “network 0.0.0.0 255.255.255.255 area 0” command on R1 will run OSPF on all active interfaces of R1.

Question 5

Question 6

Explanation

We must reconfigure the IP address after assigning an interface to (or removing it from) a VRF. Otherwise that interface does not have an IP address.

Question 7

Explanation

By default, EIGRP metric is calculated:

metric = bandwidth + delay

While OSPF is calculated by:

OSPF metric = Reference bandwidth / Interface bandwidth in bps

(Or: Cisco uses a 100 Mbps (10^8 bps) reference bandwidth. With this reference bandwidth, our equation would be:

Cost = 10^8 / interface bandwidth in bps)
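Because of this default, every link of 100 Mbps or faster gets a cost of 1, so the reference bandwidth is often raised; a sketch with an assumed value of 10000 (the unit is Mbps, i.e. a 10 Gbps reference):

R1(config)#router ospf 1
R1(config-router)#auto-cost reference-bandwidth 10000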

BGP Questions

January 31st, 2021 digitaltut 28 comments

If you are not sure about BGP, please read our BGP tutorial.

BGP Quick Summary:

Protocol type: Path Vector
Type: EGP (External Gateway Protocol)
Packet Types: Open, Update, KeepAlive, Notification
Administrative Distance: eBGP: 20; iBGP: 200
Transport: TCP port 179
Neighbor States: Idle -> Connect -> Active -> OpenSent -> OpenConfirm -> Established
1 – Idle: the initial state of a BGP connection. In this state, the BGP speaker is waiting for a BGP start event, generally either the establishment of a TCP connection or the re-establishment of a previous connection. Once the connection is established, BGP moves to the next state.
2 – Connect: In this state, BGP is waiting for the TCP connection to be formed. If the TCP connection completes, BGP will move to the OpenSent stage; if the connection cannot complete, BGP goes to Active
3 – Active: In the Active state, the BGP speaker is attempting to initiate a TCP session with the BGP speaker it wants to peer with. If this can be done, the BGP state goes to OpenSent state.
4 – OpenSent: the BGP speaker is waiting to receive an OPEN message from the remote BGP speaker
5 – OpenConfirm: Once the BGP speaker receives the OPEN message and no error is detected, the BGP speaker sends a KEEPALIVE message to the remote BGP speaker
6 – Established: All of the neighbor negotiations are complete. You will see a number, which tells us the number of prefixes the router has received from a neighbor or peer group.
Path Selection Attributes: (highest) Weight > (highest) Local Preference > Originate > (shortest) AS Path > Origin > (lowest) MED > External > IGP Cost > eBGP Peering > (lowest) Router ID
(Originate: prefer routes that it installed into BGP by itself over a route that another router installed in BGP)
Authentication: MD5
BGP Origin codes: i – IGP (injected by “network” statement), e – EGP, ? – Incomplete
AS number range: Private AS range: 64512 – 65535, Globally (unique) AS: 1 – 64511

More information about popular Path Selection Attributes
Weight Attribute:
+ Cisco proprietary
+ First attribute used in Path selection
+ Only used locally in a router (not be exchanged between BGP neighbors)
+ Higher weight is preferred
+ Default value is 0
Weight_BGP_Attribute_Influence.jpg

Local Preference (LocalPrf) Attribute:
+ Sent to all iBGP neighbors (not exchanged between eBGP neighbors)
+ Used to choose the path to external BGP neighbors
+ Higher value is preferred
+ Default value is 100

LocalPreference_BGP_Influence.jpg

Note: Although the Local Preference attribute is only sent to iBGP neighbors and is not exchanged between eBGP neighbors, we can apply it on an eBGP neighbor (in the inbound direction) to influence the path selection within our local AS.

For example in the topology above, we can use Local Preference on R2 (inbound direction) for the R2-R4 eBGP connection to affect routing decision on R1 toward R4.

Unlike Weight attribute, Local Preference attribute is advertised to all iBGP neighbors.

MED Attribute:
+ Optional nontransitive attribute (nontransitive means that we can only advertise MED to routers that are one AS away)
+ Sent to external BGP neighbors (only as far as the neighboring AS, since it is nontransitive)
+ Lower value is preferred (it can be considered the external metric of a route)
+ Default value is 0

MED_BGP_Attribute_Influence.jpg

Question 1

Explanation

The BGP session may report in the following states

1 – Idle: the initial state of a BGP connection. In this state, the BGP speaker is waiting for a BGP start event, generally either the establishment of a TCP connection or the re-establishment of a previous connection. Once the connection is established, BGP moves to the next state.
2 – Connect: In this state, BGP is waiting for the TCP connection to be formed. If the TCP connection completes, BGP will move to the OpenSent stage; if the connection cannot complete, BGP goes to Active
3 – Active: In the Active state, the BGP speaker is attempting to initiate a TCP session with the BGP speaker it wants to peer with. If this can be done, the BGP state goes to OpenSent state.
4 – OpenSent: the BGP speaker is waiting to receive an OPEN message from the remote BGP speaker
5 – OpenConfirm: Once the BGP speaker receives the OPEN message and no error is detected, the BGP speaker sends a KEEPALIVE message to the remote BGP speaker
6 – Established: All of the neighbor negotiations are complete. You will see a number, which tells us the number of prefixes the router has received from a neighbor or peer group.

Question 2

Explanation

With BGP, we must advertise the correct network and subnet mask in the “network” command (in this case network 10.1.1.0/24 on R1 and network 10.2.2.0/24 on R2). BGP is very strict with routing advertisements: it only advertises a network which exists exactly in the routing table. In this case, if you put the command “network x.x.0.0 mask 255.255.0.0” or “network x.0.0.0 mask 255.0.0.0” or “network x.x.x.x mask 255.255.255.255” then BGP will not advertise anything.

It is easy to establish an eBGP neighborship via the direct link. But let’s see what is required when we want to establish the eBGP neighborship via their loopback interfaces. We will need two things:
+ The command “neighbor 10.2.2.2 ebgp-multihop 2” on R1 and “neighbor 10.1.1.1 ebgp-multihop 2” on R2. This command increases the TTL value to 2 so that BGP updates can reach the BGP neighbor, which is two hops away.
+ A route to the neighbor’s loopback interface. For example: “ip route 10.2.2.0 255.255.255.0 192.168.10.2” on R1 and “ip route 10.1.1.0 255.255.255.0 192.168.10.1” on R2
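Putting the pieces above together, the R1 side might look like the sketch below. The static route, networks and neighbor address are taken from the explanation; the AS numbers and the “update-source Loopback0” command (which makes R1 source the BGP session from its own loopback) are assumptions:

R1(config)#ip route 10.2.2.0 255.255.255.0 192.168.10.2
R1(config)#router bgp 65001
R1(config-router)#network 10.1.1.0 mask 255.255.255.0
R1(config-router)#neighbor 10.2.2.2 remote-as 65002
R1(config-router)#neighbor 10.2.2.2 ebgp-multihop 2
R1(config-router)#neighbor 10.2.2.2 update-source Loopback0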

Question 3

Explanation

The ‘>’ shown in the output above indicates that the path with a next hop of 192.168.101.2 is the current best path.

Path Selection Attributes: Weight > Local Preference > Originate > AS Path > Origin > MED > External > IGP Cost > eBGP Peering > Router ID

BGP prefers the path with the highest weight, but the weights here are all 0 (which indicates these routes were not originated by the local router), so we need to check the Local Preference. A path without a LOCAL_PREF value (LocPrf column) has the default value of 100. Therefore we can find the two next-best candidate paths, with next hops of 192.168.101.18 and 192.168.101.10.

We have to move to the next path selection attribute: Originate. BGP prefers the path that the local router originated (which would be indicated by a next hop of 0.0.0.0), but neither of the two candidate paths is self-originated.

The AS Path of the next hop 192.168.101.18 (one AS away) is shorter than the AS Path of the next hop 192.168.101.10 (two ASes away) so the next hop 192.168.101.18 will be chosen as the next best path.

Question 4

Explanation

Path Selection Attributes: Weight > Local Preference > Originate > AS Path > Origin > MED > External > IGP Cost > eBGP Peering > Router ID

Question 5

Explanation

Local preference is an indication to the AS about which path has preference to exit the AS in order to reach a certain network. A path with a higher local preference is preferred. The default value for local preference is 100.

Unlike the weight attribute, which is only relevant to the local router, local preference is an attribute that routers exchange in the same AS. The local preference is set with the “bgp default local-preference value” command.

In this case, both R3 & R4 have exit links but R4 has higher local-preference so R4 will be chosen as the preferred exit point from AS 200.

(Reference: http://www.cisco.com/en/US/tech/tk365/technologies_tech_note09186a00800c95bb.shtml#localpref)
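On R4 this could be as simple as the sketch below; the local-preference value is an assumption (it just has to be higher than R3’s), while AS 200 comes from the explanation:

R4(config)#router bgp 200
R4(config-router)#bgp default local-preference 200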

Question 6

Question 7

Explanation

Unlike EIGRP or OSPF, when a “network” command is declared under BGP, BGP advertises that network without knowing whether it actually has that network or not. Therefore in this question R2 will advertise network 10.0.0.0/24, which collides with the same network on R1, so we must remove this statement on R2 as it does not have this network. Although answer A is not totally correct (it should be “no network 10.0.0.0 mask 255.255.255.0” instead), it is the best choice here.

Question 8

Explanation

R3 advertises BGP updates to R1 with AS 200 prepended multiple times in the AS path, so R1 believes the path to reach AS 200 via R3 is longer than the path via R2 and will choose R2 to forward traffic to AS 200.
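The usual way to make a path look longer like this is AS-path prepending on the advertising router, for example on R3; the neighbor address and route-map name are assumptions:

R3(config)#route-map PREPEND permit 10
R3(config-route-map)#set as-path prepend 200 200 200
R3(config-route-map)#exit
R3(config)#router bgp 200
R3(config-router)#neighbor 192.168.13.1 route-map PREPEND out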

Wireless Questions

January 30th, 2021 digitaltut 42 comments

Quick Wireless Summary
Cisco Access Points (APs) can operate in one of two modes: autonomous or lightweight
+ Autonomous: self-sufficient and standalone. Used for small wireless networks. Each autonomous AP must be configured with a management IP address so that it can be remotely accessed using Telnet, SSH, or a web interface. Each AP must be individually managed and maintained unless you use a management platform such as Cisco DNA Center.
+ Lightweight: The term ‘lightweight’ refers to the fact that these devices cannot work independently. A Cisco lightweight AP (LAP) has to join a Wireless LAN Controller (WLC) to function. LAP and WLC communicate with each other via a logical pair of CAPWAP tunnels.

Control and Provisioning for Wireless Access Point (CAPWAP) is an IETF standard for control messaging for setup, authentication and operations between APs and WLCs. CAPWAP is similar to LWAPP except the following differences:

+ CAPWAP uses Datagram Transport Layer Security (DTLS) for authentication and encryption to protect traffic between APs and controllers. LWAPP uses AES.
+ CAPWAP has a dynamic maximum transmission unit (MTU) discovery mechanism.
+ CAPWAP runs on UDP ports 5246 (control messages) and 5247 (data messages)

An LAP operates in one of six different modes:
+ Local mode (default mode): it offers one or more basic service sets (BSSs) on a specific channel. The AP maintains a CAPWAP tunnel towards its Wireless LAN Controller. When the AP is not transmitting wireless client frames, it measures the noise floor and interference, and scans for intrusion detection (IDS) events every 180 seconds.
+ FlexConnect, formerly known as Hybrid Remote Edge AP (H-REAP), mode: allows data traffic to be switched locally and not go back to the controller if the CAPWAP tunnel to the WLC is down. The FlexConnect AP can perform standalone client authentication and switch VLAN traffic locally even when it is disconnected from the WLC (Local Switched). A FlexConnect AP can also tunnel (via CAPWAP) both user wireless data and control traffic to a centralized WLC (Central Switched). The AP can locally switch traffic between a VLAN and SSID when the CAPWAP tunnel to the WLC is down.
+ Monitor mode: does not handle data traffic between clients and the infrastructure. It acts like a sensor for location-based services (LBS), rogue AP detection, and IDS
+ Rogue detector mode: monitor for rogue APs. It does not handle data at all.
+ Sniffer mode: runs as a sniffer; it captures and forwards all the packets on a particular channel to a remote machine where you can use a protocol analysis tool (Wireshark, AiroPeek, etc.) to review the packets and diagnose issues. Strictly used for troubleshooting purposes.
+ Bridge mode: bridges the WLAN and the wired infrastructure together.
+ Sensor mode: this is a special mode which is not listed in the books but you need to know it. In this mode, the device functions much like a WLAN client would, associating with APs and identifying client connectivity issues within the network in real time without requiring an IT technician to be on site.

Mobility Express is the ability to use an access point (AP) as a controller instead of a dedicated WLAN controller. This solution is only suitable for small to midsize deployments or multi-site branch locations where you might not want to invest in a dedicated WLC. A Mobility Express WLC can support up to 100 APs.

The 2.4 GHz band is subdivided into multiple channels each allotted 22 MHz bandwidth and separated from the next channel by 5 MHz.
-> A best practice for 802.11b/g/n WLANs requiring multiple APs is to use non-overlapping channels such as 1, 6, and 11.

wireless_2_4_GHz_band.png

Antenna

An antenna is a device to transmit and/or receive electromagnetic waves. Electromagnetic waves are often referred to as radio waves. Most antennas are resonant devices, which operate efficiently over a relatively narrow frequency band. An antenna must be tuned (matched) to the same frequency band as the radio system to which it is connected otherwise reception and/or transmission will be impaired.

Types of external antennas:
+ Omnidirectional: Provide 360-degree coverage. Ideal in houses and office areas. This type of antenna is used when coverage in all directions from the antenna is required.

ominidirectionl_antenna_direction.jpg

Omnidirectional Antenna Radiation Pattern

+ Directional: Focus the radio signal in a specific direction. Typically, these antennas have one main lobe and several minor lobes. Examples are the Yagi and parabolic dish

Yagi_radiation_pattern.jpg

Yagi Antenna Radiation Pattern

+ Multiple Input Multiple Output (MIMO) – Uses multiple antennas (up to eight) to increase bandwidth

A real example of how the lobes affect the signal strength:

antenna_radiation_real_example.jpg

Common Antennas

Commonly used antennas in a WLAN system are dipoles, omnidirectional antennas, patches and Yagis as shown below:

Antennas.jpg

Antenna Radiation Patterns

Dipole

Dipole_Antenna_Radiation_Pattern.jpg

A more comprehensive explanation of how we get the 2D patterns in (c) and (d) above is shown below:

Dipole_3d.jpg

Omni

Omni_Radiation_Pattern.jpg

Patch

Antenna_Radiation_Pattern_Patch.jpg

4 x 4 Patch

4x4_Patch_Radiation_Pattern.jpg

Yagi

Yagi_Antenna_Radiation_Pattern.jpg

Wireless Terminologies

Decibels

Decibels (dB) are the accepted method of describing a gain or loss relationship in a communication system. If a level is stated in decibels, then it is comparing a current signal level to a previous level or preset standard level. The beauty of dB is they may be added and subtracted. A decibel relationship (for power) is calculated using the following formula:

dB_formula.jpg

“A” might be the power applied to the connector on an antenna, the input terminal of an amplifier or one end of a transmission line. “B” might be the power arriving at the opposite end of the transmission line, the amplifier output or the peak power in the main lobe of radiated energy from an antenna. If “A” is larger than “B”, the result will be a positive number or gain. If “A” is smaller than “B”, the result will be a negative number or loss.

You will notice that the “B” is capitalized in dB. This is because it refers to the last name of Alexander Graham Bell.

Note:

+ dBi is a measure of the increase in signal (gain) by your antenna compared to the hypothetical isotropic antenna (which uniformly distributes energy in all directions) -> It is a ratio. The greater the dBi value, the higher the gain and the more acute the angle of coverage.

+ To divide one number by another, simply subtract their equivalent decibel values. For example, to find 100 divided by 10:

100 ÷ 10 -> 10 x log(100) - 10 x log(10) = 20 dB - 10 dB = 10 dB, which corresponds to a ratio of 10

+ dBm is a measure of signal power. It is the power ratio in decibels (dB) of the measured power referenced to one milliwatt (mW). The “m” stands for “milliwatt”.

Example:

At 1700 MHz, 1/4 of the power applied to one end of a coax cable arrives at the other end. What is the cable loss in dB?

Solution:

dB_example.jpg

=> Loss = 10 * (- 0.602) = – 6.02 dB

From the formula above we can also see that a 3 dB loss means the power is reduced by half: Loss = 10 * log10(1/2) = -3 dB. This is an important number to remember.
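
Note: the short Python sketch below is our own illustration (not something you need for the exam); it simply reproduces the cable-loss arithmetic above and the "3 dB = half the power" rule. The function name db_from_power_ratio is just an illustrative name.

import math

def db_from_power_ratio(p_out, p_in):
    # 10 * log10(output power / input power): a negative result is a loss, a positive one a gain
    return 10 * math.log10(p_out / p_in)

# Coax cable example: only 1/4 of the applied power arrives at the far end
print(round(db_from_power_ratio(0.25, 1.0), 2))   # -6.02 dB (about a 6 dB loss)

# The "3 dB rule": halving the power is roughly a 3 dB loss, doubling it a 3 dB gain
print(round(db_from_power_ratio(0.5, 1.0), 2))    # -3.01 dB
print(round(db_from_power_ratio(2.0, 1.0), 2))    # 3.01 dB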

Beamwidth

The angle, in degrees, between the two half-power points (-3 dB) of an antenna beam, where more than 90% of the energy is radiated.

beamwidth.jpg

A radiation pattern defines the variation of the power radiated by an antenna as a function of the direction away from the antenna.

Polarization describes the way the electric field of the radio wave is oriented.

Antenna gain is the ability of the antenna to radiate more or less in any direction compared to a theoretical antenna.

OFDM

OFDM was proposed in the late 1960s, and in 1970 a US patent was issued. OFDM encodes a single transmission into multiple subcarriers. All the slow subchannels are then multiplexed into one fast combined channel.

The trouble with traditional FDM is that the guard bands waste bandwidth and thus reduce capacity. OFDM selects channels that overlap but do not interfere with each other.

FDM_OFDM.gif

OFDM works because the frequencies of the subcarriers are selected so that, at each subcarrier frequency, all other subcarriers do not contribute to the overall waveform.

In this example, three subcarriers are overlapped but do not interfere with each other. Notice that only the peaks of each subcarrier carry data. At the peak of each of the subcarriers, the other two subcarriers have zero amplitude.

OFDM.jpg
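
Note: to make the "overlapping but non-interfering" idea concrete, here is a small Python/numpy sketch of our own (not exam material). It builds three subcarriers spaced by exactly 1/T and shows that, over one symbol period, each subcarrier only correlates with itself, which is what orthogonality means.

import numpy as np

T = 1.0                     # one OFDM symbol period
N = 1000                    # samples per symbol
t = np.arange(N) / N * T

# Three overlapping subcarriers spaced by exactly 1/T (3, 4 and 5 cycles per symbol)
subcarriers = [np.cos(2 * np.pi * (k / T) * t) for k in (3, 4, 5)]

# Correlation (inner product) over one symbol period
for i in range(3):
    row = [round(float(np.dot(subcarriers[i], subcarriers[j])) / N, 3) for j in range(3)]
    print(row)
# Prints roughly 0.5 on the diagonal (a subcarrier correlates with itself)
# and roughly 0 everywhere else, so the subcarriers can overlap without interfering.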

Basic Service Set (BSS)

A group of stations that share an access point are said to be part of one BSS.

Extended Service Set (ESS)

Some WLANs are large enough to require multiple access points. A group of access points connected to the same WLAN are known as an ESS. Within an ESS, a client can associate with any one of many access points that use the same Extended service set identifier (ESSID). That allows users to roam about an office without losing wireless connection.

Roaming

Roaming is the movement of a client from one AP to another while still transmitting. Roaming can be done across different mobility groups, but must remain inside the same mobility domain. The wireless client makes decisions on whether to change APs or remain connected to the current AP. There are 2 types of roaming:

A client roaming from AP1 to AP2. These two APs are in the same mobility group and mobility domain

Roaming_Same_Mobile_Group.jpg

Roaming in the same Mobility Group

A client roaming from AP1 to AP2. These two APs are in different mobility groups but in the same mobility domain

Roaming_Different_Mobile_Group.jpg

Roaming in different Mobility Groups (but still in the same Mobility Domain)

Layer 2 & Layer 3 Intercontroller Roam

When a mobile client roams from one AP to another, and if those APs are on different WLCs, then the client makes an intercontroller roam.

When a client starts an intercontroller roam, the two WLCs compare the VLAN IDs allocated to their WLAN interfaces.
– If the VLAN IDs are the same, the client performs a Layer 2 intercontroller roam.
– If the VLAN IDs are different, the WLCs will arrange a Layer 3 or local-to-foreign roam.

Both types of roaming allow the client to continue using its current IP address on the new AP and WLC.

In a Layer 3 roam, the original WLC is called the anchor controller, and the WLC where the roamed client is reassociated is called the foreign controller. The client remains anchored to the original WLC even if it roams to different controllers.

Wireless Parameters

Noise

There is radio frequency (RF) energy everywhere: from human activity, the earth's heat, space, and so on. The amount of unwanted RF is called noise.

Effective Isotropic Radiated Power (EIRP)

EIRP tells you the actual transmit power radiated by the antenna. EIRP is a very important parameter because it is regulated by governmental agencies in most countries. In those cases, a system cannot radiate signals higher than a maximum allowable EIRP. To find the EIRP of a system, simply add the transmitter power level to the antenna gain and subtract the cable loss.

EIRP_wireless.jpg

EIRP = Tx Power – Tx Cable + Tx Antenna

Suppose a transmitter is configured for a power level of 10 dBm. A cable with 5-dB loss connects the transmitter to an antenna with an 8-dBi gain. The resulting EIRP of the system is EIRP = 10 dBm – 5 dB + 8 dBi = 13 dBm.

You might notice that the EIRP is made up of decibel-milliwatt (dBm), dB relative to an isotropic antenna (dBi), and decibel (dB) values. Even though the units appear to be different, you can safely combine them because they are all in the dB “domain”.
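
Note: as a quick sanity check of the arithmetic, here is a tiny Python sketch of our own (eirp_dbm is just an illustrative name); it reproduces the example above.

def eirp_dbm(tx_power_dbm, cable_loss_db, antenna_gain_dbi):
    # EIRP = Tx Power - cable loss + antenna gain (all values already in the dB "domain")
    return tx_power_dbm - cable_loss_db + antenna_gain_dbi

print(eirp_dbm(10, 5, 8))   # 13 (dBm), matching the example above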

Receive Signal Strength Indicator (RSSI)

RSSI is a measurement of how well your device can hear a signal from an access point or router (useful signal). It’s a value that is useful for determining if you have enough signal to get a good wireless connection.

Signal-to-noise ratio (SNR or S/N)

SNR is the ratio of received signal power (at the wireless client) to the noise power, and it is typically expressed in decibels (dB). If your signal power and noise power are already in decibel form, then you can simply subtract the noise power from the signal power: SNR = S – N. This works because subtracting logarithms is equivalent to dividing the underlying numbers, and the difference equals the SNR. For example, if the noise floor is -80 dBm and the wireless client is receiving a signal of -65 dBm, then SNR = -65 – (-80) = 15 dB.

RSSI_SNR.jpg

Or we can find SNR from RSSI with this formula: SNR = RSSI – N, where N is the noise power.

A practical example to calculate SNR

Here is an example to tie together this information to come up with a very simple RF plan calculator for a single AP and a single client.
+ Access Point Power = 20 dBm
+ 50 foot antenna cable = – 3.35 dB Loss
+ External Access Point Antenna = + 5.5 dBi gain
+ Signal attenuation due to glass wall with metal frame = -6 dB
+ RSSI at WLAN Client = -75 dBm at 100ft from the AP
+ Noise level detected by WLAN Client = -85 dBm at 100ft from the AP

Based on the above, we can calculate the following information:
+ EIRP of the AP at source = 20 – 3.35 + 5.5 = 22.15 dBm
+ Transmit power as signal passes through glass wall = 22.15 – 6 = 16.15 dBm
+ SNR at Client = -75 – (-85) = 10 dB (the difference between Signal and Noise)
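
Note: the whole RF budget above can be checked with a few lines of Python. This is only our own illustration of the arithmetic; the variable names are ours.

ap_tx_power_dbm  = 20
cable_loss_db    = 3.35
antenna_gain_dbi = 5.5
wall_loss_db     = 6
rssi_at_client   = -75    # dBm
noise_floor      = -85    # dBm

eirp = ap_tx_power_dbm - cable_loss_db + antenna_gain_dbi
print(round(eirp, 2))                    # 22.15 dBm at the antenna
print(round(eirp - wall_loss_db, 2))     # 16.15 dBm after the glass wall
print(rssi_at_client - noise_floor)      # 10 -> an SNR of 10 dB at the client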

WPA2 and WPA3

WPA2 is classified into two versions to secure Wi-Fi networks:
+ WPA2-Personal uses a pre-shared key (PSK)
+ WPA2-Enterprise uses 802.1X (EAP) authentication with a RADIUS server; both versions encrypt traffic with the Advanced Encryption Standard (AES)

Similar to WPA2, WPA3 includes:
+ WPA3-Personal: applies to small-scale networks (individual and home networks). 
+ WPA3-Enterprise: applies to medium- and large-sized networks with higher requirements on network management, access control, and security, and uses more advanced security protocols to protect sensitive data of users.

WPA3 provides improvements to the general Wi-Fi encryption, thanks to Simultaneous Authentication of Equals (SAE) replacing the Pre-Shared Key (PSK) authentication method used in prior WPA versions. SAE enables individuals or home users to set Wi-Fi passwords that are easier to remember and provide the same security protection even if the passwords are not complex enough.

WPA3 requires the use of Protected Management Frames. These frames help protect against forging and eavesdropping.

Wi-Fi 6 (802.11ax)

Wi-Fi 6 is an IEEE standard for wireless local-area networks (WLANs) and the successor of 802.11ac. Wi-Fi 6 brings several crucial wireless enhancements for IT administrators when compared to Wi-Fi 5. The first significant change is the use of 2.4 GHz. Wi-Fi 5 was limited to only using 5 GHz. While 5 GHz is a ‘cleaner’ band of RF, it doesn't penetrate walls as well as 2.4 GHz and requires more battery power. For Wi-Fi driven IoT devices, 2.4 GHz will likely continue to be the band of choice for the foreseeable future.

Another critical difference between the two standards is the use of Orthogonal Frequency Division Multiple Access (OFDMA) and MU-MIMO. Wi-Fi 5 was limited to downlink only on MU-MIMO, where Wi-Fi 6 includes downlink and uplink. OFDMA, as referenced above, is also only available in Wi-Fi 6.

Question 1

Explanation

The Lightweight AP (LAP) can discover controllers through your domain name server (DNS). For the access point (AP) to do so, you must configure your DNS to return controller IP addresses in response to CISCO-LWAPP-CONTROLLER.localdomain, where localdomain is the AP domain name. When an AP receives an IP address and DNS information from a DHCP server, it contacts the DNS to resolve CISCO-CAPWAP-CONTROLLER.localdomain. When the DNS sends a list of controller IP addresses, the AP sends discovery requests to the controllers.

The AP will attempt to resolve the DNS name CISCO-CAPWAP-CONTROLLER.localdomain. When the AP is able to resolve this name to one or more IP addresses, the AP sends a unicast CAPWAP Discovery Message to the resolved IP address(es). Each WLC that receives the CAPWAP Discovery Request Message replies with a unicast CAPWAP Discovery Response to the AP.

Reference: https://www.cisco.com/c/en/us/support/docs/wireless/4400-series-wireless-lan-controllers/107606-dns-wlc-config.html
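
Note: if you want to verify from a workstation that your DNS really returns the controller addresses for that name, a quick Python check such as the sketch below can help. This is only our own illustration; the domain "example.local" is a placeholder for whatever domain name your DHCP server actually hands out to the APs.

import socket

name = "CISCO-CAPWAP-CONTROLLER.example.local"   # replace "example.local" with your AP domain

try:
    hostname, aliases, addresses = socket.gethostbyname_ex(name)
    print("WLC addresses returned by DNS:", addresses)
except socket.gaierror:
    print("DNS cannot resolve", name, "- the APs will fall back to other discovery methods")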

Question 2

Explanation

Signal to Noise Ratio (SNR) is defined as the ratio of the received signal power to the ambient (noise floor) energy present. To calculate the SNR value, we subtract the Noise Value from the Signal Value. A higher (more positive) SNR value is always better.

Here is an example to tie together this information to come up with a very simple RF plan calculator for a single AP and a single client.
+ Access Point Power = 20 dBm
+ 50 foot antenna cable = – 3.35 dB Loss
+ Signal attenuation due to glass wall with metal frame = -6 dB
+ External Access Point Antenna = + 5.5 dBi gain
+ RSSI at WLAN Client = -75 dBm at 100ft from the AP
+ Noise level detected by WLAN Client = -85 dBm at 100ft from the AP

Based on the above, we can calculate the following information.
+ EIRP of the AP at source = 20 – 3.35 + 5.5 = 22.15 dBm
+ Transmit power as signal passes through glass wall = 22.15 – 6 = 16.15 dBm
+ SNR at Client = -75 – (-85) = 10 dB (the difference between Signal and Noise)

Reference: https://www.cisco.com/c/en/us/td/docs/solutions/Enterprise/Borderless_Networks/Unified_Access/CMX/CMX_RFFund.html

Receive Signal Strength Indicator (RSSI) is a measurement of how well your device can hear a signal from an access point or router. It’s a value that is useful for determining if you have enough signal to get a good wireless connection.

EIRP tells you the actual transmit power of the antenna in milliwatts.

dBm is an abbreviation for “decibels relative to one milliwatt,” where one milliwatt (1 mW) equals 1/1000 of a watt. It follows the same scale as dB. Therefore 0 dBm = 1 mW, 30 dBm = 1 W, and -20 dBm = 0.01 mW
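
Note: the dBm-to-milliwatt relationship is easy to verify with a couple of Python functions. This is just our own illustration of the math (dbm_to_mw and mw_to_dbm are illustrative names).

import math

def dbm_to_mw(dbm):
    # dBm is referenced to 1 mW, so P(mW) = 10^(dBm/10)
    return 10 ** (dbm / 10)

def mw_to_dbm(mw):
    return 10 * math.log10(mw)

print(dbm_to_mw(0))     # 1.0 mW
print(dbm_to_mw(30))    # 1000.0 mW = 1 W
print(dbm_to_mw(-20))   # 0.01 mW
print(mw_to_dbm(1))     # 0.0 dBm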

Question 3

Explanation

The EAP-FAST protocol is a publicly accessible IEEE 802.1X EAP type that Cisco developed to support customers that cannot enforce a strong password policy and want to deploy an 802.1X EAP type that does not require digital certificates.

EAP-FAST is also designed for simplicity of deployment since it does not require a certificate on the wireless LAN client or on the RADIUS infrastructure yet incorporates a built-in provisioning mechanism.

Reference: https://www.cisco.com/c/en/us/support/docs/wireless-mobility/wireless-fixed/72788-CSSC-Deployment-Guide.html

Question 4

Explanation

If the clients roam between APs registered to different controllers and the client WLAN on the two controllers is on a different subnet, then it is called an inter-controller L3 roam.

In this situation the controllers also exchange mobility messages. The client database entry change is handled differently than in an L2 roam: instead of being moved, the entry is copied. The original controller marks the client entry as “Anchor” whereas the new controller marks the client entry as “Foreign”. The two controllers are now referred to as the “Anchor controller” and “Foreign controller” respectively. The client keeps its original IP address, and that is the real advantage.

Note: Inter-Controller (normally Layer 2) roaming occurs when a client roams between two APs registered to two different controllers, where each controller has an interface in the client subnet.

Question 5

Question 6

Explanation

According to the Meraki webpage, radar and rogue AP are two sources of Wireless Interference.

Interference between different WLANs occurs when the access points within range of each other are set to the same RF channel.

Note: Microwave ovens (not conventional oven) emit damaging interfering signals at up to 25 feet or so from an operating oven. Some microwave ovens emit radio signals that occupy only a third of the 2.4-GHz band, whereas others occupy the entire band.

Reference: https://www.ciscopress.com/articles/article.asp?p=2351131&seqNum=2

So answer D is not a correct answer.

Question 7

Explanation

This paragraph was taken from the link https://www.cisco.com/c/en/us/support/docs/wireless-mobility/wlan-security/69340-web-auth-config.html#c5:

“The next step is to configure the WLC for the Internal web authentication. Internal web authentication is the default web authentication type on WLCs.”

In step 4 of the link above, we will configure Security as described in this question. Therefore we can deduce this configuration is for Internal web authentication.

webauth_security_WLC.jpg

Question 8

Explanation

FlexConnect is a wireless solution for branch office and remote office deployments. It enables customers to configure and control access points in a branch or remote office from the corporate office through a wide area network (WAN) link without deploying a controller in each office.

The FlexConnect access points can switch client data traffic locally and perform client authentication locally when their connection to the controller is lost. When they are connected to the controller, they can also send traffic back to the controller. In the connected mode, the FlexConnect access point can also perform local authentication.

Reference: https://www.cisco.com/c/en/us/td/docs/wireless/controller/7-2/configuration/guide/cg/cg_flexconnect.html

Question 9

Explanation

Deploying WPA2-Enterprise requires a RADIUS server, which handles the task of authenticating network user access. The actual authentication process is based on the 802.1X policy and comes in several different systems labelled EAP. Because each device is authenticated before it connects, a personal, encrypted tunnel is effectively created between the device and the network.

Reference: https://www.securew2.com/solutions/wpa2-enterprise-and-802-1x-simplified/

Question 10

Explanation

802.11r Fast Transition (FT) Roaming is an amendment to the IEEE 802.11 standards. It is a new concept for roaming: the initial handshake with the new AP occurs before the client roams to the target AP, which is why it is called Fast Transition. 802.11r provides two methods of roaming:

+ Over-the-air: With this type of roaming, the client communicates directly with the target AP using IEEE 802.11 authentication with the Fast Transition (FT) authentication algorithm.
+ Over-the-DS (distribution system): With this type of roaming, the client communicates with the target AP through the current AP. The communication between the client and the target AP is carried in FT action frames between the client and the current AP and is then sent through the controller.

However, neither of these methods deals with legacy clients.

The 802.11k allows 11k capable clients to request a neighbor report containing information about known neighbor APs that are candidates for roaming.

Reference: https://www.cisco.com/c/en/us/td/docs/wireless/controller/technotes/80211r-ft/b-80211r-dg.html

IEEE 802.11v is an amendment to the IEEE 802.11 standard which describes numerous enhancements to wireless network management. One such enhancement is Network assisted Power Savings which helps clients to improve the battery life by enabling them to sleep longer. Another enhancement is Network assisted Roaming which enables the WLAN to send requests to associated clients, advising the clients as to better APs to associate to. This is useful for both load balancing and in directing poorly connected clients.

Reference: https://www.cisco.com/c/en/us/td/docs/wireless/controller/9800/config-guide/b_wl_16_10_cg/802-11v.pdf

Cisco 802.11r supports three modes:
+ Pure mode: only allows 802.11r client to connect
+ Mixed mode: allows both clients that do and do not support FT to connect
+ Adaptive mode: does not advertise the FT AKM at all, but will use FT when supported clients connect

Therefore “Adaptive mode” is the best answer here.

Question 11

Explanation

Link aggregation (LAG) is a partial implementation of the 802.3ad port aggregation standard. It bundles all of the controller’s distribution system ports into a single 802.3ad port channel.

Restriction for Link aggregation:

+ LAG requires the EtherChannel to be configured for ‘mode on’ on both the controller and the Catalyst switch.

Reference: https://www.cisco.com/c/en/us/td/docs/wireless/controller/7-5/configuration-guide/b_cg75/b_cg75_chapter_0100010.html

Question 12

Question 13

Explanation

Mobility Express is the ability to use an access point (AP) as a controller instead of a dedicated WLAN controller. This solution is only suitable for small to midsize deployments or multi-site branch locations where you might not want to invest in a dedicated WLC. A Mobility Express WLC can support up to 100 APs. A Mobility Express WLC also uses CAPWAP to communicate with its APs.

Note: Local mode is the most common mode that an AP operates in. This is also the default mode. In local mode, the LAP maintains a CAPWAP (or LWAPP) tunnel to its associated controller.

Question 14

Explanation

A Cisco lightweight wireless AP needs to be paired with a WLC to function.

An AP must be very diligent to discover any controllers that it can join—all without any preconfiguration on your part. To accomplish this feat, several methods of discovery are used. The goal of discovery is just to build a list of live candidate controllers that are available, using the following methods:
+ Prior knowledge of WLCs
+ DHCP and DNS information to suggest some controllers (DHCP Option 43)
+ Broadcast on the local subnet to solicit controllers

Reference: CCNP and CCIE Enterprise Core ENCOR 350-401 Official Cert Guide

If you do not tell the LAP where the controller is via DHCP option 43, DNS resolution of “Cisco-capwap-controller.local_domain”, or statically configure it, the LAP does not know where in the network to find the management interface of the controller.

In addition to these methods, the LAP does automatically look on the local subnet for controllers with a 255.255.255.255 local broadcast.

Reference: https://www.cisco.com/c/en/us/support/docs/wireless/5500-series-wireless-controllers/119286-lap-notjoin-wlc-tshoot.html

Question 15

Explanation

A patch antenna, in its simplest form, is just a single rectangular (or circular) conductive plate that is spaced above a ground plane. Patch antennas are attractive due to their low profile and ease of fabrication.

The azimuth and elevation plane patterns are derived by simply slicing through the 3D radiation pattern. In this case, the azimuth plane pattern is obtained by slicing through the x-z plane, and the elevation plane pattern is formed by slicing through the y-z plane. Note that there is one main lobe that is radiated out from the front of the antenna. There are three back lobes in the elevation plane (in this case), the strongest of which happens to be 180 degrees behind the peak of the main lobe, establishing the front-to-back ratio at about 14 dB. That is, the gain of the antenna 180 degrees behind the peak is 14 dB lower than the peak gain.

patch_atenna.jpg

Again, it doesn’t matter if these patterns are shown pointing up, down, to the left or to the right. That is usually an artifact of the measurement system. A patch antenna radiates its energy out from the front of the antenna. That will establish the true direction of the patterns.

Reference: https://www.cisco.com/c/en/us/products/collateral/wireless/aironet-antennas-accessories/prod_white_paper0900aecd806a1a3e.html

Wireless Questions 2

January 30th, 2021 digitaltut 3 comments

Question 1

Explanation

A prerequisite for configuring Mobility Groups is “All controllers must be configured with the same virtual interface IP address”. If all the controllers within a mobility group are not using the same virtual interface, inter-controller roaming may appear to work, but the handoff does not complete, and the client loses connectivity for a period of time. -> Answer B is correct.

Reference: https://www.cisco.com/c/en/us/td/docs/wireless/controller/8-5/config-guide/b_cg85/mobility_groups.html

Answer A is not correct because when the client moves to a different mobility group (with a different mobility group name), that client will either stay connected (provided that the newly connected controller already has information about this client in its mobility list) or be dropped (if the newly connected controller does not have information about this client in its mobility list). For more information please read the note below.

Note:

A mobility group is a set of controllers, identified by the same mobility group name, that defines the realm of seamless roaming for wireless clients. By creating a mobility group, you can enable multiple controllers in a network to dynamically share information and forward data traffic when inter-controller or inter-subnet roaming occurs. Controllers in the same mobility group can share the context and state of client devices as well as their list of access points so that they do not consider each other’s access points as rogue devices.

different_mobility_groups.jpg

Let’s take an example:

The controllers in the ABC mobility group share access point and client information with each other. The controllers in the ABC mobility group do not share the access point or client information with the XYZ controllers, which are in a different mobility group. Therefore if a client from ABC mobility group moves to XYZ mobility group, and the new connected controller does not have information about this client in its mobility list, that client will be dropped.

Note: Clients may roam between access points in different mobility groups if the controllers are included in each other’s mobility lists.

Question 2

Explanation

As these wireless networks grow, especially in remote facilities where IT professionals may not always be on site, it becomes even more important to be able to quickly identify and resolve potential connectivity issues, ideally before the users complain or notice connectivity degradation.
To address these issues, Cisco created the Wireless Service Assurance platform and a new AP mode called “sensor” mode. Cisco's Wireless Service Assurance platform has three components, namely Wireless Performance Analytics, Real-time Client Troubleshooting, and Proactive Health Assessment. Using a supported AP or dedicated sensor, the device can function much like a WLAN client would, associating and identifying client connectivity issues within the network in real time without requiring IT staff or a technician to be on site.

Reference: https://content.cisco.com/chapter.sjs?uri=/searchable/chapter/content/dam/en/us/td/docs/wireless/controller/technotes/8-5/b_Cisco_Aironet_Sensor_Deployment_Guide.html.xml

Question 3

Explanation

Mobility, or roaming, is a wireless LAN client’s ability to maintain its association seamlessly from one access point to another securely and with as little latency as possible. Three popular types of client roaming are:

Intra-Controller Roaming: Each controller supports same-controller client roaming across access points managed by the same controller. This roaming is transparent to the client as the session is sustained, and the client continues using the same DHCP-assigned or client-assigned IP address.

Inter-Controller Roaming: Multiple-controller deployments support client roaming across access points managed by controllers in the same mobility group and on the same subnet. This roaming is also transparent to the client because the session is sustained and a tunnel between controllers allows the client to continue using the same DHCP- or client-assigned IP address as long as the session remains active.

Inter-Subnet Roaming: Multiple-controller deployments support client roaming across access points managed by controllers in the same mobility group on different subnets. This roaming is transparent to the client because the session is sustained and a tunnel between the controllers allows the client to continue using the same DHCP-assigned or client-assigned IP address as long as the session remains active.

Reference: https://www.cisco.com/c/en/us/td/docs/wireless/controller/7-4/configuration/guides/consolidated/b_cg74_CONSOLIDATED/b_cg74_CONSOLIDATED_chapter_01100.html

Question 4

Explanation

The AP will attempt to resolve the DNS name CISCO-CAPWAP-CONTROLLER.localdomain. When the AP is able to resolve this name to one or more IP addresses, the AP sends a unicast CAPWAP Discovery Message to the resolved IP address(es). Each WLC that receives the CAPWAP Discovery Request Message replies with a unicast CAPWAP Discovery Response to the AP.

Reference: https://www.cisco.com/c/en/us/support/docs/wireless/4400-series-wireless-lan-controllers/107606-dns-wlc-config.html

Question 5

Question 6

Explanation

These message logs inform us that the radio channel has been reset (and the AP must be down briefly). With dynamic channel assignment (DCA), the radios can frequently switch from one channel to another, but this also causes disruption. The default DCA interval is 10 minutes, which matches the timing of the message logs. By increasing the DCA interval, we can reduce the number of times our users are disconnected by radio channel changes.

Question 7

Question 8

Question 9

Explanation

A Yagi antenna is formed by driving a simple antenna, typically a dipole or dipole-like antenna, and shaping the beam using a well-chosen series of non-driven elements whose length and spacing are tightly controlled.

Yagi_antenna_model.jpg

Reference: https://www.cisco.com/c/en/us/products/collateral/wireless/aironet-antennas-accessories/prod_white_paper0900aecd806a1a3e.html

Question 10

Explanation

Once you know the complete combination of transmitter power level, the length of cable, and the antenna gain, you can figure out the actual power level that will be radiated from the antenna. This is known as the effective isotropic radiated power (EIRP), measured in dBm.

EIRP is a very important parameter because it is regulated by governmental agencies in most countries. In those cases, a system cannot radiate signals higher than a maximum allowable EIRP. To find the EIRP of a system, simply add the transmitter power level to the antenna gain and subtract the cable loss.

EIRP_wireless.jpg

EIRP = Tx Power – Tx Cable + Tx Antenna

Suppose a transmitter is configured for a power level of 10 dBm (10 mW). A cable with 5-dB loss connects the transmitter to an antenna with an 8-dBi gain. The resulting EIRP of the system is 10 dBm – 5 dB + 8 dBi, or 13 dBm.

You might notice that the EIRP is made up of decibel-milliwatt (dBm), dB relative to an isotropic antenna (dBi), and decibel (dB) values. Even though the units appear to be different, you can safely combine them because they are all in the dB “domain”.

Reference: CCNA Wireless 640-722 Official Cert Guide

Wireless Questions 3

January 30th, 2021 digitaltut 20 comments

Question 1

Explanation

Output power is measured in mW (milliwatts). A milliwatt is equal to one thousandth (10^-3) of a watt.

Question 2

Question 3

Explanation

+ From the output of WLC “show interface summary”, we learned that the WLC has four VLANs: 999, 14, 15 and 16.
+ From the “show ap config general FlexAP1” output, we learned that FlexConnect AP has four VLANs: 10, 11, 12 and 13. Also the WLAN of FlexConnect AP is mapped to VLAN 10 (from the line “WLAN 1: …… 10 (AP-Specific)).

From the reference at: https://www.cisco.com/c/en/us/td/docs/wireless/controller/8-1/Enterprise-Mobility-8-1-Design-Guide/Enterprise_Mobility_8-1_Deployment_Guide/ch7_HREA.html

FlexConnect VLAN Central Switching Summary
Traffic flow on WLANs configured for Local Switching when FlexConnect APs are in connected mode is as follows:

+ If the VLAN is returned as one of the AAA attributes and that VLAN is not present in the FlexConnect AP database, traffic will switch centrally and the client is assigned this VLAN/Interface returned from the AAA server provided that the VLAN exists on the WLC. (-> as VLAN 15 exists on the WLC so the client in connected mode would be assigned this VLAN -> Answer G is correct)
+ If the VLAN is returned as one of the AAA attributes and that VLAN is not present in the FlexConnect AP database, traffic will switch centrally. If that VLAN is also not present on the WLC, the client will be assigned a VLAN/Interface mapped to a WLAN on the WLC.
+ If the VLAN is returned as one of the AAA attributes and that VLAN is present in the FlexConnect AP database, traffic will switch locally.
+ If the VLAN is not returned from the AAA server, the client is assigned a WLAN mapped VLAN on that FlexConnect AP and traffic is switched locally.

Traffic flow on WLANs configured for Local Switching when FlexConnect APs are in standalone mode is as follows:

+ If the VLAN returned by the AAA server is not present in the FlexConnect AP database, the client will be put on a default VLAN (that is, a WLAN mapped VLAN on a FlexConnect AP) (-> Therefore answer B is correct). When the AP connects back, this client is de-authenticated (-> Therefore answer C is correct) and will switch traffic centrally.

Question 4

Question 5

Explanation

Once you know the complete combination of transmitter power level, the length of cable, and the antenna gain, you can figure out the actual power level that will be radiated from the antenna. This is known as the effective isotropic radiated power (EIRP), measured in dBm.

EIRP is a very important parameter because it is regulated by governmental agencies in most countries. In those cases, a system cannot radiate signals higher than a maximum allowable EIRP. To find the EIRP of a system, simply add the transmitter power level to the antenna gain and subtract the cable loss.

EIRP_wireless.jpg

EIRP = Tx Power – Tx Cable + Tx Antenna

Suppose a transmitter is configured for a power level of 10 dBm (10 mW). A cable with 5-dB loss connects the transmitter to an antenna with an 8-dBi gain. The resulting EIRP of the system is 10 dBm – 5 dB + 8 dBi, or 13 dBm.

You might notice that the EIRP is made up of decibel-milliwatt (dBm), dB relative to an isotropic antenna (dBi), and decibel (dB) values. Even though the units appear to be different, you can safely combine them because they are all in the dB “domain”.

Reference: CCNA Wireless 640-722 Official Cert Guide

Question 6

Explanation

This is called Inter Controller-L2 Roaming. Inter-Controller (normally layer 2) roaming occurs when a client roam between two APs registered to two different controllers, where each controller has an interface in the client subnet. In this instance, controllers exchange mobility control messages (over UDP port 16666) and the client database entry is moved from the original controller to the new controller.

Question 7

Explanation

Windows can actually block your WiFi signal. How? Because the signals will be reflected by the glass.

Some new windows have transparent films that can block certain wave types, and this can make it harder for your WiFi signal to pass through.

Tinted glass is another problem for the same reasons. They sometimes contain metallic films that can completely block out your signal.
Mirrors, like windows, can reflect your signal. They’re also a source of electromagnetic interference because of their metal backings.

Reference: https://dis-dot-dat.net/what-materials-can-block-a-wifi-signal/

An incandescent light bulb, incandescent lamp or incandescent light globe is an electric light with a wire filament heated until it glows. Wi-Fi operates in the gigahertz microwave band. The FCC has strict regulations on RFI (radio frequency interference) from all sorts of things, including light bulbs -> Incandescent lights do not interfere with Wi-Fi networks.

Note:

+ Many baby monitors operate at 900MHz and won’t interfere with Wi-Fi, which uses the 2.4GHz band.
+ DECT cordless phone 6.0 is designed to eliminate wifi interference by operating on a different frequency. There is essentially no such thing as DECT wifi interference.

Question 8

Explanation

When the primary controller (WLC-1) goes down, the APs automatically get registered with the secondary controller (WLC-2). The APs register back to the primary controller when the primary controller comes back on line.

Reference: https://www.cisco.com/c/en/us/support/docs/wireless-mobility/wireless-lan-wlan/69639-wlc-failover.html

Question 9

Question 10

Explanation

Directional antennas
Directional antennas come in many different styles and shapes. An antenna does not offer any added power to the signal; it simply redirects the energy it receives from the transmitter. By redirecting this energy, it has the effect of providing more energy in one direction and less energy in all other directions. As the gain of a directional antenna increases, the angle of radiation usually decreases, providing a greater coverage distance but with a reduced coverage angle. Directional antennas include patch antennas and parabolic dishes. Parabolic dishes have a very narrow RF energy path, and the installer must be accurate in aiming these types of antennas at each other.

directionl_patch_antenna.jpg

Directional patch antenna

Reference: https://www.cisco.com/c/en/us/products/collateral/wireless/aironet-antennas-accessories/product_data_sheet09186a008008883b.html

Omnidirectional antennas

An omnidirectional antenna is designed to provide a 360-degree radiation pattern. This type of antenna is used when coverage in all directions from the antenna is required. The standard 2.14-dBi “rubber duck” is one style of omnidirectional antenna.

ominidirectionl_antenna_direction.jpg

Omnidirectional antenna

-> Therefore an omnidirectional antenna is best suited for a high-density wireless network in a lecture hall.

Question 11

Explanation

We will have the answer from this paragraph:

“TLV values for the Option 43 suboption: Type + Length + Value. Type is always the suboption code 0xf1. Length is the number of controller management IP addresses times 4 in hex. Value is the IP address of the controller listed sequentially in hex. For example, suppose there are two controllers with management interface IP addresses, 192.168.10.5 and 192.168.10.20. The type is 0xf1. The length is 2 * 4 = 8 = 0x08. The IP addresses translates to c0a80a05 (192.168.10.5) and c0a80a14 (192.168.10.20). When the string is assembled, it yields f108c0a80a05c0a80a14. The Cisco IOS command that is added to the DHCP scope is option 43 hex f108c0a80a05c0a80a14.”

Reference: https://www.cisco.com/c/en/us/support/docs/wireless-mobility/wireless-lan-wlan/97066-dhcp-option-43-00.html

Therefore in this question the option 43 value in hex should be “F104.AC10.3205” (the management IP address 172.16.50.5 in hex is AC.10.32.05).
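
Note: the TLV assembly described in the quote above is simple enough to script. The Python sketch below is only our own illustration (build_option43 is an illustrative helper name); it reproduces both the two-controller example from the Cisco document and the single-controller value used in this question.

import ipaddress

def build_option43(controller_ips):
    # Type 0xF1, Length = 4 bytes per controller IP, Value = the IPs back to back in hex
    value = b"".join(ipaddress.IPv4Address(ip).packed for ip in controller_ips)
    return bytes([0xF1, len(value)]) + value

print(build_option43(["192.168.10.5", "192.168.10.20"]).hex())  # f108c0a80a05c0a80a14
print(build_option43(["172.16.50.5"]).hex())                    # f104ac103205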

Question 12

Explanation

At first, we thought “greater severity” means higher priority (or severity with smaller value) in the Syslog level table below:

Level Keyword Description
0 emergencies System is unusable
1 alerts Immediate action is needed
2 critical Critical conditions exist
3 errors Error conditions exist
4 warnings Warning conditions exist
5 notification Normal, but significant, conditions exist
6 informational Informational messages
7 debugging Debugging messages

For example, levels 0 to 4 are “greater severity” than level 5. But according to this Cisco link:

“If you set a syslog level, only those messages whose severity is equal to or less than that level are sent to the syslog servers. For example, if you set the syslog level to Notifications (severity level 5), only those messages whose severity is between 0 and 5 are sent to the syslog servers.”

The correct answer for this question should be “equal or less severity”.
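
Note: to see what "equal or less severity" means in practice, here is a tiny Python sketch of our own. It filters a few sample messages the same way the controller does when the syslog level is set to Notifications (5); the level-7 entry is a made-up placeholder, not a real Cisco mnemonic.

configured_level = 5          # Notifications; severity 0 is the most severe, 7 the least

messages = [
    ("%LINK-3-UPDOWN", 3),
    ("%LINEPROTO-5-UPDOWN", 5),
    ("%EXAMPLE-7-DEBUG", 7),  # placeholder debugging message for the demo
]

# Only messages whose severity is equal to or less than the configured level are sent
sent = [name for name, severity in messages if severity <= configured_level]
print(sent)   # the severity-7 debugging message is filtered out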

HSRP & VRRP Questions

January 29th, 2021 digitaltut 40 comments

If you are not sure about HSRP, please read our HSRP tutorial (on 9tut.com).

Quick VRRP overview:

+ is IETF RFC 3768 standard
+ supports maximum 255 groups
+ 1 active and some backups
+ Use multicast address 224.0.0.18
+ Tracking via objects
+ 1 sec hello timer, 3 sec hold time
+ Authentication: plaintext or MD5 authentication
+ Preemption is enabled by default
+ Virtual IP address can be the same as physical IP address (which is running VRRP)
+ Default priority is 100
+ Only VRRPv3 supports both IPv4 and IPv6

Question 1

Explanation

When you change the HSRP version, Cisco NX-OS reinitializes the group because it now has a new virtual MAC address. HSRP version 1 uses the MAC address range 0000.0C07.ACxx while HSRP version 2 uses the MAC address range 0000.0C9F.F0xx.

HSRP supports interface tracking, which allows you to specify another interface on the router for the HSRP process to monitor in order to alter the HSRP priority for a given group.

Question 2

Explanation

If you change the version for existing groups, Cisco NX-OS reinitializes HSRP for those groups because the virtual MAC address changes.

Reference: https://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus3548/sw/unicast/503_A1_1/l3_nx-os/l3_hsrp.html

Question 3

Question 4

Explanation

The main disadvantage of HSRP and VRRP is that only one gateway is elected to be the active gateway and used to forward traffic whilst the rest are unused until the active one fails. Gateway Load Balancing Protocol (GLBP) is a Cisco proprietary protocol that performs a similar function to HSRP and VRRP, but it supports load balancing among members in a GLBP group.

Note: GLBP is not a topic of this exam, so we are not sure why this question is still included!

Question 5

Explanation

HSRP consists of 6 states:

State Description
Initial This is the beginning state. It indicates HSRP is not running. It happens when the configuration changes or the interface is first turned on
Learn The router has not determined the virtual IP address and has not yet seen an authenticated hello message from the active router. In this state, the router still waits to hear from the active router.
Listen The router knows both IP and MAC address of the virtual router but it is not the active or standby router. For example, if there are 3 routers in HSRP group, the router which is not in active or standby state will remain in listen state.
Speak The router sends periodic HSRP hellos and participates in the election of the active or standby router.
Standby In this state, the router monitors hellos from the active router and it will take the active state when the current active router fails (no packets heard from active router)
Active The router forwards packets that are sent to the HSRP group. The router also sends periodic hello messages

Please notice that not all routers in a HSRP group go through all states above. In a HSRP group, only one router reaches active state and one router reaches standby state. Other routers will stop at listen state.

Question 6

Explanation

A VRRP router receiving a packet with the TTL not equal to 255 must discard the packet (only one possible hop) -> Answer B is correct.

Currently there are three VRRP versions which are versions 1, 2 and 3 -> Answer E is correct.

VRRP uses multicast address 224.0.0.18 and supports plaintext or MD5 authentication.

Question 7

Question 8

Explanation

The main disadvantage of HSRP and VRRP is that only one gateway is elected to be the active gateway and used to forward traffic whilst the rest are unused until the active one fails. Gateway Load Balancing Protocol (GLBP) is a Cisco proprietary protocol that performs a similar function to HSRP and VRRP, but it supports load balancing among members in a GLBP group.

Question 9

Explanation

SSO HSRP alters the behavior of HSRP when a device with redundant Route Processors (RPs) is configured for stateful switchover (SSO) redundancy mode. When an RP is active and the other RP is standby, SSO enables the standby RP to take over if the active RP fails.

The SSO HSRP feature enables the Cisco IOS HSRP subsystem software to detect that a standby RP is installed and the system is configured in SSO redundancy mode. Further, if the active RP fails, no change occurs to the HSRP group itself and traffic continues to be forwarded through the current active gateway device.

Reference: https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/ipapp_fhrp/configuration/15-s/fhp-15-s-book/fhp-hsrp-sso.html

Question 10

Explanation

In fact, VRRP has the preemption enabled by default so we don’t need the “vrrp 10 preempt” command. The default priority is 100 so we don’t need to configure it either. But notice that the correct command to configure the virtual IP address for the group is “vrrp 10 ip {ip-address}” (not “vrrp group 10 ip …”) and this command does not include a subnet mask.

Question 11

Explanation

In fact, when Edge-01 goes down, Edge-02 will not receive “Hello” messages from Edge-01 so it will promote itself to active state automatically. Therefore no answers here are correct. But if we have to choose the best answer then it would be “preempt” command as the “preempt” command enables the HSRP router with the highest priority to immediately become the active router.

Maybe this question wanted to ask if the link to Core router goes down, which command can be used to take over the forwarding role.

HSRP & VRRP Questions 2

January 29th, 2021 digitaltut 6 comments

Question 1

Explanation

The “virtual MAC address” is 0000.0c07.acXX (XX is the hexadecimal group number) so it is using HSRPv1.

Note: HSRP Version 2 uses a new MAC address which ranges from 0000.0C9F.F000 to 0000.0C9F.FFFF.

Question 2

Question 3

Explanation

From the output exhibit, we notice that the key-string of R1 is “Cisco123!” (letter “C” is in capital) while that of R2 is “cisco123!”. This causes a mismatch in the authentication so we have to fix their key-strings.

Note:

key-string [encryption-type] text-string: Configures the text string for the key. The text-string argument is alphanumeric, case-sensitive, and supports special characters.

Reference: https://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/6-x/security/configuration/guide/b_Cisco_Nexus_9000_Series_NX-OS_Security_Configuration_Guide/b_Cisco_Nexus_9000_Series_NX-OS_Security_Configuration_Guide_chapter_01111.pdf

Question 4

Explanation

In HSRP version 1, group numbers are restricted to the range from 0 to 255. HSRP version 2 expands the group number range from 0 to 4095 -> We must configure HSRP group 300 so we must change to HSRP version 2.

Network Assurance Questions

January 28th, 2021 digitaltut 4 comments

Question 1

Explanation

Syslog levels are listed below:

Level Keyword Description
0 emergencies System is unusable
1 alerts Immediate action is needed
2 critical Critical conditions exist
3 errors Error conditions exist
4 warnings Warning conditions exist
5 notification Normal, but significant, conditions exist
6 informational Informational messages
7 debugging Debugging messages

Number “5” in “%LINEPROTO-5-UPDOWN” is the severity level of this message, so in this case it is “notification”.
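
Note: pulling the severity out of a message name is a one-line regular expression; the Python sketch below is only our own illustration (the pattern and the KEYWORDS list are ours).

import re

KEYWORDS = ["emergencies", "alerts", "critical", "errors",
            "warnings", "notification", "informational", "debugging"]

def severity_of(message):
    # Cisco messages are formatted %FACILITY-SEVERITY-MNEMONIC: description
    match = re.match(r"%\w+-(\d)-\w+", message)
    return int(match.group(1)) if match else None

sev = severity_of("%LINEPROTO-5-UPDOWN: Line protocol on Interface Gi0/1, changed state to up")
print(sev, KEYWORDS[sev])   # 5 notification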

Question 2

Explanation

The TCP port 6514 has been allocated as the default port for syslog over Transport Layer Security (TLS).

Reference: https://tools.ietf.org/html/rfc5425

Question 3

Explanation

The goal of the Cyber Threat Defense solution is to introduce a design and architecture that can help facilitate the discovery, containment, and remediation of threats once they have penetrated into the network interior.

Cisco Cyber Threat Defense version 2.0 makes use of several solutions to accomplish its objectives:

* NetFlow and the Lancope StealthWatch System
– Broad visibility
– User and flow context analysis
– Network behavior and anomaly detection
– Incident response and network forensics

* Cisco FirePOWER and FireSIGHT
– Real-time threat management
– Deeper contextual visibility for threats bypassing the perimeters
– URL control

* Advanced Malware Protection (AMP)
– Endpoint control with AMP for Endpoints
– Malware control with AMP for networks and content

* Content Security Appliances and Services
– Cisco Web Security Appliance (WSA) and Cloud Web Security (CWS)
– Dynamic threat control for web traffic
– Outbound URL analysis and data transfer controls
– Detection of suspicious web activity
– Cisco Email Security Appliance (ESA)
– Dynamic threat control for email traffic
– Detection of suspicious email activity

* Cisco Identity Services Engine (ISE)
– User and device identity integration with Lancope StealthWatch
– Remediation policy actions using pxGrid

Reference: https://www.cisco.com/c/dam/en/us/td/docs/security/network_security/ctd/ctd2-0/design_guides/ctd_2-0_cvd_guide_jul15.pdf

IP SLA Questions

January 28th, 2021 digitaltut 21 comments

Question 1

Explanation

We cannot use the LAN (inside) interface to communicate to the ISP as the ISP surely does not know how to return the reply -> Answer A is correct.

Answer B is not correct as we must track the destination of the primary link, not backup link.

In this question, R1 pings R2 via its LAN Fa0/0 interface so maybe R1 (which is an ISP) will not know how to reply back as an ISP usually does not configure a route to a customer’s LAN -> C is correct.

There is no problem with the default route -> D is not correct.

Note:

This tutorial can help you revise IP SLA tracking topic: http://www.firewall.cx/cisco-technical-knowledgebase/cisco-routers/813-cisco-router-ipsla-basic.html and http://www.ciscozine.com/using-ip-sla-to-change-routing/

Note: Maybe some of us will wonder why there are these two commands:

R1(config)#ip route 0.0.0.0 0.0.0.0 172.20.20.2 track 10
R1(config)#no ip route 0.0.0.0 0.0.0.0 172.20.20.2

In fact the two commands:

ip route 0.0.0.0 0.0.0.0 172.20.20.2 track 10
ip route 0.0.0.0 0.0.0.0 172.20.20.2

are different. These two static routes can co-exist in the routing table. Therefore if the track object goes down, the first route will be removed from the routing table, but the second one still exists, so the backup path is never preferred. That is why we have to remove the second one.

Question 2

Explanation

The “ip sla 10” will ping the IP 192.168.10.20 every 3 seconds to make sure the connection is still up. We can configure an EEM applet if there is any problem with this IP SLA via the command “event track 10 state down”.

Reference: https://www.theroutingtable.com/ip-sla-and-cisco-eem/

The syntax of tracking command is: track object-number ip sla operation-number [state | reachability]

Tracking        Return Code                  Track State
State           OK                           Up
State           (all other return codes)     Down
Reachability    OK or over threshold         Up
Reachability    (all other return codes)     Down

Both “state” and “reachability” return the track state of “up” or “down” only (no “unreachable” state) so we have to configure either “up” or “down” in “event track 10 state …” command.

Note:
+ There is no “event sla …” command.
+ There are only three states of an “event track”, which are “any”, “down” and “up”

Router(config-applet)#event track 10 state ?
  any   Any state
  down  Down state
  up    Up state

Question 3

Explanation

IP SLAs allows Cisco customers to analyze IP service levels for IP applications and services, to increase productivity, to lower operational costs, and to reduce the frequency of network outages. IP SLAs uses active traffic monitoring–the generation of traffic in a continuous, reliable, and predictable manner–for measuring network performance.

Being Layer-2 transport independent, IP SLAs can be configured end-to-end over disparate networks to best reflect the metrics that an end-user is likely to experience.

Reference: https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/ipsla/configuration/15-mt/sla-15-mt-book/sla_overview.html

Question 4

Explanation

Cisco IOS IP SLA Responder is a Cisco IOS Software component whose functionality is to respond to Cisco IOS IP SLA request packets. The IP SLA source sends control packets before the operation starts to establish a connection to the responder. Once the control packet is acknowledged, test packets are sent to the responder. The responder inserts a time-stamp when it receives a packet and factors out the destination processing time and adds time-stamps to the sent packets. This feature allows the calculation of unidirectional packet loss, latency, and jitter measurements with the kind of accuracy that is not possible with ping or other dedicated probe testing.

Reference: https://www.cisco.com/en/US/technologies/tk869/tk769/technologies_white_paper0900aecd806bfb52.html

The IP SLAs responder is a component embedded in the destination Cisco device that allows the system to anticipate and respond to IP SLAs request packets. The responder provides accurate measurements without the need for dedicated probes.

Reference: https://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst4500/12-2/46sg/configuration/guide/Wrapper-46SG/swipsla.html

UDP Jitter measures the delay, delay variation (jitter), corruption, misordering and packet loss by generating periodic UDP traffic. This operation always requires an IP SLA responder.

Reference: https://www.ciscolive.com/c/dam/r/ciscolive/us/docs/2017/pdf/BRKNMS-3043.pdf

NetFlow Questions

January 28th, 2021 digitaltut 10 comments

Note: If you are not sure about NetFlow, please read our NetFlow tutorial.

Question 1

Explanation

The “mode random one-out of 100” command specifies that sampling uses the random mode and only takes one sample out of every 100 packets.
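
Note: the sketch below is our own small simulation of what "one random packet out of every window of 100" works out to; it is not Cisco code, just an illustration of the resulting sampling rate.

import random

WINDOW = 100
total_packets = 100_000

sampled = []
for start in range(0, total_packets, WINDOW):
    # the router picks one packet at a random position inside each 100-packet window
    sampled.append(start + random.randrange(WINDOW))

print(len(sampled), "of", total_packets, "packets sampled")   # 1000, i.e. a fixed 1-in-100 rate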

Question 2

Explanation

Option A is not correct because “flow FLOW-MONITOR-1” cannot be under “sampler SAMPLER-1”.
Option B is not correct because we remove “mode random 1 out-of 2”.
Option D is not correct because “sampler SAMPLER-1” cannot be put under “flow monitor …”
Option C is correct and the command “ip flow monitor …” assigns the flow monitor and the flow sampler that you created to the interface to enable sampling.

Reference: https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/fnetflow/configuration/xe-3se/3850/use-fnflow-redce-cpu.html

Question 3

Explanation

To configure multiple NetFlow export destinations to a router, use the following commands in global configuration mode:

Step 1: Router(config)# ip flow-export destination ip-address udp-port
Step 2: Router(config)# ip flow-export destination ip-address udp-port

The following example enables the exporting of information in NetFlow cache entries:

ip flow-export destination 10.42.42.1 9991
ip flow-export destination 10.0.101.254 1999

Reference: https://www.cisco.com/c/en/us/td/docs/ios/12_0s/feature/guide/12s_mdnf.html

Question 4

SPAN Questions

January 28th, 2021 digitaltut No comments

Question 1

Explanation

SW1 has been configured with the following commands:

SW1(config)#monitor session 1 source remote vlan 50
SW1(config)#monitor session 2 source interface fa0/14
SW1(config)#monitor session 2 destination interface fa0/15

The session 1 on SW1 was configured for Remote SPAN (RSPAN) while session 2 was configured for local SPAN. For RSPAN we need to configure the destination port to complete the configuration.

Note: In fact we cannot create such a session like session 1 because if we only configure “Source RSPAN VLAN 50” (with the command “monitor session 1 source remote vlan 50”) then we will receive a “Type: Remote Source Session” (not “Remote Destination Session”).

Question 2

Explanation

RSPAN consists of at least one RSPAN source session, an RSPAN VLAN, and at least one RSPAN destination session. You separately configure RSPAN source sessions and RSPAN destination sessions on different network devices. To configure an RSPAN source session on a device, you associate a set of source ports or source VLANs with an RSPAN VLAN.

Traffic monitoring in a SPAN session has these restrictions:
+ Sources can be ports or VLANs, but you cannot mix source ports and source VLANs in the same session.

Reference: https://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst3750x_3560x/software/release/12-2_55_se/configuration/guide/3750xscg/swspan.html

Therefore in this question, we cannot configure a source VLAN because we configured source ports for RSPAN session 1 already.

Question 3

Question 4

Question 5

Explanation

Encapsulated Remote SPAN (ERSPAN), as the name says, uses generic routing encapsulation (GRE) to carry all captured traffic and allows it to be extended across Layer 3 domains.

Question 6

Explanation

The traffic for each RSPAN session is carried over a user-specified RSPAN VLAN that is dedicated for that RSPAN session in all participating switches -> This VLAN can be considered a special VLAN type -> Answer C is correct.

Reference: https://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst3750x_3560x/software/release/12-2_55_se/configuration/guide/3750xscg/swspan.html

We can configure multiple RSPAN sessions on a switch at a time, then continue configuring multiple RSPAN sessions on the other switch without any problem -> Answer B is not correct.

This is how to configure Remote SPAN (RSPAN) feature on two switches. Traffic on FastEthernet0/1 of Switch 1 will be sent to Fa0/10 of Switch2 via VLAN 40.

+ Configure on both switches
Switch1,2(config)#vlan 40
Switch1,2(config-vlan)#remote-span
+ Configure on Switch1
Switch1(config)# monitor session 1 source interface FastEthernet 0/1
Switch1(config)# monitor session 1 destination remote vlan 40
+ Configure on Switch2
Switch2(config)#monitor session 5 source remote vlan 40
Switch2(config)# monitor session 5 destination interface FastEthernet 0/10

Troubleshooting Questions

January 28th, 2021 digitaltut 5 comments

Question 1

Explanation

If the DF bit is set, routers cannot fragment packets. From the output below, we learn that the maximum MTU of R2 is 1492 bytes while we sent pings with a size of 1500 bytes. Therefore these ICMP packets were dropped.

Note: The Record option displays the address(es) of the hops (up to nine) the packet goes through.

AAA Questions

January 28th, 2021 digitaltut 27 comments

Note: If you are not sure about AAA, please read our AAA TACACS+ and RADIUS Tutorial (on 9tut.com).

Question 1

Question 2

Explanation

The “aaa authentication login default local group tacacs+” command is broken down as follows:

+ The ‘aaa authentication’ part is simply saying we want to configure authentication settings.
+ The ‘login’ is stating that we want to prompt for a username/password when a connection is made to the device.
+ The ‘default’ means we want to apply it to all login connections (such as tty, vty, console and aux). If we use this keyword, we don't need to configure anything else under the tty, vty and aux lines. If we don't use this keyword then we have to specify which line(s) we want to apply the authentication feature to.
+ The ‘local group tacacs+’ means all users are authenticated using the router's local database (the first method). If the credentials are not found in the local database, then the TACACS+ server is used (the second method).

Question 3


Explanation

According to the requirements (first use TACACS+, then allow login with no authentication), we have to use “aaa authentication login … group tacacs+ none” for AAA command.

The next thing to check is whether “aaa authentication login default” or “aaa authentication login list-name” is used. The ‘default’ keyword means we want to apply it to all login connections (such as tty, vty, console and aux). If we use this keyword, we don't need to configure anything else under the tty, vty and aux lines. If we don't use this keyword then we have to specify which line(s) we want to apply the authentication feature to.

From the above information, we can find that answer C is correct, although the “password 7 0202039485748” line under “line vty 0 4” is not necessary.

If you want to learn more about AAA configuration, please read our AAA TACACS+ and RADIUS Tutorial – Part 2.

For your information, answer D would be correct if we add the following command under vty line (“line vty 0 4”): “login authentication telnet” (“telnet” is the name of the AAA list above)

Question 4

Question 5

Explanation

The “autocommand” causes the specified command to be issued automatically after the user logs in. When the command is complete, the session is terminated. Because the command can be any length and can contain embedded spaces, commands using the autocommand keyword must be the last option on the line. In this specific question, we have to enter this line “username CCNP autocommand show running-config”.
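For illustration, a minimal sketch (the privilege level and secret are assumptions added so that the user is allowed to run “show running-config”):

R1(config)# username CCNP privilege 15 secret MyS3cret
R1(config)# username CCNP autocommand show running-config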

Question 6

Explanation

In this question, there are two different passwords for user “tommy”:
+ In the TACACS+ server, the password is “Tommy”
+ In the local database of the router, the password is “Cisco”.

From the line “login authentication local” we know that the router uses the local database for authentication so the password should be “Cisco”.

Note: “… password 0 …” here means unencrypted password.

GRE Tunnel Questions

January 28th, 2021 digitaltut 8 comments

Note: If you are not sure about GRE, please read our GRE Tunnel Tutorial.

Question 1

Explanation

The IP protocol was designed for use on a wide variety of transmission links. Although the maximum length of an IP datagram is 65535, most transmission links enforce a smaller maximum packet length limit, called an MTU. The value of the MTU depends on the type of the transmission link. The design of IP accommodates MTU differences since it allows routers to fragment IP datagrams as necessary. The receiving station is responsible for the reassembly of the fragments back into the original full size IP datagram.

Fragmentation and Path Maximum Transmission Unit Discovery (PMTUD) is a standardized technique to determine the maximum transmission unit (MTU) size on the network path between two hosts, usually with the goal of avoiding IP fragmentation. PMTUD was originally intended for routers in IPv4. However, all modern operating systems use it on endpoints.

The TCP Maximum Segment Size (TCP MSS) defines the maximum amount of data that a host is willing to accept in a single TCP/IP datagram. This TCP/IP datagram might be fragmented at the IP layer. The MSS value is sent as a TCP header option only in TCP SYN segments. Each side of a TCP connection reports its MSS value to the other side. Contrary to popular belief, the MSS value is not negotiated between hosts. The sending host is required to limit the size of data in a single TCP segment to a value less than or equal to the MSS reported by the receiving host.

TCP MSS takes care of fragmentation at the two endpoints of a TCP connection, but it does not handle the case where there is a smaller MTU link in the middle between these two endpoints. PMTUD was developed in order to avoid fragmentation in the path between the endpoints. It is used to dynamically determine the lowest MTU along the path from a packet’s source to its destination.

Reference: http://www.cisco.com/c/en/us/support/docs/ip/generic-routing-encapsulation-gre/25885-pmtud-ipfrag.html (this link includes some examples of how TCP MSS avoids IP fragmentation; it is quite long, so please visit it if you want to read more)

Note: IP fragmentation involves breaking a datagram into a number of pieces that can be reassembled later.

If the DF bit is clear (not set), routers can fragment packets regardless of the original DF bit setting -> Answer D is not correct.
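As a side note, a common workaround for MTU issues across a GRE tunnel is to lower the tunnel IP MTU and clamp the TCP MSS on the tunnel interface; a minimal sketch (the values 1400 and 1360 are common examples, not values required by this question):

R1(config)# interface tunnel0
R1(config-if)# ip mtu 1400
R1(config-if)# ip tcp adjust-mss 1360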

Question 2

Explanation

The TCP Maximum Segment Size (TCP MSS) defines the maximum amount of data that a host is willing to accept in a single TCP/IP datagram. This TCP/IP datagram might be fragmented at the IP layer. The MSS value is sent as a TCP header option only in TCP SYN segments. Each side of a TCP connection reports its MSS value to the other side. Contrary to popular belief, the MSS value is not negotiated between hosts. The sending host is required to limit the size of data in a single TCP segment to a value less than or equal to the MSS reported by the receiving host.

TCP MSS takes care of fragmentation at the two endpoints of a TCP connection, but it does not handle the case where there is a smaller MTU link in the middle between these two endpoints. PMTUD was developed in order to avoid fragmentation in the path between the endpoints. It is used to dynamically determine the lowest MTU along the path from a packet’s source to its destination.

Reference: http://www.cisco.com/c/en/us/support/docs/ip/generic-routing-encapsulation-gre/25885-pmtud-ipfrag.html (this link includes some examples of how TCP MSS avoids IP fragmentation; it is quite long, so please visit it if you want to read more)

Note: IP fragmentation involves breaking a datagram into a number of pieces that can be reassembled later.

Question 3

Explanation

If the DF bit is set to clear (not set), routers can fragment packets regardless of the original DF bit setting.

Whenever we create tunnel interfaces, the GRE IP MTU is automatically configured 24 bytes less than the outbound physical interface MTU. Ethernet interfaces have an MTU of 1500 bytes, so tunnel interfaces by default have a 1476-byte MTU, which is 24 bytes less than the physical interface MTU. The process of sending a 1500-byte IPv4 packet (with the DF bit clear) is shown below:

1. The sender sends a 1500-byte packet (20 byte IPv4 header + 1480 bytes of TCP payload).
2. Since the MTU of the GRE tunnel is 1476, the 1500-byte packet is broken into two IPv4 fragments of 1476 and 44 bytes, each in anticipation of the additional 24 bytes of GRE header.
3. The 24 bytes of GRE header is added to each IPv4 fragment. Now the fragments are 1500 (1476 + 24) and 68 (44 + 24) bytes each.
4. The GRE + IPv4 packets that contain the two IPv4 fragments are forwarded to the GRE tunnel peer router.
5. The GRE tunnel peer router removes the GRE headers from the two packets.
6. This router forwards the two packets to the destination host.
7. The destination host reassembles the IPv4 fragments back into the original IPv4 datagram.

Reference: https://www.cisco.com/c/en/us/support/docs/ip/generic-routing-encapsulation-gre/25885-pmtud-ipfrag.html (Scenario 5)
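For completeness, IOS can also run PMTUD on the tunnel itself, so that the GRE packets are sent with the DF bit set and the tunnel IP MTU adjusts dynamically; a minimal sketch:

R1(config)# interface tunnel0
R1(config-if)# tunnel path-mtu-discovery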

Question 4

Question 5

Explanation

The %TUN-5-RECURDOWN: Tunnel0 temporarily disabled due to recursive routing error message means that the generic routing encapsulation (GRE) tunnel router has discovered a recursive routing problem. This condition is usually due to one of these causes:
+ A misconfiguration that causes the router to try to route to the tunnel destination address using the tunnel interface itself (recursive routing)
+ A temporary instability caused by route flapping elsewhere in the network

Reference: https://www.cisco.com/c/en/us/support/docs/ip/enhanced-interior-gateway-routing-protocol-eigrp/22327-gre-flap.html
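For illustration, a typical misconfiguration that triggers this message is a route to the tunnel destination that points back into the tunnel itself; a minimal sketch (interface names and addresses are only examples):

R1(config)# interface tunnel0
R1(config-if)# tunnel source gigabitethernet0/0
R1(config-if)# tunnel destination 203.0.113.2
R1(config-if)# exit
R1(config)# ip route 203.0.113.2 255.255.255.255 tunnel0 //the tunnel destination is now reached via the tunnel itself -> recursive routing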

Question 6

Explanation

From the “Tunnel protocol/transport GRE/IP” line, we can deduce this tunnel is using the default IPv4 Layer-3 tunnel mode. We can return to this default mode with the “tunnel mode gre ip” command.

Question 7

Explanation

In order to bring a point-to-point GRE tunnel interface to the up/up state, two requirements must be met:
+ A valid tunnel source (an interface that is in the up/up state and has an IP address configured on it) and a valid tunnel destination must be configured
+ A valid tunnel destination is one which is routable. However, it does not have to be reachable.

-> In this question we are missing an up/up tunnel source, so we can choose the Loopback 0 interface.

Question 8

Explanation

In the above output, the IP address 209.165.202.130 is the tunnel source IP address, while 10.111.1.1 is the IP address of the tunnel interface itself.

An example of configuring GRE tunnel is shown below:

R1 (GRE config only)
interface s0/0/0
ip address 63.1.27.2 255.255.255.0
interface tunnel0
ip address 10.0.0.1 255.255.255.0
tunnel mode gre ip //this command can be ignored as it is the default
tunnel source s0/0/0
tunnel destination 85.5.24.10
R2 (GRE config only)
interface s0/0/0
ip address 85.5.24.10 255.255.255.0
interface tunnel1
ip address 10.0.0.2 255.255.255.0
tunnel source 85.5.24.10
tunnel destination 63.1.27.2

Question 9

Question 10

Explanation

6to4 tunnel is a technique which relies on reserved address space 2002::/16 (you must remember this range). These tunnels determine the appropriate destination address by combining the IPv6 prefix with the globally unique destination 6to4 border router’s IPv4 address, beginning with the 2002::/16 prefix, in this format:

2002:border-router-IPv4-address::/48

For example, if the border-router-IPv4-address is 64.101.64.1, the tunnel interface will have an IPv6 prefix of 2002:4065:4001:1::/64, where 4065:4001 is the hexadecimal equivalent of 64.101.64.1. This technique allows IPv6 sites to communicate with each other over the IPv4 network without explicit tunnel setup but we have to implement it on all routers on the path.
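For illustration, a 6to4 border router configuration could be sketched as follows (interface names are assumptions; the physical interface is assumed to hold the IPv4 address 64.101.64.1 from the example above):

R1(config)# interface tunnel0
R1(config-if)# ipv6 address 2002:4065:4001:1::1/64
R1(config-if)# tunnel source gigabitethernet0/0
R1(config-if)# tunnel mode ipv6ip 6to4
R1(config-if)# exit
R1(config)# ipv6 route 2002::/16 tunnel0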

NAT Questions

January 28th, 2021 digitaltut 9 comments

Question 1

Explanation

The command “ip nat inside source list 1 interface gigabitethernet0/0 overload” translates all source addresses permitted by access list 1 (the 172.16.1.0/24 subnet) into the address assigned to the gigabitethernet0/0 interface. The overload keyword allows multiple IP addresses to be mapped to a single registered IP address (many-to-one) by using different ports, so this is called Port Address Translation (PAT). A minimal sketch of this configuration is shown below.
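A minimal PAT sketch built around this command (the access list and subnet come from the question; the interface roles and names are assumptions):

R1(config)# access-list 1 permit 172.16.1.0 0.0.0.255
R1(config)# ip nat inside source list 1 interface gigabitethernet0/0 overload
R1(config)# interface gigabitethernet0/1
R1(config-if)# ip nat inside
R1(config-if)# exit
R1(config)# interface gigabitethernet0/0
R1(config-if)# ip nat outside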

Question 2

Explanation

In this question, the inside local addresses of the 10.1.1.0/27 subnet are translated into the 209.165.201.0/27 subnet. This is one-to-one NAT translation because the “overload” keyword is missing, so answer B is in fact also correct.

Question 3

Explanation

The syntax of NAT command is ip nat inside source static local-ip global-ip so we can deduce the first IP address is the local IP address where we apply “ip nat inside” command and the second IP address is the global IP address where we apply “ip nat outside” command.
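For example, a static NAT entry following this syntax could look like the line below (the addresses are only examples):

R1(config)# ip nat inside source static 10.1.1.10 209.165.201.10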

Question 4

Explanation

The command “ip nat inside source list 10 interface FastEthernet0/1 overload” configures NAT to overload on the address that is assigned to the Fa0/1 interface.

Question 5

Explanation

The command “ip nat inside source list 10 interface G0/3 overload” configures NAT to overload (PAT) on the address that is assigned to the G0/3 interface.

STP Questions

January 28th, 2021 digitaltut 19 comments

Quick review about BPDUGuard & BPDUFilter:

The BPDU Guard feature allows STP to shut down an access port that receives a BPDU and put that port into the err-disabled state. BPDU Guard is configured under an interface via this command:

Switch(config-if)#spanning-tree bpduguard enable

Or configured globally via this command (BPDU Guard is enabled on all PortFast interfaces):

Switch(config)#spanning-tree portfast edge bpduguard default

BPDUFilter is designed to suppress the sending and receiving of BPDUs on an interface. There are two ways of configuring BPDUFilter: under global configuration mode or under interface mode, but they have a subtle difference.

If BPDUFilter is configured globally via this command:

Switch(config)#spanning-tree portfast bpdufilter default

BPDUFilter will be enabled on all PortFast-enabled interfaces and will suppress the interface from sending or receiving BPDUs. This is useful if the port is connected to a host, because we can enable PortFast on the port to save some start-up time while not sending BPDUs to that host. Hosts do not participate in STP and simply drop any received BPDUs. As a result, BPDU filtering prevents unnecessary BPDUs from being transmitted to host devices.

If BPDUFilter is configured under interface mode like this:

Switch(config-if)#spanning-tree bpdufilter enable

It will suppress the sending and receiving of BPDUs on that interface, which is the same as disabling spanning tree on the interface. This choice is risky and should only be used when you are sure that the port only connects to host devices.
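Putting the review together, a typical host-facing access port combines PortFast with BPDU Guard; a minimal sketch (the interface name is only an example):

SW1(config)# interface gigabitethernet1/0/10
SW1(config-if)# switchport mode access
SW1(config-if)# spanning-tree portfast
SW1(config-if)# spanning-tree bpduguard enable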

Question 1

Explanation

The purpose of PortFast is to minimize the time interfaces must wait for spanning tree to converge; it is effective only when used on interfaces connected to end stations.

Reference: https://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst3560/software/release/12-2_55_se/configuration/guide/3560_scg/swstpopt.html

Question 2

Question 3

Explanation

SW1 needs to block one of its ports to SW2 to avoid a bridging loop between the two switches. Unfortunately, it blocked the fiber port Link2. But how does SW2 select its blocked port? Well, the answer is based on the BPDUs it receives from SW1. A BPDU is superior to another if it has:
1. A lower Root Bridge ID
2. A lower path cost to the Root
3. A lower Sending Bridge ID
4. A lower Sending Port ID

These four parameters are examined in order. In this specific case, all the BPDUs sent by SW1 have the same Root Bridge ID, the same path cost to the Root and the same Sending Bridge ID. The only parameter left to select the best one is the Sending Port ID (Port ID = port priority + port index). And the port index of Gi0/0 is lower than the port index of Gi0/1 so Link 1 has been chosen as the primary link.

Therefore we must change the port priority to change the primary link. The lower the numerical value of the port priority, the higher the priority that port has. In other words, we must change the port priority on Gi0/1 of SW1 (not on Gi0/1 of SW2) to a lower value than that of Gi0/0, as in the sketch below.
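A minimal sketch of that change (the value 64 is only an example; any valid value lower than the default of 128 works):

SW1(config)# interface gigabitethernet0/1
SW1(config-if)# spanning-tree port-priority 64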

Question 4

Explanation

Where to Use MST
This diagram shows a common design that features access Switch A with 1000 VLANs redundantly connected to two distribution Switches, D1 and D2. In this setup, users connect to Switch A, and the network administrator typically seeks to achieve load balancing on the access switch Uplinks based on even or odd VLANs, or any other scheme deemed appropriate.

MST_usage.png

Reference: https://www.cisco.com/c/en/us/support/docs/lan-switching/spanning-tree-protocol/24248-147.html

Question 5

Explanation

From the second command output (show spanning-tree mst) we learn that MST1 includes VLANs 10 & 20. Therefore, if we want DSW1 to become the root bridge for these VLANs, we need to make DSW1 the root for MST instance 1 -> The command “spanning-tree mst 1 root primary” can do the trick. In fact, this command runs a macro and sets the priority lower than that of the current root.

We can also see that the current root bridge for these VLANs has a priority of 32769 (default value + sys-id), so we can set the priority of DSW1 to a specific lower value. But notice that the priority must be a multiple of 4096. Therefore D is a correct answer.
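Either option could be sketched as follows (24576 is only an example of a multiple of 4096 that is lower than 32769):

DSW1(config)# spanning-tree mst 1 root primary
DSW1(config)# spanning-tree mst 1 priority 24576 //alternative: set an explicit priority instead of using the macro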

Question 6

Explanation

In the topology above, we see that DSW2 has the lowest priority (24576), so it is the root bridge for VLAN 10 and all traffic for this VLAN must go through it. All of DSW2’s ports are in the forwarding state.

The next thing we have to figure out is which port of ALSW1 will be chosen as the root port, as traffic must go via this port. The root port is chosen via the following sequence of three conditions:

1. Lowest accumulated cost on interfaces towards Root Bridge
2. Lowest Sender Bridge ID
3. Lowest Sender Port ID (= Port Priority + Port Number)

Let’s start with the first condition: lowest accumulated cost on interfaces towards the Root Bridge. The question does not mention which path-cost method STP is using (short or long), so we assume the default short method is used. With this method, the STP cost of a 10Gbps link is 2 while the STP cost of a 1Gbps link is 4.

Therefore the path cost from DSW2 to ALSW1:
+ via DSW2 -> DSW1 -> ALSW1 is 2 + 2 = 4, and
+ via the direct link DSW2 -> ALSW1 it is 4, too

-> The first condition results in a tie, so we have to use the second one: lowest Sender Bridge ID.

In this condition, the direct path DSW2 -> ALSW1 wins because the sender Bridge ID of DSW2 is lower.

Therefore ALSW1 will choose Gi0/2 as its root port, and the link between ALSW1 and DSW1 is blocked by STP to prevent a loop.

Therefore PC1 must go via this path: PC1 -> ALSW1 -> DSW2 -> DSW1.

Question 7

Explanation

Root guard does not allow the port to become an STP root port, so the port is always STP-designated. If a superior BPDU arrives on this port, root guard does not take the BPDU into account and does not elect a new STP root. Instead, root guard puts the port into the root-inconsistent STP state, which is equivalent to a listening state. No traffic is forwarded across this port.

Below is an example of where to configure Root Guard on the ports. Notice that Root Guard is always configured on designated ports.

Root_Guard_Location.jpg

To configure Root Guard use this command:

Switch(config-if)# spanning-tree guard root

Reference: http://www.cisco.com/c/en/us/support/docs/lan-switching/spanning-tree-protocol/10588-74.html

Question 8

DNA Center Questions

January 27th, 2021 digitaltut 17 comments

DNA Center Quick Summary

Software-Defined Access (SDA) uses the software-defined architectural model, with a controller and various APIs. At the center sits the Digital Network Architecture (DNA) Center controller.

DNA_Center.jpg

Note:

 + REST API (Representational State Transfer API) is a web service architecture that provides a standard way for clients to communicate with servers over the web. It is based on the HTTP protocol and uses the concepts of resources (represented by URIs) and HTTP methods (GET, POST, PUT, DELETE) to interact with them.

 + RESTCONF, on the other hand, is a protocol that provides a programmatic interface to network devices using RESTful architecture. It is built on top of RESTful principles and is designed specifically for network management. RESTCONF provides a standardized way to access and manipulate network configuration data, state data, and operational data using HTTP methods.

DNA Center is the controller for SDA networks. The southbound side of the controller contains the fabric, underlay, and overlay:

Overlay: The mechanisms to create VXLAN tunnels between SDA switches, which are then used to transport traffic from one fabric endpoint to another over the fabric.

Overlay.jpg

Underlay: The network of devices and connections (cables and wireless) to provide IP connectivity to all nodes in the fabric, with a goal to support the dynamic discovery of all SDA devices and endpoints as a part of the process to create overlay VXLAN tunnels.

Underlay.jpg

The relationship between Overlay and Underlay is shown below:

VXLAN_VTEP.jpg

Fabric: The combination of overlay and underlay, which together provide all features to deliver data across the network with the desired features and attributes

Cisco DNA Center is a software solution that resides on the Cisco DNA Center appliance. The solution receives data in the form of streaming telemetry from every device (switch, router, access point, and wireless access controller) on the network. This data provides Cisco DNA Center with the real-time information it needs for the many functions it performs.

Cisco DNA Center collects data from several different sources and protocols on the local network, including the following: traceroute; syslog; NetFlow; Authentication, Authorization, and Accounting (AAA); routers; Dynamic Host Configuration Protocol (DHCP); Telnet; wireless devices; Command-Line Interface (CLI); Object IDs (OIDs); IP SLA; DNS; ping; Simple Network Management Protocol (SNMP); IP Address Management (IPAM); MIB; Cisco Connected Mobile Experiences (CMX); and AppDynamics.

Cisco DNA Center offers 360-degree extensibility through four distinct types of platform capabilities:

+ Intent-based APIs leverage the controller and enable business and IT applications to deliver intent to the network and to reap network analytics and insights for IT and business innovation. The Intent API provides policy-based abstraction of business intent, allowing focus on an outcome rather than struggling with individual mechanism steps.

For example, the administrator configures the intent, or desired outcome, of a set of security policies. DNA Center then communicates with the devices to determine exactly the configuration required to achieve that intent. The complete configuration is then sent down to the devices.
+ Process adapters, built on integration APIs, allow integration with other IT and network systems to streamline IT operations and processes.
+ Domain adapters, built on integration APIs, allow integration with other infrastructure domains such as data center, WAN, and security to deliver a consistent intent-based infrastructure across the entire IT environment.
+ SDKs allow management to be extended to third-party vendor’s network devices to offer support for diverse environments.

DNA Center APIs

Cisco DNA Center APIs are grouped into four categories: northbound, southbound, eastbound and westbound:
+ Northbound (Intent) APIs: enable developers to access Cisco DNA Center Automation and Assurance workflows. For example: provision SSIDs, QoS policies, update software images running on the network devices, and application health.
+ Southbound (Multivendor Support) APIs: allow partners to add support for managing non-Cisco devices directly from Cisco DNA Center
+ Eastbound (Events and Notifications) APIs: publish event notifications that enable third party applications to act on system level, operational or Cisco DNA Assurance notifications. For example, when some of the devices in the network are out of compliance, an eastbound API can enable an application to execute a software upgrade when it receives a notification.
+ Westbound (Integration) APIs: provide the capability to publish the network data, events and notifications to the external systems and consume information in Cisco DNA Center from the connected systems. Through integration APIs, Cisco DNA Center platform can power end-to-end IT processes across the value chain by integrating various domains such as IT Service Management (ITSM), IP address management (IPAM), and reporting. By leveraging the REST-based Integration Adapter APIs, bi-directional interfaces can be built to allow the exchange of contextual information between Cisco DNA Center and the external, third-party IT systems.

Question 1

Question 2

Explanation

A complete Cisco DNA Center upgrade includes a “System Update” and “Application Updates”.

DNA_Complete_Upgrade.jpg

Question 3

Explanation

Cisco DNA Center provides an interactive editor called Template Editor to author CLI templates. Template Editor is a centralized CLI management tool to help design a set of device configurations that you need to build devices in a branch. When you have a site, office, or branch that uses a similar set of devices and configurations, you can use Template Editor to build generic configurations and apply the configurations to one or more devices in the branch.

Reference: https://www.cisco.com/c/en/us/td/docs/cloud-systems-management/network-automation-and-management/dna-center/1-3/user_guide/b_cisco_dna_center_ug_1_3/b_cisco_dna_center_ug_1_3_chapter_0111.html

Question 4

Question 5

Explanation

The Cisco DNA Center open platform for intent-based networking provides 360-degree extensibility across multiple components, including:
+ Intent-based APIs leverage the controller to enable business and IT applications to deliver intent to the network and to reap network analytics and insights for IT and business innovation. These include APIs that allow Cisco DNA Center to receive input from a variety of sources, both internal to IT and from line-of-business applications, related to application policy, provisioning, software image management, and assurance.

Reference: https://www.cisco.com/c/en/us/products/collateral/cloud-systems-management/dna-center/nb-06-dna-cent-plat-sol-over-cte-en.html

Question 6

Explanation

You can create a network hierarchy that represents your network’s geographical locations. Your network hierarchy can contain sites, which in turn contain buildings and areas. You can create site and building IDs to easily identify where to apply design settings or configurations later.

Reference: https://www.cisco.com/c/en/us/td/docs/cloud-systems-management/network-automation-and-management/dna-center/1-2-5/user_guide/b_dnac_ug_1_2_5/b_dnac_ug_1_2_4_chapter_0110.html

DNA_Center_Design_Network_Hierarchy.jpg

Question 7

Question 8

Explanation

The Intent API is a Northbound REST API that exposes specific capabilities of the Cisco DNA Center platform.
The Intent API provides policy-based abstraction of business intent, allowing focus on an outcome rather than struggling with individual mechanism steps.

Reference: https://developer.cisco.com/docs/dna-center/#!cisco-dna-center-platform-overview/intent-api-northbound

Question 9

Explanation

Cisco DNA Center allows customers to manage their non-Cisco devices through the use of a Software Development Kit (SDK) that can be used to create Device Packages for third-party devices.

Reference: https://developer.cisco.com/docs/dna-center/#!cisco-dna-center-platform-overview/multivendor-support-southbound

Question 10

Explanation

If your network uses Cisco Identity Services Engine for user authentication, you can configure Assurance for Cisco ISE integration. This enables you to see more information about wired clients, such as the username and operating system, in Assurance.

Reference: https://www.cisco.com/c/en/us/td/docs/cloud-systems-management/network-automation-and-management/dna-center-assurance/2-1-2/b_cisco_dna_assurance_2_1_2_ug/b_cisco_dna_assurance_2_1_1_ug_chapter_011.html

Security Questions

January 27th, 2021 digitaltut 14 comments

Question 1

Explanation

Lines (CON, AUX, VTY) default to level 1 privileges.

Question 2

Explanation

MACsec, defined in 802.1AE, provides MAC-layer encryption over wired networks by using out-of-band methods for encryption keying. The MACsec Key Agreement (MKA) Protocol provides the required session keys and manages the required encryption keys. MKA and MACsec are implemented after successful authentication using the 802.1x Extensible Authentication Protocol (EAP-TLS) or Pre Shared Key (PSK) framework.

A switch using MACsec accepts either MACsec or non-MACsec frames, depending on the policy associated with the MKA peer. MACsec frames are encrypted and protected with an integrity check value (ICV). When the switch receives frames from the MKA peer, it decrypts them and calculates the correct ICV by using session keys provided by MKA. The switch compares that ICV to the ICV within the frame. If they are not identical, the frame is dropped. The switch also encrypts and adds an ICV to any frames sent over the secured port (the access point used to provide the secure MAC service to a MKA peer) using the current session key.

Reference: https://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst9300/software/release/16-9/configuration_guide/sec/b_169_sec_9300_cg/macsec_encryption.html

Note: Cisco Trustsec is the solution which includes MACsec.
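For context, a minimal MKA pre-shared-key MACsec sketch on a switch port (the key chain name, key values and interface are only examples):

key chain MKA-KEYS macsec
 key 01
  cryptographic-algorithm aes-128-cmac
  key-string 12345678901234567890123456789012
interface gigabitethernet1/0/1
 macsec network-link
 mka pre-shared-key key-chain MKA-KEYS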

Question 3

Explanation

The ultimate goal of Cisco TrustSec technology is to assign a tag (known as a Security Group Tag, or SGT) to the user’s or device’s traffic at ingress (inbound into the network), and then enforce the access policy based on the tag elsewhere in the infrastructure (in the data center, for example). This SGT is used by switches, routers, and firewalls to make forwarding decisions. For instance, an SGT may be assigned to a Guest user, so that Guest traffic may be isolated from non-Guest traffic throughout the infrastructure.

Reference: https://www.cisco.com/c/dam/en/us/solutions/collateral/borderless-networks/trustsec/C07-730151-00_overview_of_trustSec_og.pdf

Question 4

Explanation

The Cisco TrustSec solution simplifies the provisioning and management of network access control through the use of software-defined segmentation to classify network traffic and enforce policies for more flexible access controls. Traffic classification is based on endpoint identity, not IP address, enabling policy changes without network redesign.

Reference: https://www.cisco.com/c/dam/en/us/td/docs/solutions/CVD/Apr2016/User-to-DC_Access_Control_Using_TrustSec_Deployment_April2016.pdf

Question 5

Explanation

The “enable secret” password is always encrypted (independent of the “service password-encryption” command) using the MD5 hash algorithm. The “enable password” does not encrypt the password, and it can be viewed in clear text in the running-config. In order to encrypt the “enable password”, use the “service password-encryption” command. This command encrypts the passwords by using the Vigenere encryption algorithm, which is cryptographically weak and trivial to reverse.

The MD5 hash is a stronger algorithm than Vigenere, so answer D is correct.
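A minimal sketch of the two approaches (the password values are only examples):

R1(config)# enable secret Str0ngSecret //stored as an MD5 hash
R1(config)# enable password WeakPassword //clear text until the next command is added
R1(config)# service password-encryption //weak, reversible Vigenere (type 7) encryption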

Question 6

Explanation

Firepower Threat Defense (FTD) provides six interface modes which are: Routed, Switched, Inline Pair, Inline Pair with Tap, Passive, Passive (ERSPAN).

+ When Inline Pair Mode is in use, packets can be blocked since they are processed inline.
+ When you use Inline Pair mode, the packet goes mainly through the FTD Snort engine.
+ When Tap Mode is enabled, a copy of the packet is inspected and dropped internally while the actual traffic goes through FTD unmodified.

Reference: https://www.cisco.com/c/en/us/support/docs/security/firepower-ngfw/200924-configuring-firepower-threat-defense-int.html

Question 7

Question 8

Explanation

Ransomware is malicious software that locks up critical resources of the users. Ransomware uses well-established public/private key cryptography, which leaves the only ways of recovering the files as paying the ransom or restoring the files from backups.

Cisco Advanced Malware Protection (AMP) for Endpoints Malicious Activity Protection (MAP) engine defends your endpoints by monitoring the system and identifying processes that exhibit malicious activities when they execute and stops them from running. Because the MAP engine detects threats by observing the behavior of the process at run time, it can generically determine if a system is under attack by a new variant of ransomware or malware that may have eluded other security products and detection technology, such as legacy signature-based malware detection. The first release of the MAP engine targets identification, blocking, and quarantine of ransomware attacks on the endpoint.

Reference: https://www.cisco.com/c/dam/en/us/products/collateral/security/amp-for-endpoints/white-paper-c11-740980.pdf

Question 9

Explanation

Clustering lets you group multiple Firepower Threat Defense (FTD) units together as a single logical device. Clustering is only supported for the FTD device on the Firepower 9300 and the Firepower 4100 series. A cluster provides all the convenience of a single device (management, integration into a network) while achieving the increased throughput and redundancy of multiple devices.

Question 10

Explanation

The “exec-timeout” command is used to configure the inactive session timeout on the console port or the virtual terminal. The syntax of this command is:

exec-timeout minutes [seconds]

Therefore we need to use the “exec-timeout 10 0” command to set the user inactivity timer to 600 seconds (10 minutes).
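A minimal sketch applying it to the vty lines:

R1(config)# line vty 0 4
R1(config-line)# exec-timeout 10 0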

Question 11

Explanation

If you’re a website owner and your website displays this error message, then there could be two reasons why the browser says the cert authority is invalid:
+ You’re using a self-signed SSL certificate, OR
+ The certificate authority (CA) that issued your SSL certificate isn’t trusted by your web browser.

Access-list Questions

January 27th, 2021 digitaltut 38 comments

If you are not sure about Access-list, please read our Access-list tutorial.

Question 1

Explanation

Remember, for the wildcard mask, 1s are I DON’T CARE, and 0s are I CARE. So now let’s analyze a simple ACL:

access-list 1 permit 172.23.16.0 0.0.15.255

The first two octets of the wildcard mask are all 0s, meaning that we care about the network 172.23.x.x. The third octet of the wildcard mask, 15 (0000 1111 in binary), means that we care about the first 4 bits but don’t care about the last 4 bits, so we allow the third octet in the form of 0001xxxx (minimum: 00010000 = 16; maximum: 00011111 = 31).

wildcard_mask.jpg

The fourth octet is 255 (all 1 bits) that means I don’t care.

Therefore network 172.23.16.0 0.0.15.255 ranges from 172.23.16.0 to 172.23.31.255.

Now let’s consider the wildcard mask of 0.0.0.254 (fourth octet: 254 = 1111 1110), which means we only care about the last bit. Therefore if the last bit of the IP address is a “1” (0000 0001), then only odd numbers are allowed. If the last bit of the IP address is a “0” (0000 0000), then only even numbers are allowed.

Note: In binary, odd numbers always end with a “1” while even numbers always end with a “0”.

Therefore in this question, only the statement “permit 10.0.0.1 0.0.0.254” will allow all odd-numbered hosts in the 10.0.0.0/24 subnet.
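A minimal sketch of such a statement in a standard ACL (the ACL number is only an example):

R1(config)# access-list 10 permit 10.0.0.1 0.0.0.254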

Question 2

Question 3

Explanation

The syntax of an extended ACL is shown below:

access-list access-list-number {permit | deny} protocol source {source-mask} [eq source-port] destination {destination-mask} [eq destination-port]

Access_list_Inbound.jpg

According to the requirements in this question, we must apply the ACL on the port connected to the Web Server in the inbound direction, so it can only filter traffic sent from the Web Server to the Client. Please notice that the Client communicates with the Web Server using a destination port of 80 but a random source port. So the Web Server must answer the Client using this random port as the destination port -> Therefore the destination port in the required ACL must be ignored. Also, the Web Server must use port 80 as its source port.

So the structure of the ACL should be: permit tcp host <IP-address-of-Web-Server> eq 80 host <IP-address-of-Client>

-> Answer C is correct.
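For illustration, with assumed addresses (192.168.1.100 for the Web Server and 10.1.1.10 for the Client) and an assumed interface name, the ACL could be sketched as:

R1(config)# access-list 110 permit tcp host 192.168.1.100 eq 80 host 10.1.1.10
R1(config)# interface gigabitethernet0/1
R1(config-if)# ip access-group 110 in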

Question 4

Explanation

Although the statement “permit tcp any any gt … lt …” seems to be correct, in fact it is not. Each ACL statement only supports either “gt” or “lt”, but not both:

Access-list_gt_lt.jpg

-> Answer A is not correct.

Answer C would only be correct if its statements were listed in the reverse order.

Question 5

Explanation

We can insert a line (statement) between existing entries of an ACL by giving it a sequence number that falls between them.

access_list_add_one_statement.jpg

So what will happen if we just enter a statement without a sequence number? That statement would be added at the bottom of the ACL. But in this case we already have an explicit “deny ip any any” statement, so we cannot put another line under it.
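For illustration, sequence numbers let us insert an entry between two existing ones in a named extended ACL; a minimal sketch (the ACL name, addresses and sequence number are only examples):

R1(config)# ip access-list extended WEB-FILTER
R1(config-ext-nacl)# 15 permit tcp host 10.1.1.10 any eq 443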

Question 6

Explanation

We cannot filter traffic that originates from the local router (R3 in this case), so we can only configure the ACL on R1 or R2. “Weekend hours” means from Saturday morning through Sunday night, so we have to configure: “periodic weekend 00:00 to 23:59”.

Note: The time is specified in 24-hour time (hh:mm), where the hours range from 0 to 23 and the minutes range from 0 to 59.
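A minimal sketch of a time-based ACL entry built on this time range (the time-range name and the ACL itself are only examples):

R1(config)# time-range WEEKEND
R1(config-time-range)# periodic weekend 00:00 to 23:59
R1(config-time-range)# exit
R1(config)# access-list 101 permit tcp any any eq 80 time-range WEEKEND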

Question 7

Explanation

The established keyword is only applicable to TCP access list entries to match TCP segments that have the ACK and/or RST control bit set (regardless of the source and destination ports), which assumes that a TCP connection has already been established in one direction only. Let’s see an example below:

access-list_established.jpg

If you only want to allow the hosts inside your company to telnet to an outside server but not vice versa, you can simply use an “established” access-list like this:

access-list 100 permit tcp any any established
access-list 101 permit tcp any any eq telnet
!
interface S0/0
ip access-group 100 in
ip access-group 101 out

Note:

Suppose host A wants to start communicating with host B using TCP. Before they can send real data, a three-way handshake must be established first. Let’s see how this process takes place:

TCP_Three_way_handshake.jpg

1. First, host A sends a SYN message (a TCP segment with the SYN flag set to 1; SYN is short for SYNchronize) to indicate it wants to set up a connection with host B. This message includes a sequence (SEQ) number for tracking purposes. This sequence number can be any 32-bit number (range from 0 to 2^32 - 1), so we use “x” to represent it.

2. After receiving SYN message from host A, host B replies with SYN-ACK message (some books may call it “SYN/ACK” or “SYN, ACK” message. ACK is short for ACKnowledge). This message includes a SYN sequence number and an ACK number:
+ The SYN sequence number (let’s call it “y”) is a random number and does not have any relationship with host A’s SYN SEQ number.
+ ACK number is the next number of Host A’s SYN sequence number it received, so we represent it with “x+1”. It means “I received your part. Now send me the next part (x + 1)”.

The SYN-ACK message indicates that host B accepts talking to host A (via the ACK part) and asks whether host A still wants to talk to it as well (via the SYN part).

3. After host A receives the SYN-ACK message from host B, it sends an ACK message with ACK number “y+1” to host B. This confirms that host A still wants to talk to host B.

Question 8

Explanation

The inbound direction of G0/0 of SW2 only filters traffic from the Web Server to PC-1, so the source IP address and port are those of the Web Server.

Question 9

Explanation

Note: We cannot assign a number smaller than 100 to an extended access-list so the command “ip access-list extended 10” is not correct.

extended_acl_number.jpg

Question 10

Explanation

We see in the traceroute result the packet could reach 10.99.69.5 (on R2) but it could not go any further so we can deduce an ACL on R3 was blocking it.

Note: Record option displays the address(es) of the hops (up to nine) the packet goes through.

Access-list Questions 2

January 27th, 2021 digitaltut 8 comments

Question 1

Explanation

The established keyword is only applicable to TCP access list entries to match TCP segments that have the ACK and/or RST control bit set (regardless of the source and destination ports), which assumes that a TCP connection has already been established in one direction only. Let’s see an example below:

access-list_established.jpg

If you only want to allow the hosts inside your company to telnet to an outside server but not vice versa, you can simply use an “established” access-list like this:

access-list 100 permit tcp any any established
access-list 101 permit tcp any any eq telnet
!
interface S0/0
ip access-group 100 in
ip access-group 101 out

Note:

Suppose host A wants to start communicating with host B using TCP. Before they can send real data, a three-way handshake must be established first. Let’s see how this process takes place:

TCP_Three_way_handshake.jpg

1. First, host A sends a SYN message (a TCP segment with the SYN flag set to 1; SYN is short for SYNchronize) to indicate it wants to set up a connection with host B. This message includes a sequence (SEQ) number for tracking purposes. This sequence number can be any 32-bit number (range from 0 to 2^32 - 1), so we use “x” to represent it.

2. After receiving SYN message from host A, host B replies with SYN-ACK message (some books may call it “SYN/ACK” or “SYN, ACK” message. ACK is short for ACKnowledge). This message includes a SYN sequence number and an ACK number:
+ The SYN sequence number (let’s call it “y”) is a random number and does not have any relationship with host A’s SYN SEQ number.
+ ACK number is the next number of Host A’s SYN sequence number it received, so we represent it with “x+1”. It means “I received your part. Now send me the next part (x + 1)”.

The SYN-ACK message indicates that host B accepts talking to host A (via the ACK part) and asks whether host A still wants to talk to it as well (via the SYN part).

3. After host A receives the SYN-ACK message from host B, it sends an ACK message with ACK number “y+1” to host B. This confirms that host A still wants to talk to host B.

Question 2

Multicast Questions

January 27th, 2021 digitaltut 11 comments

Question 1

Explanation

A rendezvous point (RP) is required only in networks running Protocol Independent Multicast sparse mode (PIM-SM).

By default, the RP is needed only to start new sessions with sources and receivers.

Reference: https://www.cisco.com/c/en/us/td/docs/ios/solutions_docs/ip_multicast/White_papers/rps.html

For your information, in PIM-SM, only network segments with active receivers that have explicitly requested multicast data will be forwarded the traffic. This method of delivering multicast data is in contrast to the PIM dense mode (PIM-DM) model. In PIM-DM, multicast traffic is initially flooded to all segments of the network. Routers that have no downstream neighbors or directly connected receivers prune back the unwanted traffic.
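For context, a minimal PIM-SM sketch with a statically configured RP (the RP address and interface are only examples):

R1(config)# ip multicast-routing
R1(config)# ip pim rp-address 10.0.0.1
R1(config)# interface gigabitethernet0/0
R1(config-if)# ip pim sparse-mode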

Question 2

Explanation

PIM dense mode (PIM-DM) uses a push model to flood multicast traffic to every corner of the network. This push model is a brute-force method of delivering data to the receivers. This method would be efficient in certain deployments in which there are active receivers on every subnet in the network. PIM-DM initially floods multicast traffic throughout the network. Routers that have no downstream neighbors prune the unwanted traffic. This process repeats every 3 minutes.

PIM Sparse Mode (PIM-SM) uses a pull model to deliver multicast traffic. Only network segments with active receivers that have explicitly requested the data receive the traffic. PIM-SM distributes information about active sources by forwarding data packets on the shared tree. Because PIM-SM uses shared trees (at least initially), it requires the use of an RP. The RP must be administratively configured in the network.

Answer C seems to be correct but it is not: PIM sparse mode uses sources (not receivers) to register with the RP. Sources register with the RP, and then data is forwarded down the shared tree to the receivers.

Reference: Selecting MPLS VPN Services Book, page 193

Question 3

Explanation

The concept of joining the rendezvous point (RP) is called the RPT (RP tree) or shared distribution tree. The RP is the root of this tree and decides where to forward multicast traffic. Each multicast group might have different sources and receivers, so we might have different RPTs in our network.

Question 4

Explanation

In the figure below, we can see RP sent “join 234.1.1.1” message toward Source.

RP_join_source.jpg

Reference: https://www.ciscolive.com/c/dam/r/ciscolive/apjc/docs/2018/pdf/BRKIPM-1261.pdf

Question 5

Explanation

Query messages are used to elect the IGMP querier as follows:
1. When IGMPv2 devices start, they each multicast a general query message to the all-systems group address of 224.0.0.1 with their interface address in the source IP address field of the message.
2. When an IGMPv2 device receives a general query message, the device compares the source IP address in the message with its own interface address. The device with the lowest IP address on the subnet is elected the IGMP querier.
3. All devices (excluding the querier) start the query timer, which is reset whenever a general query message is received from the IGMP querier. If the query timer expires, it is assumed that the IGMP querier has gone down, and the election process is performed again to elect a new IGMP querier.

Reference: https://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst3750x_3560x/software/release/15-2_2_e/multicast/configuration_guide/b_mc_1522e_3750x_3560x_cg/b_ipmc_3750x_3560x_chapter_01000.html

NTP Questions

January 27th, 2021 digitaltut 6 comments

Quick review of NTP

NTP uses the concept of a stratum to describe how many NTP hops away a machine is from an authoritative time source, usually a reference clock. A reference clock is a stratum 0 device that is assumed to be accurate and has little or no delay associated with it. Stratum 0 servers cannot be used on the network but they are directly connected to computers which then operate as stratum-1 servers. A stratum 1 time server acts as a primary network time standard.

ntp-stratum.jpg

A stratum 2 server is connected to the stratum 1 server; then a stratum 3 server is connected to the stratum 2 server and so on. A stratum 2 server gets its time via NTP packet requests from a stratum 1 server. A stratum 3 server gets its time via NTP packet requests from a stratum-2 server… A stratum server may also peer with other stratum servers at the same level to provide more stable and robust time for all devices in the peer group (for example a stratum 2 server can peer with other stratum 2 servers).

– NTP is designed to synchronize the time on a network. NTP runs over the User Datagram Protocol (UDP), using port 123 as both the source and destination.
– An Authoritative NTP Server can distribute time even when it is not synchronized to an existing time server. To configure a Cisco device as an Authoritative NTP Server, use the ntp master [stratum] command.
– To configure the local device to use a remote NTP clock source, use the command ntp server <IP address>. For example: Router(config)#ntp server 192.168.1.1
– The ntp authenticate command is used to enable the NTP authentication feature (NTP authentication is disabled by default).
– The ntp trusted-key command specifies one or more keys that a time source must provide in its NTP packets in order for the device to synchronize to it. This command provides protection against accidentally synchronizing the device to a time source that is not trusted.
– The ntp authentication-key defines the authentication keys. The device does not synchronize to a time source unless the source has one of these authentication keys and the key number is specified by the ntp trusted-key number command.
– Two most popular commands to display time sources statistics: show ntp status and show ntp associations
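Putting the commands above together, a minimal authenticated NTP client sketch (the key number, key string and server address are only examples):

R1(config)# ntp authentication-key 1 md5 MyNtpKey
R1(config)# ntp authenticate
R1(config)# ntp trusted-key 1
R1(config)# ntp server 192.168.1.1 key 1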

Question 1

Explanation

The stratum levels define the distance from the reference clock. A reference clock is a stratum 0 device that is assumed to be accurate and has little or no delay associated with it. Stratum 0 servers cannot be used on the network but they are directly connected to computers which then operate as stratum-1 servers. A stratum 1 time server acts as a primary network time standard.

ntp-stratum.jpg

A stratum 2 server is connected to the stratum 1 server; then a stratum 3 server is connected to the stratum 2 server and so on. A stratum 2 server gets its time via NTP packet requests from a stratum 1 server. A stratum 3 server gets its time via NTP packet requests from a stratum-2 server… A stratum server may also peer with other stratum servers at the same level to provide more stable and robust time for all devices in the peer group (for example a stratum 2 server can peer with other stratum 2 servers).

NTP uses the concept of a stratum to describe how many NTP hops away a machine is from an authoritative time source. A stratum 1 time server typically has an authoritative time source (such as a radio or atomic clock, or a Global Positioning System (GPS) time source) directly attached, a stratum 2 time server receives its time via NTP from a stratum 1 time server, and so on.

Reference: https://www.cisco.com/c/en/us/td/docs/routers/asr920/configuration/guide/bsm/16-6-1/b-bsm-xe-16-6-1-asr920/bsm-time-calendar-set.html

Question 2

Explanation

The time kept on a machine is a critical resource, and it is strongly recommended that you use the security features of NTP to avoid the accidental or malicious setting of incorrect time. The two security features available are an access-list-based restriction scheme and an encrypted authentication mechanism.

Reference: https://www.cisco.com/c/en/us/support/docs/availability/high-availability/19643-ntpm.html

Question 3

Explanation

The time kept on a machine is a critical resource, and it is strongly recommended that you use the security features of NTP to avoid the accidental or malicious setting of incorrect time. The two security features available are an access-list-based restriction scheme and an encrypted authentication mechanism.

Reference: https://www.cisco.com/c/en/us/support/docs/availability/high-availability/19643-ntpm.html

Question 4

Explanation

If the system clock has not been set, the date and time are preceded by an asterisk (*) to indicate that the date and time are probably not correct.

Reference: https://www.cisco.com/E-Learning/bulk/public/tac/cim/cib/using_cisco_ios_software/cmdrefs/service_timestamps.htm

Moreover, when we use “show clock” on a brand-new router (on which nothing has been configured), we see the clock is set to the default value with an asterisk mark.

asterisk_mask_clock.jpg

Therefore we can deduce that this device does not have NTP configured.

CoPP Questions

January 27th, 2021 digitaltut No comments

If you are not sure about CoPP, please read our Control Plane Policing (CoPP) Tutorial (on networktut.com)

Question 1

Explanation

CoPP protects the route processor on network devices by treating route processor resources as a separate entity with its own ingress interface (and in some implementations, egress also). CoPP is used to police traffic that is destined to the route processor of the router such as:
+ Routing protocols like OSPF, EIGRP, or BGP.
+ Gateway redundancy protocols like HSRP, VRRP, or GLBP.
+ Network management protocols like telnet, SSH, SNMP, or RADIUS.

CoPP_SSH.jpg

Therefore we must use CoPP to deal with SSH because it belongs to the management plane. The CoPP policy must be applied under the “control-plane” command, and we cannot give the control plane a name (such as “transit”).
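For illustration, a minimal CoPP sketch that polices SSH traffic destined to the route processor (the class-map/policy-map names, ACL number and police rate are only examples):

R1(config)# access-list 120 permit tcp any any eq 22
R1(config)# class-map match-all COPP-SSH
R1(config-cmap)# match access-group 120
R1(config-cmap)# exit
R1(config)# policy-map COPP-POLICY
R1(config-pmap)# class COPP-SSH
R1(config-pmap-c)# police 64000 conform-action transmit exceed-action drop
R1(config-pmap-c)# exit
R1(config)# control-plane
R1(config-cp)# service-policy input COPP-POLICY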

Question 2

Automation Questions

January 25th, 2021 digitaltut 29 comments

Note: If you are not sure about Automation, please read our JSON Tutorial, JSON Web Token (JWT) Tutorial, Ansible Tutorial, Chef Tutorial, Puppet Tutorial.

Quick summary about Ansible, Puppet, Chef

Ansible_Puppet_Chef_compare.jpg

JSON Quick summary

JavaScript Object Notation (JSON) is a human readable and very popular format used by web services, programming languages (including Python) and APIs to read/write data.

JSON syntax structure:
+ uses curly braces {} to hold objects and square brackets [] to hold arrays
+ JSON data is written as key/value pairs
+ A key/value pair consists of a key (must be a string in double quotation marks ""), followed by a colon :, followed by a value. For example: “name”:”John”
+ Each key must be unique
+ Values must be of type string, number, object, array, boolean or null
+ Multiple key/value within an object are separated by commas ,

JSON can use arrays. Arrays are used to store multiple values in a single variable. For example:

{
"name":"John",
"age":30,
"cars":[ "Ford", "BMW", "Fiat" ]
}

In the above example, “cars” is an array which contains three values “Ford”, “BMW” and “Fiat”.

If we have a JSON string, we can convert (parse) it into Python by using the json.loads() method, which returns a Python dictionary:

import json
myvar = '{"name":"John", "age":30, "cars":["Ford", "BMW", "Fiat"]}'
parse_myvar = json.loads(myvar)
print(parse_myvar["cars"][0])

The result:

Ford

Note:
+ The json.dumps() function converts a Python object into a JSON string. For example, we can convert a Dictionary into a JSON string: json.dumps({'name': 'John', 'age': '20'})
+ The json.loads() method parses a valid JSON string and converts it into a Python Dictionary.

NETCONF Quick summary

NETCONF provides mechanisms to retrieve and manipulate the configuration of network devices. NETCONF is very similar to the Command Line Interface (CLI) we all know, but the main difference is that the CLI is designed for humans while NETCONF is aimed at automation applications.

NETCONF protocol is based on XML messages exchanged via SSH protocol using TCP port 830 (default). Network devices running a NETCONF agent can be managed through five main operations:
get: This operation retrieves the running configuration and device state information.
get-config: This operation retrieves all or part of a specified configuration.
edit-config: This operation loads all or part of a specified configuration to the specified device.
copy-config: This operation creates or replaces an entire configuration with specified contents.
delete-config: This operation deletes a configuration. The running configuration cannot be deleted.

The NETCONF protocol requires messages to always be encoded with XML.

RESTCONF Quick summary

RESTCONF allows NETCONF-style management to run over the most popular protocol on the Internet: HTTP/HTTPS. Network devices running a RESTCONF agent can be managed through these HTTP operations:
OPTIONS: Discover which operations are supported by a data resource
HEAD: The same as GET, but only the response headers are returned (no response body)
GET: This method retrieves data and metadata for a resource. It is supported for all resource types, except operation resources.
PATCH: This method partially modifies a resource (the equivalent of the NETCONF merge operation).
PUT: This method creates or replaces the target resource.
POST: This method creates a data resource or invokes an operations resource.
DELETE: This method deletes the target resource.

The RESTCONF protocol allows data to be encoded with either XML or JSON.
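For reference, enabling RESTCONF on an IOS XE device takes only a few commands; a minimal sketch (the username and password are only examples):

R1(config)# username admin privilege 15 secret MyPassw0rd
R1(config)# ip http secure-server
R1(config)# restconf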

YANG Quick summary

YANG (Yet Another Next Generation) is a data modelling language, providing a standardized way to model the operational and configuration data of a network device. YANG can then be converted into any encoding format, e.g. XML or JSON. An example of a YANG model is shown below (source: Cisco Live DEVNET-1721):

Yang_example.png

Question 1

Explanation

An Ansible-managed node can be a Juniper device or another vendor’s device as well, so answer A is not correct.

Ansible communicates with managed nodes via SSH -> Answer B is correct.

An Ansible ad-hoc command uses the /usr/bin/ansible command-line tool to automate a single task on one or more managed nodes. Ad-hoc commands are quick and easy, but they are not reusable -> It is not a requirement either -> Answer C is not correct.

Ansible Tower is a web-based solution that makes Ansible even easier to use for IT teams of all kinds. But it is not a requirement to run Ansible -> Answer D is not correct.

Ansible_workflow.jpg

Note: Managed Nodes are the network devices (and/or servers) you manage with Ansible. Managed nodes are also sometimes called “hosts”. Ansible is not installed on managed nodes.

Question 2

Explanation

When a device boots up with the startup configuration, the nginx process will be running. NGINX is an internal webserver that acts as a proxy webserver. It provides Transport Layer Security (TLS)-based HTTPS. RESTCONF request sent via HTTPS is first received by the NGINX proxy web server, and the request is transferred to the confd web server for further syntax/semantics check.

Reference: https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/prog/configuration/168/b_168_programmability_cg/RESTCONF.html

The HTTPS-based RESTCONF protocol (RFC 8040), which is a stateless protocol, uses secure HTTP methods to provide CREATE, READ, UPDATE and DELETE (CRUD) operations on a conceptual datastore containing YANG-defined data -> RESTCONF only uses HTTPS.

Note: In fact answer C is also correct:

RESTCONF servers MUST present an X.509v3-based certificate when establishing a TLS connection with a RESTCONF client. The use of X.509v3-based certificates is consistent with NETCONF over TLS.

Reference: https://tools.ietf.org/html/rfc8040

But answer A is still a better choice.

Question 3

Explanation

RESTCONF operations include OPTIONS, HEAD, GET, POST, PUT, PATCH, DELETE.

RESTCONF operation: Description
OPTIONS: Determine which methods are supported by the server.
GET: Retrieve data and metadata about a resource.
HEAD: The same as GET, but only the response headers are returned.
POST: Create a resource or invoke an RPC operation.
PUT: Create or replace a resource.
PATCH: Create or update (but not delete) various resources.
DELETE: Sent by a client to delete a target resource.

Question 4

Question 5

Explanation

An EEM policy is an entity that defines an event and the actions to be taken when that event occurs. There are two types of EEM policies: an applet or a script. An applet is a simple form of policy that is defined within the CLI configuration. A script is a form of policy that is written in Tool Command Language (Tcl).

There are two ways to manually run an EEM policy. EEM usually schedules and runs policies on the basis of an event specification that is contained within the policy itself. The event none command allows EEM to identify an EEM policy that can be manually triggered. To run the policy, use either the action policy command in applet configuration mode or the event manager run command in privileged EXEC mode.

Reference: https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/eem/configuration/xe-3s/eem-xe-3s-book/eem-policy-cli.html
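A minimal sketch of a manually triggered applet using “event none” (the applet name and actions are only examples):

R1(config)# event manager applet SAVE-CONFIG
R1(config-applet)# event none
R1(config-applet)# action 1.0 cli command "enable"
R1(config-applet)# action 2.0 cli command "write memory"
R1(config-applet)# end
R1# event manager run SAVE-CONFIG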

Question 6

Explanation

EEM offers the ability to monitor events and take informational or corrective action when the monitored events occur or reach a threshold. An EEM policy is an entity that defines an event and the actions to be taken when that event occurs. There are two types of EEM policies: an applet or a script. An applet is a simple form of policy that is defined within the CLI configuration.

To specify the event criteria for an Embedded Event Manager (EEM) applet that is run by sampling Simple Network Management Protocol (SNMP) object identifier values, use the event snmp command in applet configuration mode.
event snmp oid oid-value get-type {exact | next} entry-op operator entry-val entry-value [exit-comb {or | and}] [exit-op operator] [exit-val exit-value] [exit-time exit-time-value] poll-interval poll-int-value

+ oid: Specifies the SNMP object identifier (object ID)
+ get-type: Specifies the type of SNMP get operation to be applied to the object ID specified by the oid-value argument.
— next – Retrieves the object ID that is the alphanumeric successor to the object ID specified by the oid-value argument.
+ entry-op: Compares the contents of the current object ID with the entry value using the specified operator. If there is a match, an event is triggered and event monitoring is disabled until the exit criteria are met.
+ entry-val: Specifies the value with which the contents of the current object ID are compared to decide if an SNMP event should be raised.
+ exit-op: Compares the contents of the current object ID with the exit value using the specified operator. If there is a match, an event is triggered and event monitoring is reenabled.
+ poll-interval: Specifies the time interval between consecutive polls (in seconds)

Reference: https://www.cisco.com/en/US/docs/ios/12_3t/12_3t4/feature/guide/gtioseem.html

In particular, this EEM applet reads the next value of the above OID every 5 seconds and triggers an action if the value is greater than or equal to (ge) 75%.

Question 7

Explanation

JSON Web Token (JWT) is an open standard (RFC 7519) that defines a compact and self-contained way for securely transmitting information between parties as a JSON object. This information can be verified and trusted because it is digitally signed. JWTs can be signed using a secret (with the HMAC algorithm) or a public/private key pair using RSA or ECDSA.

JSON Web Tokens are composed of three parts, separated by a dot (.): Header, Payload, Signature. Therefore, a JWT typically looks like the following:

xxxxx.yyyyy.zzzzz

The header typically consists of two parts: the type of the token, which is JWT, and the signing algorithm being used, such as HMAC SHA256 or RSA.
The second part of the token is the payload, which contains the claims. Claims are statements about an entity (typically, the user) and additional data.
To create the signature part you have to take the encoded header, the encoded payload, a secret, the algorithm specified in the header, and sign that.

Reference: https://jwt.io/introduction/
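
For illustration only, here is a minimal Python sketch (standard library only) that builds the header.payload.signature structure with HMAC SHA-256. The secret and claims are made up, and a real application should use a proper JWT library such as PyJWT:

import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWT uses URL-safe Base64 without padding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

header = {"alg": "HS256", "typ": "JWT"}
payload = {"sub": "user1", "admin": True}     # example claims
secret = b"my-shared-secret"                  # made-up secret

signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(payload).encode())
signature = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()

token = signing_input + "." + b64url(signature)
print(token)   # prints a token in the xxxxx.yyyyy.zzzzz format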

Question 8

Explanation

When you use the sync yes option in the event cli command, the EEM applet runs before the CLI command is executed. The EEM applet should set the _exit_status variable to indicate whether the CLI command should be executed (_exit_status set to one) or not (_exit_status set to zero).

With the sync no option, the EEM applet is executed in background in parallel with the CLI command.

Reference: https://blog.ipspace.net/2011/01/eem-event-cli-command-options-and.html

Question 9

Question 10

Explanation

YANG (Yet Another Next Generation) is a data modeling language for the definition of data sent over network management protocols such as NETCONF and RESTCONF.

Question 11

Explanation

The REST API accepts and returns HTTP (not enabled by default) or HTTPS messages that contain JavaScript Object Notation (JSON) or Extensible Markup Language (XML) documents. You can use any programming language to generate the messages and the JSON or XML documents that contain the API methods or Managed Object (MO) descriptions.

Reference: https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/2-x/rest_cfg/2_1_x/b_Cisco_APIC_REST_API_Configuration_Guide/b_Cisco_APIC_REST_API_Configuration_Guide_chapter_01.html

Question 12

Explanation

This JSON can be written as follows:

{
   "switch": {
      "name": "dist1",
      "interfaces": ["gig1", "gig2", "gig3"]
   }
}

Automation Questions 2

January 24th, 2021 digitaltut 17 comments

Question 1

Explanation

The words “try” and “except” are Python keywords and are used to catch exceptions. For example:

try:
    print(1 / 0)
except ZeroDivisionError:
    print("Error! We cannot divide by zero!!!")

Question 2

Explanation

If we have a JSON string, we can parse it with the json.loads() method, so we do not need a live server response to test this question. Therefore, in order to test the result above, you can try this Python code:

import json
json_string = """
{
 "ins_api": { <!!!Please copy the code above and put here. We omitted it to save some space!!!>
 }
}
""" 
response = json.loads(json_string)

print(response['ins_api']['outputs']['output']['body']['kickstart_ver_str'])

And this is the result:

Python_JSON_print.jpg

Note:
+ If you want to run the full code in this question in Python (with a real HTTP JSON response), you must first install “requests” package before “import requests”.
+ The error “NameError: name ‘json’ is not defined” is only shown if we forget the line “import json” in the Python code -> Answer A is not correct.
+ We only see the “KeyError” message if we try to print out an unknown attribute (key). For example:

print(response['ins_api']['outputs']['output']['body']['unknown_attribute'])

Python_error_unknow_attribute.jpg
+ Triple quotes (""") in Python allow strings to span multiple lines, including verbatim NEWLINEs, TABs, and any other special characters.

Question 3

Explanation

Cisco IOS XE supports the Yet Another Next Generation (YANG) data modeling language. YANG can be used with the Network Configuration Protocol (NETCONF) to provide the desired solution of automated and programmable network operations. NETCONF (RFC 6241) is an XML-based protocol that client applications use to request information from and make configuration changes to the device. YANG is primarily used to model the configuration and state data used by NETCONF operations.

Reference: https://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst9500/software/release/16-5/configuration_guide/prog/b_165_prog_9500_cg/data_models.pdf

Note: Although NETCONF also uses XML, XML itself is not a data modeling language.

Question 4

Explanation

The 404 (Not Found) error status code indicates that the REST API cannot map the client’s URI to a resource, although the resource may become available in the future. Subsequent requests by the client are permissible.

Reference: https://restfulapi.net/http-status-codes/

Question 5

Explanation

A 401 error response indicates that the client tried to operate on a protected resource without providing the proper authorization. It may have provided the wrong credentials or none at all.

Note: A 4xx code indicates a “client error” while a 5xx code indicates a “server error”.

Reference: https://restfulapi.net/http-status-codes/
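
As a small illustration, the Python sketch below shows how a REST client might react to these status codes. The URL and token are hypothetical:

import requests

# Hypothetical protected resource and bearer token
url = "https://api.example.com/devices/1"
headers = {"Authorization": "Bearer <access-token>"}

response = requests.get(url, headers=headers)

if response.status_code == 401:
    print("Unauthorized: missing or invalid credentials (client error)")
elif response.status_code == 404:
    print("Not Found: the URI does not map to a resource (client error)")
elif response.status_code >= 500:
    print("Server error")
else:
    print("Success:", response.status_code)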

Question 6

Question 7

Explanation

The Southbound API is used to communicate with network devices.

Southbound_Northbound_APIs.jpg

Question 8

Explanation

To enable the action of printing data directly to the local tty when an Embedded Event Manager (EEM) applet is triggered, use the action puts command in applet configuration mode.

The following example shows how to print data directly to the local tty:

Router(config)# event manager applet puts
Router(config-applet)# event none
Router(config-applet)# action 1 regexp "(.*) (.*) (.*)" "one two three" _match _sub1
Router(config-applet)# action 2 puts "match is $_match"
Router(config-applet)# action 3 puts "submatch 1 is $_sub1"
Router# event manager run puts
match is one two three
submatch 1 is one
Router#

The action puts command applies to synchronous events. The output of this command for a synchronous applet is directly displayed to the tty, bypassing the syslog.

Reference: https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/eem/command/eem-cr-book/eem-cr-a1.html

Question 9

Explanation

JSON data is written as name/value pairs.
A name/value pair consists of a field name (in double quotes), followed by a colon, followed by a value:
"name":"Mark"

JSON can use arrays. Array values must be of type string, number, object, array, boolean or null. For example:
{
   "name":"John",
   "age":30,
   "alive":true,
   "cars":[ "Ford", "BMW", "Fiat" ]
}

Question 10

Explanation

An agentless tool means that no software or agent needs to be installed on the client machines that are to be managed. Ansible is such an agentless tool. In contrast, an agent-based tool requires software (an agent) to be installed on the client (-> Answer D is not correct).

With an agentless tool, the master and slave nodes can communicate directly without the need for a high-level language interpreter, but an agent-based tool requires an interpreter to be installed on both master and slave nodes -> Answer C is not correct.

An agentless tool uses standard protocols, such as SSH, to push configurations down to a device (and it can be considered a “messaging system”).

Agentless tools like Ansible can communicate directly with slave nodes via SSH -> Answer B is not correct.

Therefore only answer A is left. In this answer, “messaging systems” should be understood as “additional software packages installed on slave nodes” to control those nodes. Agentless tools do not require them.

Question 11

Explanation

Yet Another Next Generation (YANG) is a language which is only used to describe data models (structure). It is not XML or JSON.

Question 12

Explanation

With synchronous mode (sync yes), the CLI command in question is not executed until the policy exits. Whether or not the command runs depends on the value of the variable _exit_status: if _exit_status is 1, the command runs; if it is 0, the command is skipped.

Automation Questions 3

January 24th, 2021 digitaltut 6 comments

Question 1

Explanation

YANG (Yet Another Next Generation) is a data modeling language for the definition of data sent over network management protocols such as NETCONF and RESTCONF.

Question 2

Explanation

One of the best practices to secure REST APIs is to hash passwords. Passwords must always be hashed to protect the system (or at least minimize the damage) even if it is compromised in a hacking attempt. There are many hashing algorithms which can prove really effective for password security, e.g. the PBKDF2, bcrypt and scrypt algorithms.

Other ways to secure REST APIs are: Always use HTTPS, Never expose information on URLs (Usernames, passwords, session tokens, and API keys should not appear in the URL), Adding Timestamp in Request, Using OAuth, Input Parameter Validation.

Reference: https://restfulapi.net/security-essentials/

We should not use MD5 or a plain SHA algorithm (SHA-1, SHA-256, SHA-512…) to hash passwords, as these fast general-purpose hashes can be brute-forced relatively quickly.

Note: A brute-force attack is an attempt to discover a password by systematically trying every possible combination of letters, numbers, and symbols until you discover the one correct combination that works.
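
For illustration, below is a minimal Python sketch of password hashing with PBKDF2 using the standard-library hashlib.pbkdf2_hmac() function. The password, salt size and iteration count are only example values:

import hashlib
import os

password = "S3cret!"                      # example password
salt = os.urandom(16)                     # random per-user salt
iterations = 100_000                      # illustrative work factor

# Derive a hash that is stored instead of the plain-text password
pwd_hash = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
print(salt.hex(), pwd_hash.hex())

# Verification: re-derive with the stored salt and compare
check = hashlib.pbkdf2_hmac("sha256", "S3cret!".encode(), salt, iterations)
print(check == pwd_hash)                  # True when the password matches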

Question 3

Explanation

The example below shows the usage of lock command:

from ncclient import manager

# "template" is an XML configuration template defined elsewhere in the script
def demo(host, user, names):
    with manager.connect(host=host, port=22, username=user) as m:
        with m.locked(target='running'):
            for n in names:
                m.edit_config(target='running', config=template % n)

The command “m.locked(target='running')” causes a lock to be acquired on the running datastore.

Question 4

Explanation

The most common implementations of OAuth (OAuth 2.0) use one or both of these tokens:
+ access token: sent like an API key, it allows the application to access a user’s data; optionally, access tokens can expire.
+ refresh token: optionally part of an OAuth flow, a refresh token is used to retrieve a new access token when the current one has expired. OAuth2 combines Authentication and Authorization to allow more sophisticated scope and validity control.
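
The Python sketch below shows how a client might use these two tokens. The endpoint URLs, parameter names and credentials are generic OAuth 2.0 placeholders, not any specific provider’s API:

import requests

TOKEN_URL = "https://auth.example.com/oauth/token"   # hypothetical authorization server

# Use the access token like an API key on each request
def call_api(access_token):
    headers = {"Authorization": f"Bearer {access_token}"}
    return requests.get("https://api.example.com/data", headers=headers)

# When the access token expires, exchange the refresh token for a new one
def refresh(refresh_token, client_id, client_secret):
    data = {
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
        "client_id": client_id,
        "client_secret": client_secret,
    }
    return requests.post(TOKEN_URL, data=data).json()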

Question 5

Explanation

Ansible works by connecting to your nodes and pushing out small programs, called “Ansible modules” to them. These programs are written to be resource models of the desired state of the system. Ansible then executes these modules (over SSH by default), and removes them when finished.

Chef is a much older, mature configuration management solution. Unlike Ansible, it requires the installation of an agent, named chef-client, on each server. Also, unlike Ansible, it has a Chef server from which each client pulls its configuration.

Question 6

Explanation

Accept and Content-type are both headers sent from a client (a browser) to a service.
The Accept header is a way for a client to specify the media type of the response content it is expecting, and Content-Type is a way to specify the media type of the request body being sent from the client to the server.

The response was sent in XML so we can say the Accept header sent was application/xml.
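
A small Python sketch of both headers on one request (the endpoint and body are hypothetical):

import requests

url = "https://api.example.com/interfaces"   # hypothetical endpoint

headers = {
    "Accept": "application/xml",         # media type the client wants back
    "Content-Type": "application/json",  # media type of the body being sent
}

body = '{"name": "GigabitEthernet1"}'
response = requests.post(url, data=body, headers=headers)
print(response.headers.get("Content-Type"))   # media type the server actually returned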

Question 7

Explanation

Customer needs are fast evolving. Typically, a network center is a heterogeneous mix of various devices at multiple layers of the network. Bulk and automatic configurations need to be accomplished. CLI scraping is not flexible and optimal. Re-writing scripts many times, even for small configuration changes, is cumbersome. Bulk configuration changes through CLIs are error-prone and may cause system issues. The solution lies in using data models, a programmatic and standards-based way of writing configurations to any network device, replacing the process of manual configuration. Data models are written in a standard, industry-defined language. Although configurations using CLIs are easier (more human-friendly), automating the configuration using data models results in scalability.

Question 8

Explanation

Missing Data Model RPC Error Reply Message
If a request is made for a data model that doesn’t exist on the Catalyst 3850 or a request is made for a leaf that is not implemented in a data model, the Server (Catalyst 3850) responds with an empty data response. This is expected behavior.

Reference: https://www.cisco.com/c/en/us/support/docs/storage-networking/management/200933-YANG-NETCONF-Configuration-Validation.html

Question 9

Explanation

ncclient is a Python library that facilitates client-side scripting and application development around the NETCONF protocol.

The above Python snippet uses the ncclient to connect and establish a NETCONF session to a Nexus device (which is also a NETCONF server).
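
For reference, here is a minimal ncclient sketch that opens a NETCONF session and retrieves the running configuration. The host, port and credentials are placeholders, and ncclient must be installed first (pip install ncclient):

from ncclient import manager

# Hypothetical NETCONF-enabled device and credentials
with manager.connect(
    host="10.0.0.1",
    port=830,
    username="admin",
    password="admin",
    hostkey_verify=False,
) as m:
    # Retrieve the running configuration over the established NETCONF session
    reply = m.get_config(source="running")
    print(reply)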

Question 10

Question 11

Explanation

JSON syntax structure:
+ uses curly braces {} to hold objects and square brackets [] to hold arrays
+ JSON data is written as key/value pairs
+ A key/value pair consists of a key (must be a string in double quotation marks ""), followed by a colon :, followed by a value. For example: "name":"John"
+ Each key must be unique
+ Values must be of type string, number, object, array, boolean or null
+ Multiple key/value pairs within an object are separated by commas ,

JSON can use arrays. Arrays are used to store multiple values in a single variable. For example:

{
   "name":"John",
   "age":30,
   "cars":[ "Ford", "BMW", "Fiat" ]
}

In the above example, “cars” is an array which contains three values “Ford”, “BMW” and “Fiat”.

Note: Although our correct answer above does not have curly braces to hold objects, it is still the best choice here.

Miscellaneous Questions

January 23rd, 2021 digitaltut 13 comments

Question 1

Explanation

Although some Cisco webpages (like this one) mention the “logging synchronous” command in global configuration mode (that is, “Router(config)#logging synchronous”), in fact we cannot use it in global configuration mode; we can only use this command in line configuration mode. Therefore answer C is better than answer A.

Let’s see how the “logging synchronous” command affects command typing:

Without this command, a message may pop up and you may not know what you typed if that message is too long. When trying to erase (backspace) your command, you realize you are erasing the message instead.

without_logging_synchronous.jpg

With this command enabled, when a message pops up you will be put to a new line with your typing command which is very nice:

with_logging_synchronous.jpg

Question 2

Explanation

The Link Management Protocol (LMP) performs the following functions:
+ Verifies link integrity by establishing bidirectional traffic forwarding, and rejects any unidirectional links
+ Exchanges periodic hellos to monitor and maintain the health of the links
+ Negotiates the version of the StackWise Virtual header between the switches
+ StackWise Virtual link role resolution

Reference: https://www.cisco.com/c/en/us/products/collateral/switches/catalyst-9000/nb-06-cat-9k-stack-wp-cte-en.html

Question 3

Explanation

Syslog levels are listed below:

Level Keyword Description
0 emergencies System is unusable
1 alerts Immediate action is needed
2 critical Critical conditions exist
3 errors Error conditions exist
4 warnings Warning conditions exist
5 notifications Normal, but significant, conditions exist
6 informational Informational messages
7 debugging Debugging messages

Number “3” in “%LINK-3-UPDOWN” is the severity level of this message so in this case it is “errors”.
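
As a small illustration of the %FACILITY-SEVERITY-MNEMONIC format, the Python sketch below extracts the severity digit from a message such as %LINK-3-UPDOWN (the message text after the mnemonic is only illustrative):

import re

SEVERITIES = {
    0: "emergencies", 1: "alerts", 2: "critical", 3: "errors",
    4: "warnings", 5: "notifications", 6: "informational", 7: "debugging",
}

message = "%LINK-3-UPDOWN: Interface GigabitEthernet0/1, changed state to down"

# Syslog mnemonic format is %FACILITY-SEVERITY-MNEMONIC
match = re.match(r"%(\w+)-(\d)-(\w+)", message)
if match:
    facility, severity, mnemonic = match.group(1), int(match.group(2)), match.group(3)
    print(facility, mnemonic, SEVERITIES[severity])   # LINK UPDOWN errors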

Question 4

Explanation

The application residing on Device 1 originates an RSVP message called Path, which is sent to the same destination IP address as the data flow for which a reservation is requested (that is, 10.60.60.60) and is sent with the “router alert” option turned on in the IP header. The Path message contains, among other things, the following objects:

–The “sender T-Spec” (traffic specification) object, which characterizes the data flow for which a reservation will be requested. The T-Spec basically defines the maximum IP bandwidth required for a call flow using a specific codec. The T-Spec is typically defined using values for the data flow’s average bit rate, peak rate, and burst size.

Reference: https://www.cisco.com/c/en/us/td/docs/voice_ip_comm/cucm/srnd/9x/uc9x/cac.html

Question 5

Explanation

Cisco IOS Nonstop Forwarding (NSF) always runs with stateful switchover (SSO) and provides redundancy for Layer 3 traffic.

Reference: https://www.cisco.com/en/US/docs/switches/lan/catalyst3850/software/release/3se/consolidated_guide/b_consolidated_3850_3se_cg_chapter_01101110.pdf

Drag Drop Questions

January 22nd, 2021 digitaltut 18 comments

Question 1

Explanation

OSPF metric is only dependent on the interface bandwidth & reference bandwidth while EIGRP metric is dependent on bandwidth and delay by default.

Both OSPF and EIGRP have three tables to operate: neighbor table (store information about OSPF/EIGRP neighbors), topology table (store topology structure of the network) and routing table (store the best routes).

Question 2

Question 3

Explanation

The following diagram illustrates the key difference between traffic policing and traffic shaping. Traffic policing propagates bursts. When the traffic rate reaches the configured maximum rate (or committed information rate), excess traffic is dropped (or remarked). The result is an output rate that appears as a saw-tooth with crests and troughs. In contrast to policing, traffic shaping retains excess packets in a queue and then schedules the excess for later transmission over increments of time. The result of traffic shaping is a smoothed packet output rate.

traffic_policing_vs_shaping.jpg

Note: Committed information rate (CIR): The minimum guaranteed data transfer rate agreed to by the routing device.

Question 4

Explanation

ITR is the function that maps the destination EID to a destination RLOC and then encapsulates the original packet with an additional header that has the source IP address of the ITR RLOC and the destination IP address of the RLOC of an Egress Tunnel Router (ETR). After the encapsulation, the original packet becomes a LISP packet.

ETR is the function that receives LISP-encapsulated packets, decapsulates them and forwards them to its local EIDs. This function also requires EID-to-RLOC mappings, so we need to specify a “map-server” IP address and the key (password) for authentication.

A LISP proxy ETR (PETR) implements ETR functions on behalf of non-LISP sites. A PETR is typically used when a LISP site needs to send traffic to non-LISP sites but the LISP site is connected through a service provider that does not accept nonroutable EIDs as packet sources. PETRs act just like ETRs but for EIDs that send traffic to destinations at non-LISP sites.

Map Server (MS): processes the registration of authentication keys and EID-to-RLOC mappings. ETRs send periodic Map-Register messages to all of their configured Map Servers.

Map Resolver (MR): a LISP component which accepts LISP encapsulated Map-Requests, typically from an ITR, and quickly determines whether or not the destination IP address is part of the EID namespace.

Question 5

Explanation

Unlike OSPF where we can summarize only on ABR or ASBR, in EIGRP we can summarize anywhere.

Manual summarization can be applied anywhere in the EIGRP domain, on every router and on every interface, via the ip summary-address eigrp as-number address mask [administrative-distance] command (for example: ip summary-address eigrp 1 192.168.16.0 255.255.248.0). The summary route will exist in the routing table as long as at least one more-specific route exists. If the last specific route disappears, the summary route also fades out. The metric used by an EIGRP manual summary route is the minimum metric of the specific routes.

Question 6

Explanation

When Secure Vault is not in use, all information stored in its container is encrypted. When a user wants to use the files and notes stored within the app, they have to first decrypt the database. This happens by filling in a previously determined Security Lock – which could be a PIN or a password of the user’s choosing.

When a user leaves the app, it automatically encrypts everything again. This way all data stored in Secure Vault is decrypted only while a user is actively using the app. In all other instances, it remains locked to any attacker, malware or spyware trying to access the data.

How token-based authentication works: Users log in to a system and – once authenticated – are provided with a token to access other services without having to enter their username and password multiple times. In short, token-based authentication adds a second layer of security to application, network, or service access.

OAuth is an open standard for authorization used by many APIs and modern applications. The simplest example of OAuth is when you go to log onto a website and it offers one or more opportunities to log on using another website’s/service’s logon. You then click on the button linked to the other website, the other website authenticates you, and the website you were originally connecting to logs you on itself afterward using permission gained from the second website.

Question 7

Explanation

To attach a policy map to an input interface, a virtual circuit (VC), an output interface, or a VC that will be used as the service policy for the interface or VC, use the service-policy command in the appropriate configuration mode.

Class of Service (CoS) is a 3-bit field within an Ethernet frame header when we use 802.1Q, which supports virtual LANs on an Ethernet network. This field specifies a priority value between 0 and 7 inclusive, which can be used by Quality of Service (QoS) to differentiate traffic.

The Differentiated Services Code Point (DSCP) is a 6-bit field in the IP header for the classification of packets. Differentiated Services is a technique which is used to classify and manage network traffic and it helps to provide QoS for modern Internet networks. It can provide services to all kinds of networks.

Traffic policing is also known as rate limiting as it propagates bursts. When the traffic rate reaches the configured maximum rate (or committed information rate), excess traffic is dropped (or remarked). The result is an output rate that appears as a saw-tooth with crests and troughs.

Traffic shaping retains excess packets in a queue and then schedules the excess for later transmission over increments of time -> It causes delay.

Question 8

Question 9

Question 10

Explanation

+ StealthWatch: performs security analytics by collecting network flows via NetFlow
+ ESA: email security solution which protects against email threats like ransomware, business email compromise, phishing, whaling, and many other email-driven attacks
+ AMP for Endpoints (AMP4E): provides malware protection on endpoints
+ Umbrella: provides DNS protection by blocking malicious destinations using DNS
+ Firepower Threat Defense (FTD): provides a comprehensive suite of security features such as firewall capabilities, monitoring, alerts, Intrusion Detection System (IDS) and Intrusion Prevention System (IPS).

Drag Drop Questions 2

January 22nd, 2021 digitaltut 3 comments

Question 1

Explanation

EIGRP maintains alternative loop-free backup paths via feasible successors. To qualify as a feasible successor, a route must have an Advertised Distance (AD) less than the Feasible Distance (FD) of the current successor route.

Advertised distance (AD): the cost from the neighbor to the destination.
Feasible distance (FD): The sum of the AD plus the cost between the local router and the next-hop router

Question 2

Explanation

There are four messages sent between the DHCP Client and DHCP Server: DHCPDISCOVER, DHCPOFFER, DHCPREQUEST and DHCPACKNOWLEDGEMENT. This process is often abbreviated as DORA (for Discover, Offer, Request, Acknowledgement).

Question 3

JSON Web Token (JWT) Tutorial

digitaltut 8 comments

As you know, the World Wide Web we know today is based on HTTP (which includes both HTTP and HTTPS). If you are reading this tutorial then surely you had to access networktut.com via HTTP. But HTTP is a stateless protocol, so if you logged in and then visited another page on the same site, you would be forced to log in again since HTTP does not save your login status. In order to solve this problem, there are two popular ways to keep the information you provided for later use: Session-based authentication (sometimes called Cookie-based authentication) and Token-based authentication. In this tutorial we will learn both and the difference between them.

Read more…

Router Questions

August 7th, 2019 digitaltut 240 comments

Question 1

Explanation

To determine which scheme has been used to encrypt a specific password, check the digit preceding the encrypted string in the configuration file. If that digit is a 7, the password has been encrypted using the weak algorithm. If the digit is a 5, the password has been hashed using the stronger MD5 algorithm.

For example, in the configuration command:

enable secret 5 $1$iUjJ$cDZ03KKGh7mHfX2RSbDqP.

The enable secret has been hashed with MD5, whereas in the command:

username jdoe password 7 07362E590E1B1C041B1E124C0A2F2E206832752E1A01134D

The password has been encrypted using the weak reversible algorithm.

When we enter the “enable secret” command with a number after it, IOS understands that the password has already been encrypted, so it will not encrypt it again and accepts that password as-is.

In new Cisco IOS versions (v15+), it seems the device does not recognize the “enable secret 7” command as an encrypted password. We tried on Cisco IOS v15.4 and saw this:

enable_secret.jpg

When we tried to enter the command “enable secret 7 07362E590E1B1C041B1E124C0A2F2E206832752E1A01134D”, the Cisco IOS automatically changed the command to “enable secret 5 $1$dLq2$qgzb4bgdsasX8dx1oHOkD.” (in the running-config file). So if you paste an “enable secret 7 …” command from an old Cisco IOS version, you can no longer log in with your password.

Note: In fact, there is an error with answer D. When we entered the command in answer D, the router rejected the encrypted password because it was not a valid encrypted secret password. That means the router also checks whether the password was hashed correctly. Still, it is the best answer in this question.

enable_secret_error.jpg

Question 2

Explanation

Excessive debug output to the console port of a router can cause the router to hang. This is because the router automatically prioritizes console output ahead of other router functions, so if the router is processing a large debug output to the console port, it may hang. Therefore, if the debug output is excessive, use the vty (telnet) lines or the logging buffer to obtain your debugs.

Note: By default, logging is enabled on the console port. Hence, the console port always processes debug output even if you are actually using some other port or method (such as Aux, vty or buffer) to capture the output. Cisco therefore recommends that, under normal operating conditions, you have the no logging console command enabled at all times and use other methods to capture debugs.

To enable logging on your virtual terminal connection (telnet), use the “terminal monitor” command in privileged mode (Router#).

Reference: http://www.cisco.com/c/en/us/support/docs/dial-access/integrated-services-digital-networks-isdn-channel-associated-signaling-cas/10374-debug.html

Question 3

Explanation

Per-packet load-balancing means that the router sends one packet for destination1 over the first path, the second packet for (the same) destination1 over the second path, and so on. Per-packet load balancing guarantees equal load across all links. However, there is potential that the packets may arrive out of order at the destination because differential delay may exist within the network -> Answer D is correct.

When searching the routing table, the router looks for the longest match for the destination IP address prefix. This is done at “process level” (known as process switching), which means that the lookup is considered as just another process queued among other CPU processes

Interrupt-level switching means that when a packet arrives, an interrupt is triggered which causes the CPU to postpone other tasks in order to handle that packet.

In general, process switching is slower than interrupt-level switching, and per-packet load balancing can cause out-of-order packets.

Question 4

Explanation

The “debug condition interface <interface>” command is used to disable debugging messages for all interfaces except the specified interface, so in this case the debug output will be shown for the Fa0/1 interface only.

Note: If another “debug condition interface fa0/0” command had also been configured in this question, then the answer would be C (both interfaces would show debugging output).

Question 5

Explanation

There are a few simple steps you can follow to ensure your VTY lines are as secure as possible. The easiest way is to enable username / password authentication. Other ways are to include an access-list to prevent unwanted IP addresses from connecting and use SSH to encrypt the traffic connecting to the device.

Question 6

Explanation

An Ethernet Switch Module can be installed in an Integrated Services Router (ISR) to perform both IP routing and inter-VLAN routing. With this module, the ISR configuration will contain “interface Vlan” commands.

Question 7

Question 8

Access List

August 6th, 2019 digitaltut 138 comments

Question 1

Explanation

The first answer is not correct because the 10.0.0.0 network range is not correct. It should be 10.0.0.0 to 10.255.255.255.

Question 2

Explanation

Logging-enabled access control lists (ACLs) provide insight into traffic as it traverses the network or is dropped by network devices. Unfortunately, ACL logging can be CPU intensive and can negatively affect other functions of the network device. There are two primary factors that contribute to the CPU load increase from ACL logging: process switching of packets that match log-enabled access control entries (ACEs) and the generation and transmission of log messages.

Process switching is the slowest switching method (compared to fast switching and Cisco Express Forwarding) because it must find a destination in the routing table. Process switching must also construct a new Layer 2 frame header for every packet. With process switching, when a packet comes in, the scheduler calls a process that examines the routing table, determines which interface the packet should be switched to and then switches the packet. The problem is that this happens for every packet.

Reference: http://www.cisco.com/web/about/security/intelligence/acl-logging.html

Question 3

Explanation

If you use the “debug ip packet” command on a production router, you can bring it down, since the command generates output for every packet and the output can be extensive. The best way to limit the output of debug ip packet is to create an access-list that is linked to the debug. Only packets that match the access-list criteria will be subject to debug ip packet. For example, this is how to monitor traffic from 1.1.1.1 to 2.2.2.2:

access-list 100 permit ip host 1.1.1.1 host 2.2.2.2
debug ip packet 100

Note: The “debug ip packet” command is used to monitor packets that are processed by the router’s routing engine and are not fast switched.

Question 4

Question 5

Question 6

Explanation

+ The question asks to “always” block traffic (every week) so we must use keyword “periodic”.
+ Traffic should be blocked to 11:59 PM, which means 23:59

Note: The time is specified in 24-hour time (hh:mm), where the hours range from 0 to 23 and the minutes range from 0 to 59

Only answer B satisfies these two requirements, so it is the best answer. In fact, none of the answers is fully correct, as the access-list should deny web traffic, not permit it as shown in the answers.

Question 7

Question 8

Question 9

Explanation

The established keyword is only applicable to TCP access list entries to match TCP segments that have the ACK and/or RST control bit set (regardless of the source and destination ports), which assumes that a TCP connection has already been established in one direction only. Let’s see an example below:

access-list_established.jpg

Suppose you only want to allow the hosts inside your company to telnet to an outside server but not vice versa, you can simply use an “established” access-list like this:

access-list 100 permit tcp any any established
access-list 101 permit tcp any any eq telnet
!
interface S0/0
ip access-group 100 in
ip access-group 101 out

Note:

Suppose host A wants to start communicating with host B using TCP. Before they can send real data, a three-way handshake must be established first. Let’s see how this process takes place:

TCP_Three_way_handshake.jpg

1. First host A will send a SYN message (a TCP segment with the SYN flag set to 1; SYN is short for SYNchronize) to indicate it wants to set up a connection with host B. This message includes a sequence (SEQ) number for tracking purposes. This sequence number can be any 32-bit number (range from 0 to 2^32 – 1) so we use “x” to represent it.

2. After receiving SYN message from host A, host B replies with SYN-ACK message (some books may call it “SYN/ACK” or “SYN, ACK” message. ACK is short for ACKnowledge). This message includes a SYN sequence number and an ACK number:
+ SYN sequence number (let’s called it “y”) is a random number and does not have any relationship with Host A’s SYN SEQ number.
+ ACK number is the next number of Host A’s SYN sequence number it received, so we represent it with “x+1”. It means “I received your part. Now send me the next part (x + 1)”.

The SYN-ACK message indicates host B accepts to talk to host A (via the ACK part) and asks whether host A still wants to talk to it as well (via the SYN part).

3. After Host A received the SYN-ACK message from host B, it sends an ACK message with ACK number “y+1” to host B. This confirms host A still wants to talk to host B.

Question 10

Explanation

Reflexive access lists provide filtering on upper-layer IP protocol sessions. They contain temporary entries that are automatically created when a new IP session begins. They are nested within extended, named IP access lists that are applied to an interface. Reflexive access lists are typically configured on border routers, which pass traffic between an internal and external network. These are often firewall routers. Reflexive access lists do not end with an implicit deny statement because they are nested within an access list and the subsequent statements need to be examined.

Reference: http://www.cisco.com/en/US/docs/ios-xml/ios/sec_data_acl/configuration/15-1s/sec-access-list-ov.html

Question 11

Explanation

The command “ipv6 traffic-filter access-list-name { in | out }” applies the access list to incoming or outgoing traffic on the interface.

Reference: http://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst3750/software/release/12-2_55_se/configuration/guide/scg3750/swv6acl.html

Question 12

Question 13

Explanation

When the ACL logging feature is configured, the system monitors ACL flows and logs dropped packets and statistics for each flow that matches the deny conditions of the ACL entry.

The log and log-input options apply to an individual ACE and cause packets that match the ACE to be logged. The sample below illustrates the initial message and periodic updates sent by an IOS device with a default configuration using the log ACE option.

*May 1 22:12:13.243: %SEC-6-IPACCESSLOGP: list ACL-IPv4-E0/0-IN permitted tcp 192.168.1.3(1024) -> 192.168.2.1(22), 1 packet

Reference: https://www.cisco.com/c/en/us/about/security-center/access-control-list-logging.html

From the example above we can see that ACL log messages are generated at severity level 6 (%SEC-6-), so when an ACL drops a packet it also generates a level 6 syslog message.

Point-to-Point Protocol

August 5th, 2019 digitaltut 62 comments

If you are not sure about PPP and PPPoE, please read my PPP tutorial and PPPoE tutorial.

Question 1

Explanation

Password Authentication Protocol (PAP) is a very basic two-way process. The username and password are sent in plain text, there is no encryption or protection. If it is accepted, the connection is allowed. The configuration below shows how to configure PAP on two routers:

R1(config)#username R2 password digitaltut1
R1(config)#interface s0/0/0
R1(config-if)#encapsulation ppp
R1(config-if)#ppp authentication PAP
R1(config-if)#ppp pap sent-username R1 password digitaltut2
R2(config)#username R1 password digitaltut2
R2(config)#interface s0/0/0
R2(config-if)#encapsulation ppp
R2(config-if)#ppp authentication PAP
R2(config-if)#ppp pap sent-username R2 password digitaltut1

Note: The PAP “sent-username” and password that each router sends must match those specified with the “username … password …” command on the other router.

Question 2

Explanation

Challenge Handshake Authentication Protocol (CHAP) periodically verifies the identity of the client by using a three-way handshake. The three-way handshake steps are as follows:

1. When a client contacts a server that uses CHAP, the server (called the authenticator) responds by sending the client a simple text message (sometimes called the challenge text). This text is not important and it does not matter if anyone intercepts it.
2. The client then takes this information and hashes it together with its password, which is shared by both the client and server. The hashed result is then returned to the server.
3. The server has the same password and uses it as a key to hash the information it previously sent to the client. It compares its result with the hashed result sent by the client. If they are the same, the client is assumed to be authentic.

Note: PPP supports two authentication protocols: PAP and CHAP.

Question 3

Question 4

Explanation

PPP supports two authentication protocols: Password Authentication Protocol (PAP) and Challenge Handshake Authentication Protocol (CHAP). PAP authentication involves a two-way handshake where the username and password are sent across the link in clear text. For more information about PPP Authentication methods, please read Point to Point Protocol (PPP) Tutorial

Question 5

Question 6

Question 7

PPPoE Questions

August 4th, 2019 digitaltut 43 comments

Question 1

Explanation

PPPoE provides a standard method of employing the authentication methods of the Point-to-Point Protocol (PPP) over an Ethernet network. When used by ISPs, PPPoE allows authenticated assignment of IP addresses. In this type of implementation, the PPPoE client and server are interconnected by Layer 2 bridging protocols running over a DSL or other broadband connection.

PPPoE is composed of two main phases:
+ Active Discovery Phase: In this phase, the PPPoE client locates a PPPoE server, called an access concentrator. During this phase, a Session ID is assigned and the PPPoE layer is established.
+ PPP Session Phase: In this phase, PPP options are negotiated and authentication is performed. Once the link setup is completed, PPPoE functions as a Layer 2 encapsulation method, allowing data to be transferred over the PPP link within PPPoE headers.

Reference: http://www.cisco.com/c/en/us/td/docs/security/asa/asa92/configuration/vpn/asa-vpn-cli/vpn-pppoe.html

Question 2

Explanation

PPP Session Phase: In this phase, PPP options are negotiated and authentication is performed. Once the link setup is completed, PPPoE functions as a Layer 2 encapsulation method, allowing data to be transferred over the PPP link within PPPoE headers.

Reference: http://www.cisco.com/c/en/us/td/docs/security/asa/asa92/configuration/vpn/asa-vpn-cli/vpn-pppoe.html

Question 3

Explanation

The “dialer persistent” command (under interface configuration mode) allows a dial-on-demand routing (DDR) dialer profile connection to be brought up without being triggered by interesting traffic. When configured, the dialer persistent command starts a timer when the dialer interface starts up and starts the connection when the timer expires. If interesting traffic arrives before the timer expires, the connection is still brought up and set as persistent. An example of configuring is shown below:

interface Dialer1
ip address 12.12.12.1 255.255.255.0
encapsulation ppp
dialer-pool 1
dialer persistent

Question 4

Explanation

The “vpdn enable” command is used to enable virtual private dialup networking (VPDN) on the router and inform the router to look for tunnel definitions in a local database and on a remote authorization server (home gateway). The following steps include: configure the VPDN group; configure the virtual-template; create the IP pools.

Question 5

Explanation

There are three authentication methods that can be used to authenticate a PPPoE connection:

+ CHAP – Challenge Handshake Authentication Protocol
+ MS-CHAP – Microsoft Challenge Handshake Authentication Protocol Version 1 & 2
+ PAP – Password Authentication Protocol

MS-CHAP and CHAP are encrypted authentication protocols, while PAP is an unencrypted authentication protocol.

Note: PAP authentication involves a two-way handshake where the username and password are sent across the link in clear text; hence, PAP authentication does not provide any protection against playback and line sniffing.

With CHAP, the server (authenticator) sends a challenge to the remote access client. The client uses a hash algorithm (also known as a hash function) to compute a Message Digest-5 (MD5) hash result based on the challenge and a hash result computed from the user’s password. The client sends the MD5 hash result to the server. The server, which also has access to the hash result of the user’s password, performs the same calculation using the hash algorithm and compares the result to the one sent by the client. If the results match, the credentials of the remote access client are considered authentic. A hash algorithm provides one-way encryption, which means that calculating the hash result for a data block is easy, but determining the original data block from the hash result is mathematically infeasible.

Question 6

Explanation

A PPPoE session is initiated by the PPPoE client. If the session has a timeout or is disconnected, the PPPoE client will immediately attempt to reestablish the session. The following four steps describe the exchange of packets that occurs when a PPPoE client initiates a PPPoE session:
1. The client broadcasts a PPPoE Active Discovery Initiation (PADI) packet.
2. When the access concentrator receives a PADI that it can serve, it replies by sending a PPPoE Active Discovery Offer (PADO) packet to the client.
3. Because the PADI was broadcast, the host may receive more than one PADO packet. The host looks through the PADO packets it receives and chooses one. The choice can be based on the access concentrator name or on the services offered. The host then sends a single PPPoE Active Discovery Request (PADR) packet to the access concentrator that it has chosen.
4. The access concentrator responds to the PADR by sending a PPPoE Active Discovery Session-confirmation (PADS) packet. At this point a virtual access interface is created that will then negotiate PPP, and the PPPoE session will run on this virtual access.

If a client does not receive a PADO for a preceding PADI, the client sends out a PADI at predetermined intervals. That interval is doubled for every successive PADI that does not evoke a response, until the interval reaches a configured maximum.

If PPP negotiation fails or the PPP line protocol is brought down for any reason, the PPPoE session and the virtual access interface will be brought down. When the PPPoE session is brought down, the client waits for a predetermined number of seconds before trying again to establish a PPPoE session.

Reference: http://www.cs.vsb.cz/grygarek/TPS/DSL/pppoe_client.pdf

Question 7

Question 8

Question 9

Explanation

The picture below shows all configuration needed for PPPoE:

PPPoE_Topology_with_config.jpg

As we can see from the PPPoE Client configuration, to get the IP address assigned from the PPPoE server the command “ip address negotiated” should be used. For more information about PPPoE configuration please read our PPPoE tutorial.

Question 10

Explanation

According to this link: http://www.cisco.com/c/en/us/td/docs/ios-xml/ios/bbdsl/configuration/xe-3s/bba-pppoe-client.html

The PPPoE client does not support the following:
+ More than ten clients per customer premises equipment (CPE) -> This means a CPE can support up to 10 clients, so answer A is correct.
+ Coexistence of the PPPoE client and server on the same device -> answer C is not correct

In the above link there is a topology that shows “DMVPN Access to Multiple Hosts from the Same PPPoE Client” -> Answer B is correct.

Question 11

Question 12

CEF & Fast Switching

August 3rd, 2019 digitaltut 67 comments

CEF Quick summary:

Cisco Express Forwarding (CEF) provides the ability to switch packets through a device in a very quick and efficient way while also keeping the load on the router’s processor low. CEF is made up of two different main components: the Forwarding Information Base (FIB) and the Adjacency Table. These are automatically updated at the same time as the routing table.

The adjacency table is tasked with maintaining the layer 2 next-hop information for the FIB.

Adjacency Types That Require Special Handling:

Glean adjacency – in short when the router is directly connected to hosts the FIB table on the router will maintain a prefix for the subnet rather than for the individual host prefix. This subnet prefix points to a GLEAN adjacency. A glean adjacency entry indicates that a particular next hop should be directly connected, but there is no MAC header rewrite information available. When the device needs to forward packets to a specific host on a subnet, Cisco Express Forwarding requests an ARP entry for the specific prefix, ARP sends the MAC address, and the adjacency entry for the host is built. 
Punt adjacency – When packets to a destination prefix can’t be CEF Switched, or the feature is not supported in the CEF Switching path, the router will then use the next slower switching mechanism configured on the router.
Null adjacency: Packets destined for a Null0 interface are dropped. Null adjacency can be used as an effective form of access filtering.

Other special adjacency types are Discard adjacency and Drop adjacency but they are not mentioned here.

Question 1

Explanation

The command “show ip cef” is used to display the CEF Forwarding Information Base (FIB) table. There are some entries we want to explain:
+ If the “Next Hop” field of a network prefix is set to receive, the entry represents an IP address on one of the router’s interfaces. In this case, 192.168.201.2 and 192.168.201.31 are IP addresses assigned to interfaces on the local router.
+ If the “Next Hop” field of a network prefix is set to attached, the entry represents a network to which the router is directly attached. In this case the prefix 192.168.201.0/27 is a network directly attached to router R2’s Fa0/0 interface.

But there are some special cases:
+ The all-0s host address (for example, 192.168.201.0/32) and the all-1s host address (not present in the output above, but for example 192.168.201.255/32) also show up as receive entries.
+ 255.255.255.255/32 is the local broadcast address for a subnet
+ 0.0.0.0/32: a reserved address, also shown as a receive entry
+ 0.0.0.0/0: This is the default route that matching all other addresses (also known as “gateway of last resort”). In this case it points to 192.168.201.1 -> Answer C is correct.

Reference: CCNP Routing and Switching ROUTE 300-101 Official Cert Guide

Question 2

Explanation

The “show adjacency” command is used to display information about the Cisco Express Forwarding adjacency table or the hardware Layer 3-switching adjacency table.

There are two known reasons for an incomplete adjacency:
+ The router cannot use ARP successfully for the next-hop interface.
+ After a clear ip arp or a clear adjacency command, the router marks the adjacency as incomplete. Then it fails to clear the entry.

Note: Two nodes in the network are considered adjacent if they can reach each other using only one hop.

Reference: http://www.cisco.com/c/en/us/support/docs/ip/express-forwarding-cef/17812-cef-incomp.html

Question 3

Explanation

The “show ip cache” command displays the contents of a router’s fast cache. An example of the output of this command is shown below:

show_ip_cache.jpg

Note: If CEF is disabled and fast switching is enabled, the router begins to populate its fast cache.

Question 4

Question 5

Explanation

Cisco Express Forwarding (CEF) provides the ability to switch packets through a device in a very quick and efficient way while also keeping the load on the router’s processor low. CEF is made up of two different main components: the Forwarding Information Base (FIB) and the Adjacency Table. These are automatically updated at the same time as the routing table.

The adjacency table is tasked with maintaining the layer 2 next-hop information for the FIB.

Question 6

Explanation

The explanation of this question is too lengthy so we recommend to read this article: http://www.cisco.com/c/en/us/support/docs/ip/express-forwarding-cef/26083-trouble-cef.html

Question 7

Explanation

Glean adjacency – in short when the router is directly connected to hosts the FIB table on the router will maintain a prefix for the subnet rather than for the individual host prefix. This subnet prefix points to a GLEAN adjacency.
Punt adjacency – When packets to a destination prefix can’t be CEF Switched, or the feature is not supported in the CEF Switching path, the router will then use the next slower switching mechanism configured on the router.

Question 8

Explanation

Nodes in the network are said to be adjacent if they can reach each other with a single hop across a link layer. In addition to the FIB, CEF uses adjacency tables to prepend Layer 2 addressing information. The adjacency table maintains Layer 2 next-hop addresses for all FIB entries.

Reference: https://www.cisco.com/c/en/us/td/docs/ios/12_2/switch/configuration/guide/fswtch_c/xcfcef.html

Question 9

Frame Relay Questions

August 2nd, 2019 digitaltut 33 comments

Question 1

Explanation

Normal (Ethernet) ARP Request knows the Layer 3 address (IP) and requests for Layer 2 address (MAC). On the other hand, Frame Relay Inverse ARP knows the Layer 2 address (DLCI) and requests for Layer 3 address (IP) so we called it “Inverse”. For detail explanation about Inverse ARP Request please read our Frame Relay tutorial – Part 2.

Question 2

Explanation

When saying “Frame Relay point-to-point” network, it means “Frame Relay subinterfaces” run “point-to-point”. Notice that Frame Relay subinterfaces can run in two modes:
+ Point-to-Point: When a Frame Relay point-to-point subinterface is configured, the subinterface emulates a point-to-point network and OSPF treats it as a point-to-point network type
+ Multipoint: When a Frame Relay multipoint subinterface is configured, OSPF treats this subinterface as an NBMA network type.

There are four main OSPF network types; their default hello & dead intervals are listed below:

Network Type Hello Interval (secs) Dead Interval (secs)
Point-to-Point 10 40
Point-to-Multipoint 30 120
Broadcast 10 40
Non-Broadcast 30 120

Therefore the default OSPF hello interval on a Frame Relay point-to-point network is 10 seconds.

Reference: http://www.cisco.com/c/en/us/support/docs/ip/open-shortest-path-first-ospf/13693-22.html

Question 3

Explanation

Traffic shaping should be used when:
+ Hub site (headquarter) has much faster speed link than the spokes (remote sites). In this case we need to rate-limit the hub site so that it does not exceed the remote side access rate
+ Hub site has the same speed link as the spokes. For example both the headquarter and the spokes use T1 links. In this case, we need to rate-limit the remote sites so as to not overrun the hub.

An example of configuring traffic shaping is shown below:

interface Serial0/1
encapsulation frame-relay
frame-relay traffic-shaping
!
interface Serial0/1.10 point-to-point
ip address 10.10.10.10 255.255.255.0
frame-relay interface-dlci 10 class my_traffic_shaping
!
map-class frame-relay my_traffic_shaping
frame-relay adaptive-shaping becn //Configure the router to respond to frame relay frames that have the BECN bit set
frame-relay cir 128000 //Specify the committed information rate (CIR) for a Frame Relay virtual circuit
frame-relay bc 8000 //Specify the committed burst size (Bc) for a Frame Relay virtual circuit.
frame-relay be 8000 // Specify the excess burst size (Be) for a Frame Relay virtual circuit.
frame-relay mincir 64000 // Specify minimum acceptable CIR for a Frame Relay virtual circuit.

Reference: http://www.cisco.com/c/en/us/support/docs/wan/frame-relay/6151-traffic-shaping-6151.html

Question 4

Question 5

Question 6

Question 7

Explanation

Each subinterface is treated like a physical interface so split horizon issues are overcome.

Question 8

Explanation

Frame Relay uses data-link connection identifiers (DLCIs) to build up logical circuits. The identifiers have local meaning only, which means their values are unique per router but not necessarily on other routers. For example, there is only one DLCI of 23 representing the connection from HeadQuarter to Branch 1 and only one DLCI of 51 from HeadQuarter to Branch 2. Branch 1 can use the same DLCI of 23 to represent the connection from it to HeadQuarter. Of course, it can use other DLCIs as well because DLCIs are only locally significant.

Frame_Relay_DLCI.jpg

By including a DLCI number in the Frame Relay header, HeadQuarter can communicate with both Branch 1 and Branch 2 over the same physical circuit.

Question 9

Explanation

An example of configuring Frame Relay point-to-multipoint connections is described at: http://www.9tut.com/frame-relay-gns3-lab. Frame Relay point-to-multipoint requires inverse ARP (which is enabled by default). It requires the frame-relay mapping command to be configured also. For example: R1(config-if)#frame-relay route 102 interface Serial0/1 201.

Question 10

Explanation

LMI autosense is automatically enabled in the following situations:
+ The router is powered up or the interface changes state to up
+ The line protocol is down but the line is up
+ The interface is a Frame Relay DTE
+ The LMI type is not explicitly configured on the interface

Reference: CCIE Practical Studies: Security

GRE Tunnel

August 1st, 2019 digitaltut 62 comments

Question 1

Explanation

GRE packets are encapsulated within IP and use IP protocol type 47

Question 2

Explanation

A GRE interface definition includes:

+ An IPv4 address on the tunnel
+ A tunnel source
+ A tunnel destination

Below is an example of how to configure a basic GRE tunnel:

interface Tunnel 0
ip address 10.10.10.1 255.255.255.0
tunnel source fa0/0
tunnel destination 172.16.0.2

In this case the “IPv4 address on the tunnel” is 10.10.10.1/24 and the tunnel is “sourced from an Ethernet interface” with the command “tunnel source fa0/0”. Therefore it only needs a tunnel destination, which is 172.16.0.2.

Note: A multipoint GRE (mGRE) interface does not require a tunnel destination address.

Question 3

Explanation

The tunnel interface being configured in default mode means the tunnel has been configured as a point-to-point (P2P) GRE tunnel. Normally, a P2P GRE tunnel interface comes up (up/up state) as soon as it is configured with a valid tunnel source (an address or interface that is up) and a tunnel destination IP address that is routable.

Under normal circumstances, there are only three reasons for a GRE tunnel to be in the up/down state:
+ There is no route, which includes the default route, to the tunnel destination address.
+ The interface that anchors the tunnel source is down.
+ The route to the tunnel destination address is through the tunnel itself, which results in recursion.

Therefore if a route towards the tunnel destination has not been configured then the tunnel is stuck in up/down state.
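For illustration only, reusing the tunnel destination 172.16.0.2 from the earlier GRE example and a hypothetical next hop of 192.168.1.2, a static route such as the one below would satisfy the first condition:

ip route 172.16.0.2 255.255.255.255 192.168.1.2

Just make sure this route does not point back through the tunnel itself, otherwise recursive routing brings the tunnel down again.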

Reference: http://www.cisco.com/c/en/us/support/docs/ip/generic-routing-encapsulation-gre/118361-technote-gre-00.html

Question 4

Explanation

In this question only answer A is reasonable. When the state of the tunnel interface continuously moves between up and down, we must make sure the route towards the tunnel destination address is stable. If that route is unstable it may be removed from the routing table -> the tunnel interface goes down.

Question 5

Explanation

The IP protocol was designed for use on a wide variety of transmission links. Although the maximum length of an IP datagram is 65535, most transmission links enforce a smaller maximum packet length limit, called an MTU. The value of the MTU depends on the type of the transmission link. The design of IP accommodates MTU differences since it allows routers to fragment IP datagrams as necessary. The receiving station is responsible for the reassembly of the fragments back into the original full size IP datagram.

Path Maximum Transmission Unit Discovery (PMTUD) is a standardized technique to determine the maximum transmission unit (MTU) size on the network path between two hosts, usually with the goal of avoiding IP fragmentation. PMTUD was originally intended for routers in IPv4. However, all modern operating systems use it on endpoints.

The TCP Maximum Segment Size (TCP MSS) defines the maximum amount of data that a host is willing to accept in a single TCP/IP datagram. This TCP/IP datagram might be fragmented at the IP layer. The MSS value is sent as a TCP header option only in TCP SYN segments. Each side of a TCP connection reports its MSS value to the other side. Contrary to popular belief, the MSS value is not negotiated between hosts. The sending host is required to limit the size of data in a single TCP segment to a value less than or equal to the MSS reported by the receiving host.

TCP MSS takes care of fragmentation at the two endpoints of a TCP connection, but it does not handle the case where there is a smaller MTU link in the middle between these two endpoints. PMTUD was developed in order to avoid fragmentation in the path between the endpoints. It is used to dynamically determine the lowest MTU along the path from a packet’s source to its destination.

Reference: http://www.cisco.com/c/en/us/support/docs/ip/generic-routing-encapsulation-gre/25885-pmtud-ipfrag.html (there are some examples of how TCP MSS avoids IP fragmentation in this link, but they are too long to include here, so please visit the link if you want to read them)

Note: IP fragmentation involves breaking a datagram into a number of pieces that can be reassembled later.
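
As a practical illustration (the values below are commonly suggested starting points, not taken from the question), the IP MTU and TCP MSS can be lowered on a GRE tunnel interface to help avoid fragmentation of TCP traffic crossing the tunnel:

interface Tunnel0
ip mtu 1400 //leave room for the GRE (and possibly IPsec) overhead
ip tcp adjust-mss 1360 //rewrite the MSS value in TCP SYN packets passing through this interface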

Question 6

Explanation

A valid tunnel destination is one which is routable (which means the destination is present or there is a default route in the routing table). However, it does not have to be reachable -> Answer B is correct.

Reference: http://www.cisco.com/c/en/us/support/docs/ip/generic-routing-encapsulation-gre/118361-technote-gre-00.html

For a tunnel to be up/up, the source interface must be up/up, it must have an IP address, and the destination must be routable according to the local routing table (it does not have to be actually reachable).

Question 7

Question 8

Question 9

Explanation

A GRE tunnel provides a way to encapsulate any network layer protocol over any other network layer protocol. GRE allows routers to act as if they have a virtual point-to-point connection to each other. GRE tunneling is accomplished by creating routable tunnel endpoints that operate on top of existing physical and/or other logical endpoints. In particular, IPsec alone does not support multicast traffic, so a GRE tunnel is a good solution (or we can combine both).

Question 10

Question 11

Explanation

When running GRE over IPsec, a packet is first encapsulated in a GRE packet and then the GRE packet is encrypted by IPsec -> C is correct.

Question 12

Explanation

Four steps to configure GRE tunnel over IPsec are:

1. Create a physical or loopback interface to use as the tunnel endpoint. Using a loopback rather than a physical interface adds stability to the configuration.
2. Create the GRE tunnel interfaces.
3. Add the tunnel subnet to the routing process so that it exchanges routing updates across that interface.
4. Add GRE traffic to the crypto access list, so that IPsec encrypts the GRE tunnel traffic.

An example of configuring GRE Tunnel is shown below:

interface Tunnel0
ip address 192.168.16.2 255.255.255.0
tunnel source FastEthernet1/0
tunnel destination 14.38.88.10
tunnel mode gre ip

Note: The last command is enabled by default so we can omit it in the configuration.

(Reference: CCNP Routing and Switching Quick Reference)

Question 13

Explanation

The address of the crypto isakmp key (line “crypto isakmp key ******* address 172.16.1.2”) should be 192.168.2.1, not 172.16.1.2 -> A is correct.

Question 14

Explanation

The access-list must also support GRE traffic with the “access-list 102 permit gre host 192.168.1.1 host 192.168.2.1” command -> B is correct.

Below is the correct configuration for GRE over IPsec on router B1 along with descriptions.

Configure_GRE_tunnel_over_IPsec.jpg

The interface tunnel configuration is rather simple so I don’t post it here.

Question 15

Explanation

The “tunnel destination” in interface tunnel should be 192.168.2.1, not 172.16.1.2 -> D is correct.

DMVPN Questions

July 31st, 2019 digitaltut 44 comments

Note: If you are not sure about DMVPN, please read our DMVPN tutorial first.

Question 1

Explanation

From the output we learn that the logical address 10.2.1.2 is mapped to the NBMA address 10.12.1.2. Type “dynamic” means NBMA address was obtained from NHRP Request packet. Type “static” means NBMA address is statically configured. The “authoritative” flag means that the NHRP information was obtained from the Next Hop Server (NHS).

Reference: http://www.cisco.com/c/en/us/td/docs/ios/12_4/ip_addr/configuration/guide/hadnhrp.html

Question 2

Explanation

When DMVPN tunnels flap, check the neighborship between the routers: issues with neighborship formation may cause the DMVPN tunnel to flap. To resolve this problem, make sure the neighborship between the routers stays up.

Reference: http://www.cisco.com/c/en/us/support/docs/security-vpn/ipsec-negotiation-ike-protocols/29240-dcmvpn.html#Prblm1

Question 3

Explanation

DMVPN is not a protocol, it is the combination of the following technologies:

+ Multipoint GRE (mGRE)
+ Next-Hop Resolution Protocol (NHRP)
+ Dynamic Routing Protocol (EIGRP, RIP, OSPF, BGP…) (optional)
+ Dynamic IPsec encryption (optional)
+ Cisco Express Forwarding (CEF)

For more information about DMVPN, please read our DMVPN tutorial.

Question 4

Explanation

To allow communication with multiple sites using only one tunnel interface, we need to configure that tunnel in “multipoint” mode. Otherwise we have to create many tunnel interfaces, each of which can only communicate with one site.

DMVPN_Topo_mGRE.jpg

 

Question 5

Explanation

An mGRE tunnel inherits the concept of a classic GRE tunnel, but an mGRE tunnel does not require a unique tunnel interface for each Hub-to-spoke connection like traditional GRE does. One mGRE interface can handle multiple GRE tunnels at the other end. Unlike classic GRE tunnels, the tunnel destination for an mGRE tunnel does not have to be configured, and all tunnels on Spokes connecting to the mGRE interface of the Hub can use the same subnet.

DMVPN_Topo_mGRE.jpg

For more information about DMVPN, please read our DMVPN tutorial.
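
For reference, a minimal sketch of an mGRE Hub tunnel is shown below (the addresses and the NHRP network-id are illustrative only, not taken from any question):

interface Tunnel0
ip address 10.0.0.1 255.255.255.0
tunnel source GigabitEthernet0/0
tunnel mode gre multipoint //mGRE: no tunnel destination is configured
ip nhrp network-id 1
ip nhrp map multicast dynamic //the Hub learns the spokes' NBMA addresses dynamically via NHRP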

Question 6

Explanation

GRE tunnels are the first thing we have to configure to create a DMVPN network so we should start troubleshooting from there. NHRP can only work properly with operating GRE tunnels.

Question 7

Question 8

Explanation

The “show crypto isakmp sa” command displays all current Internet Key Exchange (IKE) security associations (SAs) at a peer.

QM_IDLE state means this tunnel is UP and the IKE SA key exchange was successful; the SA is now idle (quiescent) and may be used for subsequent Quick Mode (QM) exchanges -> Answers A, C, D are incorrect so answer B is the only suitable answer left.

Question 9

Explanation

The DMVPN is comprised of IPsec/GRE tunnels that connect branch offices to the data center. DMVPN troubleshooting requires the network engineer to verify neighbor links, routing and VPN peer connectivity. The GRE protocol is required to support routing advertisements. The VPN peer connection is comprised of IKE and IPsec security association exchanges.

The command “show crypto ipsec sa” is used to verify IPsec connectivity between branch office and data center router. We can also use this command to display the statistics of an active tunnel on a DMVPN network.

DMVPN_show_crypto_ipsec_sa.jpg

Note:
+ The command “show crypto isakmp sa” is used on DMVPN to verify IKE connectivity status to branch offices. The normal IKE state = QM IDLE for branch routers and data center routers.
+ The command “show crypto engine connection active” displays the total encrypts and decrypts per SA.

Reference: http://www.cisco.com/c/en/us/support/docs/security-vpn/ipsec-negotiation-ike-protocols/29240-dcmvpn.html

Question 10

Question 11

Question 12

Explanation

Both DMVPN Phase 2 and Phase 3 support spoke-to-spoke communication (spokes talk to each other directly). In this question only Phase 2 (not Phase 3) appears among the options, so it is the correct answer.

Question 13

Explanation

Some documents say RIPv2 also supports DMVPN, but EIGRP, OSPF and BGP are the better choices so we should choose them.

Question 14

Explanation

DMVPN is not a protocol, it is the combination of the following technologies:
+ Multipoint GRE (mGRE)
+ Next-Hop Resolution Protocol (NHRP)
+ Dynamic Routing Protocol (EIGRP, RIP, OSPF, BGP…) (optional)
+ Dynamic IPsec encryption (optional)
+ Cisco Express Forwarding (CEF)

DMVPN combines multiple GRE (mGRE) Tunnels, IPSec encryption and NHRP (Next Hop Resolution Protocol) to perform its job and save the administrator the need to define multiple static crypto maps and dynamic discovery of tunnel endpoints.

Question 15

TCP UDP Questions

July 30th, 2019 digitaltut 24 comments

Question 1

Explanation

It is a general best practice to not mix TCP-based traffic with UDP-based traffic (especially Streaming-Video) within a single service-provider class because of the behaviors of these protocols during periods of congestion. Specifically, TCP transmitters throttle back flows when drops are detected. Although some UDP applications have application-level windowing, flow control, and retransmission capabilities, most UDP transmitters are completely oblivious to drops and, thus, never lower transmission rates because of dropping.
When TCP flows are combined with UDP flows within a single service-provider class and the class experiences congestion, TCP flows continually lower their transmission rates, potentially giving up their bandwidth to UDP flows that are oblivious to drops. This effect is called TCP starvation/UDP dominance.
TCP starvation/UDP dominance likely occurs if TCP-based applications are assigned to the same service-provider class as UDP-based applications and the class experiences sustained congestion.
Granted, it is not always possible to separate TCP-based flows from UDP-based flows, but it is beneficial to be aware of this behavior when making such application-mixing decisions within a single service-provider class.

Reference: http://www.cisco.com/c/en/us/td/docs/solutions/Enterprise/WAN_and_MAN/QoS_SRND/QoS-SRND-Book/VPNQoS.html
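
For illustration only (the class names, ACL, port range and bandwidth value below are hypothetical), UDP-based streaming traffic could be placed in its own class so that it does not starve TCP flows:

ip access-list extended UDP-STREAMING
permit udp any any range 16384 32767 //a typical RTP port range
!
class-map match-all STREAMING-VIDEO
match access-group name UDP-STREAMING
!
policy-map WAN-EDGE
class STREAMING-VIDEO
bandwidth percent 30 //guarantee bandwidth to the UDP streaming class
class class-default
fair-queue //TCP-based traffic stays in its own class
!
interface Serial0/0
service-policy output WAN-EDGE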

Question 2

Question 3

Explanation

TCP Selective Acknowledgement (SACK) prevents unnecessary retransmissions by specifying successfully received subsequent data. Let’s see an example of the advantages of TCP SACK.

TCP_ACK.jpg
TCP (Normal) Acknowledgement
TCP_SACK.jpg
TCP Selective Acknowledgement

For TCP (normal) acknowledgement, when a client requests data, the server sends the first three segments (the name for packets at Layer 4): Segment#1, #2, #3. But suppose Segment#2 was lost somewhere on the network while Segment#3 still reached the client. The client checks Segment#3 and realizes Segment#2 is missing, so it can only acknowledge that it received Segment#1 successfully. Because the client received Segment#1 and #3, it sends two duplicate ACK#1s to alert the server that it has not received any data beyond Segment#1. After receiving these ACKs, the server must resend Segment#2 and #3 and wait for the ACKs of these segments.

For TCP Selective Acknowledgement, the process is the same until the client realizes Segment#2 is missing. It also sends ACK#1 but adds a SACK block to indicate it has received Segment#3 successfully (so there is no need to retransmit that segment). Therefore the server only needs to resend Segment#2. Notice that after receiving Segment#2, the client sends ACK#3 (not ACK#2) to say that it now has all of the first three segments. The server then continues sending Segment#4, #5, …

The SACK option is not mandatory and it is used only if both parties support it.

The TCP Explicit Congestion Notification (ECN) feature allows an intermediate router to notify end hosts of impending network congestion. It also provides enhanced support for TCP sessions associated with applications, such as Telnet, web browsing, and transfer of audio and video data that are sensitive to delay or packet loss. The benefit of this feature is the reduction of delay and packet loss in data transmissions. Use the “ip tcp ecn” command in global configuration mode to enable TCP ECN.

The TCP time-stamp option provides improved TCP round-trip time measurements. Because the time stamps are always sent and echoed in both directions and the time-stamp value in the header is always changing, TCP header compression will not compress the outgoing packet. Use the “ip tcp timestamp” command to enable the TCP time-stamp option.

The TCP Keepalive Timer feature provides a mechanism to identify dead connections. When a TCP connection on a routing device is idle for too long, the device sends a TCP keepalive packet to the peer with only the Acknowledgment (ACK) flag turned on. If a response packet (a TCP ACK packet) is not received after the device sends a specific number of probes, the connection is considered dead and the device initiating the probes frees resources used by the TCP connection.

Reference: http://www.cisco.com/c/en/us/td/docs/ios-xml/ios/ipapp/configuration/xe-3s/asr1000/iap-xe-3s-asr1000-book/iap-tcp.html
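
The related IOS global configuration commands can be enabled as follows (a minimal sketch; availability depends on the platform and IOS version):

ip tcp selective-ack //enable TCP Selective Acknowledgement (SACK)
ip tcp ecn //enable TCP Explicit Congestion Notification
ip tcp timestamp //enable the TCP time-stamp option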

Question 4

Explanation

Global synchronization occurs when multiple TCP hosts reduce their transmission rates in response to congestion. When the congestion is relieved, the TCP hosts increase their transmission rates again at the same time (following the slow-start algorithm), which causes congestion again. Global synchronization produces this graph:

TCP_Global_Synchronization.jpg

 

Global synchronization reduces the optimal throughput of network applications, and tail drop contributes to this phenomenon. When an interface on a router cannot transmit a packet immediately, the packet is queued. Packets are then taken out of the queue and eventually transmitted on the interface. But if the arrival rate of packets to the output interface exceeds the ability of the router to buffer and forward traffic, the queues grow to their maximum length and the interface becomes congested. Tail drop is the default queuing response to congestion. Tail drop simply means “drop all the traffic that exceeds the queue limit”. Tail drop treats all traffic equally and does not differentiate among classes of service.

Question 5

Explanation

When TCP is mixing with UDP under congestion, TCP flows will try to lower their transmission rate while UDP flows continue transmitting as usual. As a result of this, UDP flows will dominate the bandwidth of the link and this effect is called TCP-starvation/UDP-dominance. This can increase latency and lower the overall throughput.

Question 6

Question 7

Question 8

Explanation

If the speed of an interface is equal to or less than 768 kbps (half of a T1 link), it is considered a low-speed interface. Half of a T1 only offers enough bandwidth to allow voice packets to enter and leave without delay issues. Therefore if the speed of the link is smaller than 768 kbps, it should not be configured with a queue.

Question 9

Question 10

Explanation

First we need to understand about bandwidth-delay product.

Bandwidth-delay product (BDP) is the maximum amount of data “in transit” at any point in time between two endpoints. In other words, it is the amount of data “in flight” needed to saturate the link. You can think of the link between two devices as a pipe. The cross section of the pipe represents the bandwidth and the length of the pipe represents the delay (the propagation delay due to the length of the pipe).

Therefore the Volume of the pipe = Bandwidth x Delay (or Round-Trip-Time). The volume of the pipe is also the BDP.

Bandwidth-delay_Product.jpg

For example if the total bandwidth is 64 kbps and the RTT is 3 seconds, the formula to calculate BDP is:

BDP (bits) = total available bandwidth (bits/sec) * round trip time (sec) = 64,000 * 3 = 192,000 bits

-> BDP (bytes) = 192,000 / 8 = 24,000 bytes

Therefore we need 24KB to fulfill this link.

For your information, BDP is very important in TCP communication as it helps optimize the use of bandwidth on a link. As you know, a disadvantage of TCP is that it has to wait for an acknowledgment from the receiver before sending more data. The waiting time may be long and we may not utilize the full bandwidth of the link for the transmission.

Bandwidth-delay_Product_Wasted.jpg

Based on BDP, the sending host can increase the amount of data sent on a link (usually by increasing the window size). In other words, the sending host can fill the whole pipe with data so that no bandwidth is wasted.
Bandwidth-delay_Product_Optimized.jpg

In conclusion, if we want an optimal end-to-end delay bandwidth product, TCP must use window scaling feature so that we can fill the entire “pipe” with data.

TCP UDP Questions 2

July 30th, 2019 digitaltut 30 comments

Question 1

Explanation

Unlike TCP, which uses sequence numbers to rearrange segments that arrive out of order, UDP just passes the received datagrams to the next OSI layer (the Session Layer) in the order in which they arrived.

Question 2

Question 3

Explanation

In Asymmetric routing, a packet traverses from a source to a destination in one path and takes a different path when it returns to the source. This is commonly seen in Layer-3 routed networks.

Issues to Consider with Asymmetric Routing

Asymmetric routing is not a problem by itself, but will cause problems when Network Address Translation (NAT) or firewalls are used in the routed path. For example, in firewalls, state information is built when the packets flow from a higher security domain to a lower security domain. The firewall will be an exit point from one security domain to the other. If the return path passes through another firewall, the packet will not be allowed to traverse the firewall from the lower to higher security domain because the firewall in the return path will not have any state information. The state information exists in the first firewall.

Reference: http://www.cisco.com/web/services/news/ts_newsletter/tech/chalktalk/archives/200903.html

Specifically for TCP-based connections, disabling stateful TCP checks can help mitigate asymmetric routing. When TCP state checks are disabled, the ASA can allow packets in a TCP connection even if the ASA didn’t see the entire TCP 3-way handshake. This feature is called TCP State Bypass.

Reference: https://supportforums.cisco.com/document/55536/asa-asymmetric-routing-troubleshooting-and-mitigation

Note: The active/active firewall topology uses two firewalls that are both actively providing firewall services.

Question 4

Explanation

A device that sends UDP packets assumes that they reach the destination. There is no mechanism to alert senders that the packet has arrived -> Answer A is not correct.

UDP throughput is not impacted by latency because the sender does not have to wait for the ACK to be sent back -> Answer B is not correct.

UDP does not negotiate how the connection will work, UDP just transmits and hopes for the best -> D is not correct.

Therefore only answer C is left.

Question 5

Explanation

The command “show tcp brief numeric” displays a concise description of TCP connection endpoints.

Question 6

Question 7

Explanation

TCP starvation/UDP dominance likely occurs if TCP-based applications are assigned to the same service-provider class as UDP-based applications and the class experiences sustained congestion.

TFTP (runs on UDP port 69) and SNMP (runs on UDP ports 161/162) are two protocols which run on UDP, so they can cause TCP starvation.

Note: SMTP runs on TCP port 25; HTTPS runs on TCP port 443; FTP runs on TCP port 20/21

 

RIP Questions

July 29th, 2019 digitaltut 25 comments

Question 1

Question 2

Question 3

Question 4

Explanation

RIP can only be turned on in router configuration mode (Router(config-router)# mode). Unlike OSPF or EIGRP, RIP cannot be enabled from interface configuration mode (Router(config-if)# mode).

OSPF Questions

July 28th, 2019 digitaltut 59 comments

Quick OSPF Overview

OSPF router ID selection:

OSPF uses the following criteria to select the router ID:
1. Manual configuration of the router ID (via the “router-id x.x.x.x” command under OSPF router configuration mode).
2. Highest IP address on a loopback interface.
3. Highest IP address on a non-loopback and active (no shutdown) interface.
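
For example, to hard-code the router ID with the first method (the process number and ID below are illustrative):

router ospf 1
router-id 1.1.1.1 //takes effect after the OSPF process is restarted or cleared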

OSPF forms neighbor relationship with other OSPF routers on the same segment by exchanging hello packets. The hello packets contain various parameters. Some of them should match between neighboring routers. These include:

+ Hello and Dead intervals
+ Area ID
+ Authentication type and password
+ Stub Area flag
+ Subnet ID and Subnet mask

When OSPF neighbor relationship is formed, a router goes through several state changes before it becomes fully adjacent with its neighbor. The states are Down -> Attempt (optional) -> Init -> 2-Way -> Exstart -> Exchange -> Loading -> Full. Short descriptions about these states are listed below:

Down: no information (hellos) has been received from this neighbor

Attempt: only valid for manually configured neighbors in an NBMA environment. In Attempt state, the router sends unicast hello packets every poll interval to the neighbor, from which hellos have not been received within the dead interval

Init: specifies that the router has received a hello packet from its neighbor, but the receiving router’s ID was not included in the hello packet

2-Way: indicates bi-directional communication has been established between two routers

Exstart: Once the DR and BDR are elected, the actual process of exchanging link state information can start between the routers and their DR and BDR

Exchange: OSPF routers exchange and compare database descriptor (DBD) packets

Loading: In this state, the actual exchange of link state information occurs. Outdated or missing entries are also requested to be resent

Full: routers are fully adjacent with each other

When OSPF is run on a network, two important events happen before routing information is exchanged:
+ Neighbors are discovered using multicast hello packets.
+ DR and BDR are elected for every multi-access network to optimize the adjacency building process. All the routers in that segment should be able to communicate directly with the DR and BDR for proper adjacency (in the case of a point-to-point network, DR and BDR are not necessary since there are only two routers in the segment, and hence the election does not take place).
For a successful neighbor discovery on a segment, the network must allow broadcasts or multicast packets to be sent.

In an NBMA network topology, which is inherently nonbroadcast, neighbors are not discovered automatically. OSPF tries to elect a DR and a BDR due to the multi-access nature of the network, but the election fails since neighbors are not discovered. Neighbors must be configured manually to overcome these problems

Each OSPF area only allows some specific LSAs to pass through. Below is a summarization of which LSAs are allowed in each OSPF area:

Area Restriction
Normal None
Stub No Type 5 AS-external LSA allowed
Totally Stub No Type 3, 4 or 5 LSAs allowed except the default summary route
NSSA No Type 5 AS-external LSAs allowed, but Type 7 LSAs that convert to Type 5 at the NSSA ABR can traverse
NSSA Totally Stub No Type 3, 4 or 5 LSAs except the default summary route, but Type 7 LSAs that convert to Type 5 at the NSSA ABR are allowed

OSPF Summarization
OSPF offers two methods of route summarization:
1) Summarization of internal routes performed on the ABRs
2) Summarization of external routes performed on the ASBRs

1) To summarize routes at the area boundary (ABRs), use the command:
area area-id range ip-address mask [advertise | not-advertise] [cost cost]

An internal summary route is generated if at least one subnet within the area falls in the summary address range and the summarized route metric is equal to the lowest cost of all the subnets within the summary address range. Interarea summarization can only be done for the intra-area routes of connected areas, and the ABR creates a route to Null0 to avoid loops in the absence of more specific routes.

2) To summarize external routes on the domain boundary (ASBRs), use the command:
summary-address {{ip-address mask} | {prefix mask}} [not-advertise] [tag tag]
The ASBR will summarize external routes before injecting them into the OSPF domain as type 5 external LSAs.

Note: An exception to using the “summary-address” command is at the boundary of an NSSA area.

In both methods of route summarization described above, a summarized route is only generated if at least one subnet in the routing table falls in the summary address range.
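
A quick sketch of both methods (the prefixes and the OSPF process number are hypothetical):

! On the ABR: summarize intra-area routes from area 1 into other areas
router ospf 1
area 1 range 172.16.0.0 255.255.0.0
!
! On the ASBR: summarize external routes before injecting them as Type 5 LSAs
router ospf 1
summary-address 192.168.0.0 255.255.0.0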

Question 1

Explanation

LSA Type 7 is generated by an ASBR inside a Not So Stubby Area (NSSA) to describe routes redistributed into the NSSA. LSA 7 is translated into LSA 5 as it leaves the NSSA. These routes appear as N1 or N2 in the routing table inside the NSSA. Much like LSA 5, N2 is a static cost while N1 is a cumulative cost that includes the cost up to the ASBR -> LSA Type 7 only exists in an NSSA area.

Question 2

Question 3

Explanation

Answer B is not correct because using “passive-interface” command on ASW1 & ASW2 does not prevent DSW1 & DSW2 from sending routing updates to two access layer switches.

Question 4

Explanation

From the output above, we see the following LSAs:

+ Router Link States (Area 0): LSA Type 1 (Area 0)
+ Net Link States (Area 0): LSA Type 2 (Area 0)
+ Summary Net Link States (Area 0): LSA Type 3 (Area 0)
+ Router link States (Area 4): LSA Type 1 (Area 4)
+ Net Link States (Area 4): LSA Type 2 (Area 4)
+ Summary Net Link States (Area 4): LSA Type 3 (Area 4)

There are two areas represented on this router, which are Area 0 & Area 4. So we conclude this is an ABR router.

Just for your information, from the Router Link States (Area 0) part, we only see one entry 15.15.15.33. It is both the Link ID and ADV Router so we can conclude this is an IP address of one of the interfaces on the local router.

Question 5

Question 6

Question 7

Explanation

When OSPF is run on a network, two important events happen before routing information is exchanged:
+ Neighbors are discovered using multicast hello packets.
+ DR and BDR are elected for every multi-access network to optimize the adjacency building process. All the routers in that segment should be able to communicate directly with the DR and BDR for proper adjacency (in the case of a point-to-point network, DR and BDR are not necessary since there are only two routers in the segment, and hence the election does not take place).
For a successful neighbor discovery on a segment, the network must allow broadcasts or multicast packets to be sent.

In an NBMA network topology, which is inherently nonbroadcast, neighbors are not discovered automatically. OSPF tries to elect a DR and a BDR due to the multi-access nature of the network, but the election fails since neighbors are not discovered. Neighbors must be configured manually to overcome these problems -> C is not correct while D is correct.

In Point-to-Multipoint network: This is a collection of point-to-point links between various devices on a segment. These networks also allow broadcast or multicast packets to be sent over the network. These networks can represent the multi-access segment as multiple point-to-point links that connect all the devices on the segment. -> A is correct.

Question 8

Explanation

OSPF forms neighbor relationship with other OSPF routers on the same segment by exchanging hello packets. The hello packets contain various parameters. Some of them should match between neighboring routers. These include:

+ Hello and Dead intervals
+ Area ID
+ Authentication type and password
+ Stub Area flag
+ Subnet ID and Subnet mask

So there are three correct answers in this question. Maybe in the exam you will see only two correct answers.

Question 9

Explanation

Let’s have a quick review of LSAs Type 4 & 5:

Summary ASBR LSA (Type 4) – Generated by the ABR to describe an ASBR to routers in other areas so that routers in other areas know how to get to external routes through that ASBR. For example, suppose R8 is advertising external routes (EIGRP, RIP…) to R3, which redistributes them into OSPF. This makes R3 an Autonomous System Boundary Router (ASBR). When R2 (which is an ABR) receives the LSA Type 1 update from R3, R2 creates an LSA Type 4 and floods it into Area 0 to inform those routers how to reach R3. When R5 receives this LSA it also floods it into Area 2.

OSPF_LSAs_Types_4.jpg

In the above example, the only ASBR belongs to area 1 so the two ABRs send LSA Type 4 to area 0 & area 2 (not vice versa). This is an indication of the existence of the ASBR in area 1.

Note:
+ Type 4 LSAs contain the router ID of the ASBR.
+ There are no LSA Type 4 injected into Area 1 because every router inside area 1 knows how to reach R3. R3 only uses LSA Type 1 to inform R2 about R8 and inform R2 that R3 is an ASBR.

External Link LSA (LSA 5) – Generated by the ASBR to describe routes redistributed into the area and to point the destination for these external routes to the ASBR. These routes appear as O E1 or O E2 in the routing table. In the topology below, R3 generates LSAs Type 5 to describe the external routes redistributed from R8 and floods them to all other routers, telling them “hey, if you want to reach these external routes, send your packets to me!”. But other routers will ask “how can I reach you? You didn’t tell me where you are in your LSA Type 5!”. And that is what LSA Type 4 does – it tells routers in other areas where the ASBR is!

OSPF_LSAs_Types_5.jpg

Each OSPF area only allows some specific LSAs to pass through. Below is a summarization of which LSAs are allowed in each OSPF area:

Area Restriction
Normal None
Stub No Type 5 AS-external LSA allowed
Totally Stub No Type 3, 4 or 5 LSAs allowed except the default summary route
NSSA No Type 5 AS-external LSAs allowed, but Type 7 LSAs that convert to Type 5 at the NSSA ABR can traverse
NSSA Totally Stub No Type 3, 4 or 5 LSAs except the default summary route, but Type 7 LSAs that convert to Type 5 at the NSSA ABR are allowed

Reference: http://www.cisco.com/c/en/us/support/docs/ip/open-shortest-path-first-ospf/13703-8.html

Therefore there are two OSPF areas that prevent LSAs Type 4 & 5: Totally Stub & NSSA Totally Stub areas

OSPF Questions 2

July 28th, 2019 digitaltut 46 comments

Question 1

Explanation

Loading: In this state, the actual exchange of link state information occurs. Based on the information provided by the DBDs, routers send link-state request packets. The neighbor then provides the requested link-state information in link-state update packets. During the adjacency, if a router receives an outdated or missing LSA, it requests that LSA by sending a link-state request packet. All link-state update packets are acknowledged.

Reference: http://www.cisco.com/c/en/us/support/docs/ip/open-shortest-path-first-ospf/13685-13.html

Question 2

Question 3

Question 4


Question 5

Explanation

+ Standard areas can contain LSAs of type 1, 2, 3, 4, and 5, and may contain an ASBR. The backbone is considered a standard area.
+ Stub areas can contain type 1, 2, and 3 LSAs. A default route is substituted for external routes.
+ Totally stubby areas can only contain type 1 and 2 LSAs, and a single type 3 LSA. The type 3 LSA describes a default route, substituted for all external and inter-area routes.
+ Not-so-stubby areas implement stub or totally stubby functionality yet contain an ASBR. Type 7 LSAs generated by the ASBR are converted to type 5 by ABRs to be flooded to the rest of the OSPF domain.

Reference: http://packetlife.net/blog/2008/jun/24/ospf-area-types/

Question 6

Explanation

NSSA External LSA (Type 7) – Generated by an ASBR inside a Not So Stubby Area (NSSA) to describe routes redistributed into the NSSA. LSA 7 is translated into LSA 5 as it leaves the NSSA. These routes appear as N1 or N2 in the routing table inside the NSSA. Much like LSA 5, N2 is a static cost while N1 is a cumulative cost that includes the cost up to the ASBR.

OSPF_LSAs_Types_7.jpg

Question 7

Explanation

OSPFv3 uses the well-known IPv6 multicast address FF02::5 to communicate with neighbors. The FF02::5 multicast address is known as the AllSPFRouters address. All OSPFv3 routers must join this multicast group and listen to packets sent to this group. The OSPFv3 Hello packets are sent to this address.

Note: All other routers (non DR and non BDR) establish adjacency with the DR and the BDR and use the IPv6 multicast address FF02::6 (known as AllDRouters address) to send LSA updates to the DR and BDR.

The answer “link-local addresses” is also correct. The reason is that OSPFv3 routers use a link-local address (FE80::/10) on their interfaces as the source address when sending Hello packets to FF02::5 (the destination address). So in fact this question is not clear and there are two correct answers here.

Note: The two IPv6 multicast addresses FF02::5 and FF02::6 have link-local scope.

Question 8

Explanation

NSSA External LSA (Type 7) – Generated by an ASBR inside a Not So Stubby Area (NSSA) to describe routes redistributed into the NSSA. LSA 7 is translated into LSA 5 as it leaves the NSSA. These routes appear as N1 or N2 in the routing table inside the NSSA. Much like LSA 5, N2 is a static cost while N1 is a cumulative cost that includes the cost up to the ASBR.

OSPF_LSAs_Types_7.jpg

Question 9

Question 10

OSPF Questions 3

July 28th, 2019 digitaltut 23 comments

Question 1

Explanation

LSAs Type 8 (Link LSA) have link-local flooding scope.  A router originates a separate link-LSA for each attached link that supports two or more (including the originating router itself) routers.  Link-LSAs should not be originated for virtual links.

Link-LSAs have three purposes:
1.  They provide the router’s link-local address to all other routers attached to the link.
2.  They inform other routers attached to the link of a list of IPv6 prefixes to associate with the link.
3.  They allow the router to advertise a collection of Options bits in the network-LSA originated by the Designated Router on a broadcast or NBMA link.

LSAs Type 9 (Intra-Area Prefix LSA) have area flooding scope. An intra-area-prefix-LSA has one of two functions:
1.  It either associates a list of IPv6 address prefixes with a transit network link by referencing a network-LSA…
2.  Or associates a list of IPv6 address prefixes with a router by referencing a router-LSA.  A stub link’s prefixes are associated with its attached router.

In OSPFv3, LSA Type 9 takes over the prefix-advertising role that LSA Types 1 and 2 had in IPv4 OSPF, which changes the way the OSPF SPF algorithm is run.

Reference (and for more information): http://packetpushers.net/a-look-at-the-new-lsa-types-in-ospfv3-with-vyatta-and-cisco/

Question 2

Question 3

Explanation

The wildcard mask should be 0.0.0.255 instead of the subnet mask 255.0.0.0.
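
For example, to advertise a /24 network the statement should look like this (the prefix below is hypothetical):

router ospf 1
network 172.16.1.0 0.0.0.255 area 0 //a wildcard mask is required here, not a subnet mask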

Question 4

Explanation

Route aggregation can be performed on the border routers to reduce the number of LSAs advertised to other areas. Route aggregation can also minimize the impact of topology changes.

Question 5

Question 6

Explanation

IS-IS is an interior gateway protocol (IGP), the same as EIGRP and OSPF, so they are probably the best answers. Although RIP is not a wrong choice, it is not widely used because of its many limitations (only 15 hops, long convergence time…).

Question 7

Explanation

This command affects all the OSPF costs on the local router because all link costs are recalculated with the formula: cost = reference bandwidth (in Mbps) / interface bandwidth (in Mbps)

Note: The default reference bandwidth for OSPF is 10^8 bps (100 Mbps), so “auto-cost reference-bandwidth 100” is in fact the default value; therefore answer A may also be a correct answer.
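
For instance, with a reference bandwidth of 10,000 Mbps (a hypothetical value), a GigabitEthernet link would get an OSPF cost of 10000/1000 = 10:

router ospf 1
auto-cost reference-bandwidth 10000 //reference bandwidth in Mbps; should be set consistently on all routers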

Question 8

Explanation

NSSA External LSA (Type 7) – Generated by an ASBR inside a Not So Stubby Area (NSSA) to describe routes redistributed into the NSSA. LSA 7 is translated into LSA 5 as it leaves the NSSA. These routes appear as N1 or N2 in the routing table inside the NSSA. Much like LSA 5, N2 is a static cost while N1 is a cumulative cost that includes the cost up to the ASBR.

OSPF_LSAs_Types_7.jpg

Question 9

EIGRP Questions

July 27th, 2019 digitaltut 57 comments
EIGRP Quick Summary:
+ EIGRP supports up to six unequal-cost paths.
+ EIGRP provides a mechanism to load balance over unequal cost paths (or called unequal cost load balancing) through the “variance” command.
+ The feasibility condition states that the Advertised Distance (AD) of a route must be lower than the Feasible Distance (FD) of the current successor route.
+ The path that meets the feasibility condition requirement is called a feasible successor

Question 1

Explanation

By default, EIGRP load-shares over four equal-cost paths. EIGRP also supports unequal-cost load balancing via the “variance” command.
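
A small sketch of both knobs (the AS number and values are illustrative):

router eigrp 1
maximum-paths 6 //number of equal-cost paths installed (the default is 4)
variance 2 //also install feasible successors whose metric is less than 2 x the successor's metric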

Question 2

Explanation

The table below lists the default administrative distance values of popular routing protocols:

Routing Protocols Default Administrative Distance
EIGRP 90
OSPF 110
RIP 120
eBGP 20
iBGP 200
Connected interface 0
Static route 1

Question 3

Explanation

Requirements
+ The time must be properly configured on all routers.
+ A working EIGRP configuration is recommended.

Reference: https://www.cisco.com/c/en/us/support/docs/ip/enhanced-interior-gateway-routing-protocol-eigrp/82110-eigrp-authentication.html

Question 4

Explanation

The following list of parameters must match between EIGRP neighbors in order to successfully establish neighbor relationships:

+ Autonomous System number.
+ K-Values.
+ If authentication is used: the key number, the password, and the date/time during which the password is valid must all match.
+ The neighbors must be on a common subnet (all IGPs follow this rule).

Question 5

Explanation

When EIGRP is configured in a point-to-multipoint Frame Relay network, the Hub can receive routing updates sent from its Spoke routers, but the split horizon rule forbids the Hub from relaying those advertisements back out the interface on which they were received. For example, in the topology below, the Hub can receive routing updates from the two Spokes but it cannot relay them out of its S0/0 interface again (as it is the interface where it received the updates). To solve this problem we need to disable split horizon on the S0/0 interface of the Hub.

Frame_Relay_Point_to_multipoint.jpg

The command should be (suppose EIGRP 1 is running):

Hub(config)#interface serial0/0
Hub(config-if)#no ip split-horizon eigrp 1

Therefore answer A is correct.

In non-broadcast networks (such as Frame Relay), multicast (and broadcast) packets are not allowed, while EIGRP (and OSPF, RIPv2) uses multicast to send Hello and Update messages. Therefore these dynamic routing protocols do not work well over Frame Relay by default. To overcome this issue we usually add the keyword “broadcast” at the end of the frame-relay map statement (for example, “frame-relay map ip 10.1.1.1 403 broadcast“). This lets the router replicate EIGRP broadcast/multicast packets and send them over the PVC (pseudo-broadcast).

Another way to resolve the above issue is to use the “neighbor” command. This command also makes EIGRP communicate with its neighbors via unicast -> B is correct.

Note: Although we can use the “neighbor” command to set up the EIGRP neighbor relationship, the routes still cannot be advertised from the Hub back to the Spokes because of the split horizon rule.
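
A hedged sketch of the “neighbor” command on the Hub (the AS number and addresses are hypothetical):

router eigrp 1
network 10.1.1.0 0.0.0.255
neighbor 10.1.1.2 Serial0/0 //send EIGRP packets to this spoke as unicast
neighbor 10.1.1.3 Serial0/0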

Question 6

Explanation

Although we can use the “neighbor” command to set up the EIGRP neighbor relationship, the routes cannot be advertised from the Hub back to the Spokes because of the split horizon rule -> Answer D is not correct.

To overcome the split horizon rule we can use subinterfaces, as each subinterface is treated like a separate physical interface, so routing updates can be advertised back from the Hub to the Spokes -> Answer C is correct.

Note: The split horizon rule states that routes will not be advertised back out the interface on which they were received.

Question 7

Explanation

The “ip summary-address eigrp” command is used to configure interface-level address summarization. EIGRP summary routes are given an administrative distance value of 5. The administrative distance metric is used to advertise a summary without installing it in the routing table.

Reference: http://www.cisco.com/c/en/us/td/docs/ios/iproute_eigrp/command/reference/ire_book/ire_i1.html
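
A minimal sketch of interface-level summarization (the AS number, interface and prefix are hypothetical):

interface Serial0/0
ip summary-address eigrp 1 172.16.0.0 255.255.0.0 //advertise this summary route out of Serial0/0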

Question 8

Question 9

Explanation

An example of how to configure EIGRP authentication on two routers that are connected to each other is shown below:

R1,R2(config)#key chain MYKEYS
R1,R2(config-keychain)#key 1
R1,R2(config-keychain-key)#key-string SecRetThing!
R1,R2(config-keychain-key)#end

R1,R2(config)#interface serial 0/0
R1,R2(config-if)#ip authentication mode eigrp 10 md5
R1,R2(config-if)#ip authentication key-chain eigrp 10 MYKEYS

Question 10

Explanation

When redistributing into RIP or EIGRP (and IGRP), we need to specify the metric or the redistributed routes will never be learned. In this case we need to configure it like this:

router eigrp 1
redistribute ospf 100 metric 10000 100 255 1 1500

Question 11

EIGRP Questions 2

July 27th, 2019 digitaltut 20 comments

Question 1

Explanation

The “eigrp stub” command is equivalent to the “eigrp stub connected summary” command which advertises the connected routes and summarized routes.

Note: Summary routes can be created manually with the summary address command or automatically at a major network border router with the auto-summary command enabled.
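
For example (the AS number is hypothetical):

router eigrp 1
eigrp stub //equivalent to "eigrp stub connected summary"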

Question 2

Question 3

Explanation

The syntax of “distance” command is:

distance {ip-address {wildcard-mask}} [ip-standard-list] [ip-extended-list]

Reference: https://www.cisco.com/c/en/us/td/docs/ios/12_2/iproute/command/reference/fiprrp_r/1rfindp1.html

Question 4

Explanation

If an older version of code is deployed on the hub router, it will ignore the stub TLV and continue to send QUERY packets to the stub router. However, the stub router will immediately reply “inaccessible” to any QUERY packets, and will not continue to propagate them. Thus, the solution is backward-compatible and does not necessarily require an upgrade on the hub routers.

Reference: http://www.cisco.com/en/US/technologies/tk648/tk365/technologies_white_paper0900aecd8023df6f.html

Question 5

Question 6

Explanation

To become a neighbor, the following conditions must be met:
+ The router must hear a Hello packet from a neighbor.
+ The EIGRP autonomous system (AS) must be the same.
+ K-values must be the same.

Question 7

Question 8

Question 9

Explanation

Below is an example of the “show ip eigrp neighbors” output.

EIGRP_show_ip_eigrp_neighbors_2.jpg

Let’s analyze these columns:

+ H: lists the neighbors in the order in which they were learned by this router
+ Address: the IP address of the neighbors
+ Interface: the interface of the local router on which this Hello packet was received
+ Hold (sec): the amount of time left before neighbor is considered in “down” status
+ Uptime: amount of time since the adjacency was established
+ SRTT (Smooth Round Trip Timer): the average time in milliseconds between the transmission of a packet to a neighbor and the receipt of an acknowledgement.
+ RTO (Retransmission Timeout): if a multicast has failed, then a unicast is sent to that particular router, the RTO is the time in milliseconds that the router waits for an acknowledgement of that unicast.
+ Queue count (Q Cnt): shows the number of queued EIGRP packets. It is usually 0.
+ Sequence Number (Seq Num): the sequence number of the last update EIGRP packet received. Each update message is given a sequence number, and the received ACK should have the same sequence number. The next update message to that neighbor will use Seq Num + 1.

In this question we have to check the RTO and Q cnt fields.

Question 10

Explanation

By default, EIGRP uses only the bandwidth & delay parameters to calculate the metric (metric = bandwidth + delay). In particular, EIGRP uses the slowest bandwidth of the outgoing interfaces of the route to calculate the metric as follows:

EIGRP_fomula.jpg

For an example of how EIGRP calculates the metric, please read our EIGRP tutorial (part 3).

Question 11

Question 12

Distribute List

July 26th, 2019 digitaltut 26 comments

Question 1

Question 2

Explanation

A distribute list is used to filter routing updates either coming to or leaving from our router. In this case, the “out” keyword specifies we want to filter traffic leaving from our router. Access-list 2 indicates only routing update for network 1.2.3.0/24 is allowed (notice that every access-list always has an implicit “deny all” at the end).
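
Putting the pieces of the explanation together, the configuration would look something like this (the routing protocol depends on the actual question; EIGRP AS 1 is assumed here purely for illustration):

access-list 2 permit 1.2.3.0 0.0.0.255
!
router eigrp 1
distribute-list 2 out //filter routing updates leaving this router; only 1.2.3.0/24 is advertised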

Question 3

Explanation

To prevent routing updates through a specified interface, use the passive-interface type number command in router configuration mode.

Reference: https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/iproute_pi/configuration/xe-3s/iri-xe-3s-book/iri-default-passive-interface.html
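
For example (the process number and interface are hypothetical):

router ospf 1
passive-interface GigabitEthernet0/1 //suppress routing updates (hellos) out of this interface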

Policy Based Routing

July 25th, 2019 digitaltut 6 comments

Question 1

Explanation

Normal policy based routing (PBR) is used to route packets that pass through the device. Packets that are generated by the router (itself) are not normally policy-routed. To control these packets, local PBR should be used. For example: Router(config)# ip local policy route-map map-tag (compared with normal PBR: Router(config-if)# ip policy route-map map-tag)

Reference: http://www.cisco.com/c/en/us/td/docs/ios/12_2/qos/configuration/guide/fqos_c/qcfpbr.html
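
A combined sketch of both forms is shown below (the ACL, addresses and route-map name are hypothetical):

access-list 10 permit 192.168.10.0 0.0.0.255
!
route-map PBR-DEMO permit 10
match ip address 10
set ip next-hop 10.1.1.2
!
interface GigabitEthernet0/0
ip policy route-map PBR-DEMO //normal PBR: applies to transit packets entering this interface
!
ip local policy route-map PBR-DEMO //local PBR: applies to packets generated by the router itself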

Question 2

Explanation

The set command specifies the action(s) to take on the packets that match the criteria. You can specify any or all of the following:

* precedence: Sets precedence value in the IP header. You can specify either the precedence number or name.
* df: Sets the “Don’t Fragment” (DF) bit in the ip header.
* vrf: Sets the VPN Routing and Forwarding (VRF) instance.
* next-hop: Sets next hop to which to route the packet.
* next-hop recursive: Sets next hop to which to route the packet if the hop is to a router which is not adjacent.
* interface: Sets output interface for the packet.
* default next-hop: Sets next hop to which to route the packet if there is no explicit route for this destination.
* default interface: Sets output interface for the packet if there is no explicit route for this destination.

route_map_set_command1.jpg

route_map_set_command.jpg

(Reference: http://www.cisco.com/en/US/docs/ios/12_2/qos/configuration/guide/qcfpbr_ps1835_TSD_Products_Configuration_Guide_Chapter.html)

Question 3

Explanation

The “show route-map <route-map name>” command displays the policy routing match counts, so we can learn whether PBR reacts to packets sourced from 172.16.0.0/16 or not.

show_route-map_divert.jpg

Question 4

Explanation

First we should check the access-list log; if the hit count does not increase then no packets matched the access-list -> the policy-based routing match counts will not increase.

BGP Questions

July 24th, 2019 digitaltut 66 comments

BGP Quick Summary:

Protocol type: Path Vector
Type: EGP (External Gateway Protocol)
Packet Types: Open, Update, KeepAlive, Notification
Administrative Distance: eBGP: 20; iBGP: 200
Transport: TCP port 179
Neighbor States: Idle -> Connect -> Active -> Open Sent -> Open Confirm -> Established
1 – Idle: the initial state of a BGP connection. In this state, the BGP speaker is waiting for a BGP start event, generally either the establishment of a TCP connection or the re-establishment of a previous connection. Once the connection is established, BGP moves to the next state.
2 – Connect: In this state, BGP is waiting for the TCP connection to be formed. If the TCP connection completes, BGP will move to the OpenSent stage; if the connection cannot complete, BGP goes to Active
3 – Active: In the Active state, the BGP speaker is attempting to initiate a TCP session with the BGP speaker it wants to peer with. If this can be done, the BGP state goes to OpenSent state.
4 – OpenSent: the BGP speaker is waiting to receive an OPEN message from the remote BGP speaker
5 – OpenConfirm: Once the BGP speaker receives the OPEN message and no error is detected, the BGP speaker sends a KEEPALIVE message to the remote BGP speaker
6 – Established: All of the neighbor negotiations are complete. You will see a number, which tells us the number of prefixes the router has received from a neighbor or peer group.
Path Selection Attributes: (highest) Weight > (highest) Local Preference > Originate > (shortest) AS Path > Origin > (lowest) MED > External > IGP Cost > eBGP Peering > (lowest) Router ID
(Originate: prefer routes that it installed into BGP by itself over a route that another router installed in BGP)
Authentication: MD5
BGP Origin codes: i – IGP (injected by “network” statement), e – EGP, ? – Incomplete
AS number range: Private AS range: 64512 – 65535, Globally (unique) AS: 1 – 64511

More information about popular Path Selection Attributes
Weight Attribute:
+ Cisco proprietary
+ First attribute used in Path selection
+ Only used locally on a router (not exchanged between BGP neighbors)
+ Higher weight is preferred
Weight_BGP_Attribute_Influence.jpg

Local Preference (LocalPrf) Attribute:
+ Sent to all iBGP neighbors (not exchanged with eBGP neighbors)
+ Used to choose the path to external BGP neighbors
+ Higher value is preferred
+ Default value is 100

LocalPreference_BGP_Influence.jpg

MED Attribute:
+ Optional nontransitive attribute (nontransitive means that we can only advertise MED to routers that are one AS away)
+ Sent to external BGP neighbors (not propagated beyond the receiving AS)
+ Lower value is preferred (it can be considered the external metric of a route)
+ Default value is 0

MED_BGP_Attribute_Influence.jpg

 

Question 1

Explanation

Private autonomous system (AS) numbers, which range from 64512 to 65535, are used to conserve globally unique AS numbers. Globally unique AS numbers (1 – 64511) are assigned by InterNIC. These private AS numbers cannot be leaked to a global Border Gateway Protocol (BGP) table because they are not unique (BGP best-path calculation expects unique AS numbers).

Reference: http://www.cisco.com/c/en/us/support/docs/ip/border-gateway-protocol-bgp/13756-32.html
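
If a private AS number somehow enters the AS path, it can be stripped before updates are sent to an eBGP peer with the “remove-private-as” option (the AS numbers and peer address below are illustrative):

router bgp 100
neighbor 192.0.2.1 remote-as 200
neighbor 192.0.2.1 remove-private-as //strip private AS numbers from the AS path of updates sent to this eBGP peer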

Question 2

Explanation

If the MTU values on the two interfaces are mismatched, the BGP neighbors may flap: the BGP state drops and the logs report missing BGP keepalives, or the other peer terminates the session.

For more information about MTU mismatched between BGP neighbors please read: http://www.cisco.com/c/en/us/support/docs/ip/border-gateway-protocol-bgp/116377-troubleshoot-bgp-mtu.html

Question 3

Explanation

Notice that the Administrative Distance (AD) of External BGP (eBGP) is 20 while the AD of internal BGP (iBGP) is 200.

Question 4

Explanation

Private autonomous system (AS) numbers, which range from 64512 to 65535, are used to conserve globally unique AS numbers. These private AS numbers cannot be leaked to a global BGP table because they are not unique.

Question 5


Explanation

BGP Neighbor states are: Idle – Connect – Active – Open Sent – Open Confirm – Established

Question 6


Explanation

BGP peers are established by manual configuration between routing devices to create a TCP session on (destination) port 179.
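
A minimal sketch of such a manual configuration (the AS numbers and peer address are hypothetical):

router bgp 65001
neighbor 192.0.2.2 remote-as 65002 //eBGP peering; the session runs over TCP port 179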

Question 7


Explanation

Below is the list of BGP states in order, from startup to peering:

1 – Idle: the initial state of a BGP connection. In this state, the BGP speaker is waiting for a BGP start event, generally either the establishment of a TCP connection or the re-establishment of a previous connection. Once the connection is established, BGP moves to the next state.
2 – Connect: In this state, BGP is waiting for the TCP connection to be formed. If the TCP connection completes, BGP will move to the OpenSent stage; if the connection cannot complete, BGP goes to Active
3 – Active: In the Active state, the BGP speaker is attempting to initiate a TCP session with the BGP speaker it wants to peer with. If this can be done, the BGP state goes to OpenSent state.
4 – OpenSent: the BGP speaker is waiting to receive an OPEN message from the remote BGP speaker
5 – OpenConfirm: Once the BGP speaker receives the OPEN message and no error is detected, the BGP speaker sends a KEEPALIVE message to the remote BGP speaker
6 – Established: All of the neighbor negotiations are complete. You will see a number (2 in this case), which tells us the number of prefixes the router has received from a neighbor or peer group.

Question 8

Question 9

Question 10

Redistribution Questions

July 23rd, 2019 digitaltut 15 comments

Question 1

Question 2

Question 3

DHCP & DHCPv6 Questions

July 22nd, 2019 digitaltut 31 comments

Question 1

Explanation

DHCP options 3, 66, and 150 are used to configure Cisco IP Phones. Cisco IP Phones download their configuration from a TFTP server. When a Cisco IP Phone starts, if it does not have both the IP address and TFTP server IP address preconfigured, it sends a request with option 150 or 66 to the DHCP server to obtain this information.
+ DHCP option 150 provides the IP addresses of a list of TFTP servers.
+ DHCP option 66 gives the IP address or the hostname of a single TFTP server.

Reference: http://www.cisco.com/c/en/us/td/docs/security/asa/asa84/configuration/guide/asa_84_cli_config/basic_dhcp.pdf
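
For example, on an IOS DHCP server these options can be added to the pool like this (the addresses are illustrative):

ip dhcp pool VOICE
network 10.10.10.0 255.255.255.0
default-router 10.10.10.1
option 150 ip 10.10.10.5 //IP address of the TFTP server used by the IP Phones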

Question 2

Explanation

Most vendor’s routers/switches have the ability to function as:
+ A DHCP client and obtain an interface IPv4 address from an upstream DHCP service
+ A DHCP relay and forward UDP DHCP messages from clients on a LAN to and from a DHCP server
+ A DHCP server whereby the router/switch services DHCP requests directly

Question 3

Explanation

Extended Unique Identifier (EUI) allows a host to assign itself a unique 64-bit IPv6 interface identifier (EUI-64). This feature is a key benefit over IPv4 as it eliminates the need for manual configuration or DHCP as in the world of IPv4. The IPv6 EUI-64 format address is obtained from the 48-bit MAC address. The MAC address is first separated into two 24-bit halves, one being the OUI (Organizationally Unique Identifier) and the other being NIC specific. The 16-bit value 0xFFFE is then inserted between these two halves to form the 64-bit EUI address. IEEE has chosen FFFE as a reserved value which can only appear in an EUI-64 generated from an EUI-48 MAC address.

Question 4

Explanation

Please note that the “ipv6 address autoconfig” command is configured on the DHCP Relay Agent (not on the DHCP Server). A configuration example can be found at https://community.cisco.com/t5/networking-documents/stateful-dhcpv6-relay-configuration-example/ta-p/3149338

Question 5

Explanation

A DHCPv6 configuration information pool is a named entity that includes information about available configuration parameters and policies that control assignment of the parameters to clients from the pool. A pool is configured independently of the DHCPv6 service and is associated with the DHCPv6 service through the command-line interface (CLI).
Each configuration pool can contain the following configuration parameters and operational information:
+ Prefix delegation information, which could include:
– A prefix pool name and associated preferred and valid lifetimes
– A list of available prefixes for a particular client and associated preferred and valid lifetimes
+ A list of IPv6 addresses of DNS servers
+ A domain search list, which is a string containing domain names for DNS resolution

Reference: http://www.cisco.com/c/en/us/td/docs/ios-xml/ios/ipv6/configuration/15-2mt/ipv6-15-2mt-book/ip6-dhcp.html

This is how to configure a DHCPv6 pool:

ipv6 unicast-routing
ipv6 dhcp pool <pool name>
address prefix <specify address prefix> lifetime <infinite> <infinite>
dns-server <specify the dns server address>
domain-name <specify the domain name>

For example:

ipv6 dhcp pool test
address prefix 2010:AA01:10::/64 lifetime infinite infinite
dns-server AAAA:BBBB:10FE:100::15
dns-server 2010:AA01::15
domain-name example.com

Reference: https://supportforums.cisco.com/document/116221/part-1-implementing-dhcpv6-stateful-dhcpv6

So we can see that a DHCPv6 pool supports an address prefix, DNS servers, and a domain search list.

Question 6

Explanation

Note: A DHCPv6 relay agent is used to relay (forward) messages between the DHCPv6 client and server.

Servers and relay agents listen for DHCP messages on UDP port 547, so if a DHCPv6 relay agent cannot receive DHCP messages (because port 547 is blocked) then the routers (clients) will not obtain DHCPv6 prefixes.

We are not sure about answer D but maybe it is related to the (absence of) “Reload Persistent Interface ID” in DHCPv6 Relay Options. This feature makes the interface ID option persistent. The interface ID is used by relay agents to decide which interface should be used to forward a RELAY-REPLY packet. A persistent interface-ID option will not change if the router acting as a relay agent goes offline during a reload or a power outage. When the router acting as a relay agent returns online, it is possible that changes to the internal interface index of the relay agent may have occurred in certain scenarios (for example, when the relay agent reboots and the number of interfaces in the interface index changes, or when the relay agent boots up and has more virtual interfaces than it did before the reboot). This feature prevents such scenarios from causing any problems.

Reference: http://www.cisco.com/c/en/us/td/docs/ios-xml/ios/ipaddr_dhcp/configuration/15-e/dhcp-15-e-book/dhcp-15e-book_chapter_010.html

Question 7

Explanation

In this topology DSW1 is the DHCPv6 Relay agent so it should relay (forward) the DHCPv6 Request packets (from the clients) out of its Gi1/2 interface to the DHCPv6 server. The command “ipv6 dhcp relay destination …” is used to complete this task.

Note: There is no “default-router” command for DHCPv6. The “ipv6 dhcp relay destination” command does not need to be configured on every router along the path between the client and the server. It is required ONLY on the router functioning as the DHCPv6 relay agent.

Reference: http://www.cisco.com/c/en/us/products/collateral/ios-nx-os-software/enterprise-ipv6-solution/whitepaper_c11-689821.html
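
For reference, a minimal DHCPv6 relay configuration is applied on the client-facing interface of the relay agent (the interface and addresses below are examples, not the ones from the exam topology):

interface Vlan10
 ipv6 dhcp relay destination 2001:DB8:100::10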

Question 8

Explanation

The “ip helper-address” command is only configured in interface mode so it is not the correct answer.

Note: The Cisco IOS software provides the global configuration command “ip forward-protocol” to allow an administrator to forward any UDP port in addition to the eight default UDP Services. For example, to forward UDP on port 517, use the global configuration command “ip forward-protocol udp 517”. But the eight default UDP Services include DHCP services so it is not the suitable answer.

Reference and good resource: http://www.ciscopress.com/articles/article.asp?p=330807&seqNum=9

A DHCP relay agent may receive a message from another DHCP relay agent that already contains relay information. By default, the relay information from the previous relay agent is replaced. If this behavior is not suitable for your network, you can use the ip dhcp relay information policy {drop | keep | replace} global configuration command to change it -> Therefore this is the correct answer.

Reference: https://www.cisco.com/en/US/docs/ios/12_4t/ip_addr/configuration/guide/htdhcpre.html
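
For example, to keep (rather than replace) the relay information already present in a forwarded DHCP packet, the global command would be:

ip dhcp relay information policy keep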

Question 9

Explanation

If the DHCP Server is not on the same subnet with the DHCP Client, we need to configure the router on the DHCP client side to act as a DHCP Relay Agent so that it can forward DHCP messages between the DHCP Client & DHCP Server. To make a router a DHCP Relay Agent, simply put the “ip helper-address <IP-address-of-DHCP-Server>” command under the interface that receives the DHCP messages from the DHCP Client.

Question 10

EVN & VRF Questions

July 21st, 2019 digitaltut 56 comments

Quick review:

Easy Virtual Network (EVN) is an IP-based network virtualization solution that helps enable network administrators to provide traffic separation and path isolation on a shared network infrastructure. EVN uses existing Virtual Route Forwarding (VRF)-Lite technology to:
+ Simplify Layer 3 network virtualization
+ Improve shared services support
+ Enhance management, troubleshooting, and usability

Question 1

Explanation

All the subinterfaces and associated EVNs have the same IP address assigned. In other words, a trunk interface is identified by the same IP address in different EVN contexts. EVN automatically generates subinterfaces for each EVN. For example, both Blue and Green VPN Routing and Forwarding (VRF) use the same IP address of 10.0.0.1 on their trunk interface:

vrf definition Blue
vnet tag 100
vrf definition Green
vnet tag 200
!
interface gigabitethernet0/0/0
vnet trunk
ip address 10.0.0.1 255.255.255.0

-> A is correct.

In fact answers B & C are not correct because each EVN has a separate routing table and forwarding table.

Note: The combination of the VPN IP routing table and the associated VPN IP forwarding table is called a VPN routing and forwarding (VRF) instance.

Question 2

Explanation

EVN is supported on any interface that supports 802.1q encapsulation, for example, an Ethernet interface. Instead of adding a new field to carry the VNET tag in a packet, the VLAN ID field in 802.1q is repurposed to carry a VNET tag. The VNET tag uses the same position in the packet as a VLAN ID. On a trunk interface, the packet gets re-encapsulated with a VNET tag. Untagged packets carrying the VLAN ID are not EVN packets and could be transported over the same trunk interfaces.

Reference: http://www.cisco.com/c/en/us/products/collateral/ios-nx-os-software/layer-3-vpns-l3vpn/whitepaper_c11-638769.html

Question 3

Explanation

An example of using “autonomous-system {autonomous-system-number}” command is shown below:

router eigrp 100
address-family ipv4 vrf Cust
net 192.168.12.0
autonomous-system 100
no auto-summary

This configuration is performed on the Provider Edge (PE) router to run EIGRP with a Customer Edge (CE) router. The “autonomous-system 100” command indicates that EIGRP AS 100 is running between the PE & CE routers.

Question 4

Question 5

Question 6

Explanation

EVN builds on the existing IP-based virtualization mechanism known as VRF-Lite. EVN provides enhancements in path isolation, simplified configuration and management, and improved shared service support

Reference: http://www.cisco.com/c/en/us/td/docs/ios-xml/ios/evn/configuration/xe-3s/evn-xe-3s-book/evn-overview.html

Maybe the “improved shared services support” term here refers to the support for sharing routes between different VRFs (through route-target and MP-BGP)

Question 7

Explanation

This question is not clear because we have to configure a static route pointing to the global routing table even though it states that “all interfaces are in the same VRF”. In any case, we should understand that both the outside and inside interfaces want to ping the loopback interface.

Question 8

Explanation

EVN supports IPv4, static routes, Open Shortest Path First version 2 (OSPFv2), and Enhanced Interior Gateway Routing Protocol (EIGRP) for unicast routing, and Protocol Independent Multicast (PIM) and Multicast Source Discovery Protocol (MSDP) for IPv4 Multicast routing. EVN also supports Cisco Express Forwarding (CEF) and Simple Network Management Protocol (SNMP).

Reference: http://www.cisco.com/c/en/us/td/docs/ios-xml/ios/evn/configuration/xe-3s/evn-xe-3s-book/evn-overview.html

Question 9

Explanation

A route-target is tagged to each prefix when it is exported from a VRF. In other words, when a prefix is exported with a route-target, an extended BGP community is attached to that prefix. If this community matches the (import) route-target on the receiving side, the prefix is imported into the receiving VRF.

Question 10

Explanation

Easy Virtual Network (EVN) is an IP-based virtualization technology that provides end-to-end virtualization of two or more Layer-3 networks. You can use a single IP infrastructure to provide separate virtual networks whose traffic paths remain isolated from each other.

An EVN trunk interface connects VRF-aware routers together and provides the core with a means to transport traffic for multiple EVNs. Trunk interfaces carry tagged traffic. The tag is used to de-multiplex the packet into the corresponding EVN. A trunk interface has one subinterface for each EVN. The vnet trunk command is used to define an interface as an EVN trunk interface.

In other words, EVN trunk interfaces allow multiple VRFs to use the same physical interfaces for transmission but the data of each VRF is treated separately. Without EVN trunk interfaces we need to create many subinterfaces. Therefore virtual network trunk (VNET) decreases the network configuration required.

Note: There is no “Easy Trunk” component or technology.

EVN & VRF Questions 2

July 21st, 2019 digitaltut 9 comments

Question 1

Explanation

Route replication allows shared services because routes are replicated between virtual networks and clients who reside in one virtual network can reach prefixes that exist in another virtual network.

Reference: http://www.cisco.com/c/en/us/td/docs/ios-xml/ios/evn/configuration/xe-3s/evn-xe-3s-book/evn-shared-svcs.html

Question 2

Question 3

Explanation

Path isolation can be achieved by using a unique tag for each Virtual Network (VN) -> Answer A is correct.

Instead of adding a new field to carry the VNET tag in a packet, the VLAN ID field in 802.1q is repurposed to carry a VNET tag. The VNET tag uses the same position in the packet as a VLAN ID. On a trunk interface, the packet gets re-encapsulated with a VNET tag. Untagged packets carrying the VLAN ID are not EVN packets and could be transported over the same trunk interfaces -> Answer E is correct.

Reference: http://www.cisco.com/c/en/us/products/collateral/ios-nx-os-software/layer-3-vpns-l3vpn/whitepaper_c11-638769.html

Question 4

Explanation

We are trying to ping the 192.168.1.2 in vrf Yellow but the Serial0/0 interfaces of both routers do not belong to this VRF so the ping fails. We need to configure S0/0 interfaces with the “ip vrf forwarding Yellow” (under interface S0/0) in order to put these interfaces into VRF Yellow.
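
A minimal sketch of the fix (the IP address is an assumption; note that assigning an interface to a VRF removes its configured IP address, so the address must be re-applied):

interface Serial0/0
 ip vrf forwarding Yellow
 ip address 192.168.1.1 255.255.255.252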

Question 5

Question 6

Question 7

Explanation

In the link http://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst4500/12-2/25ew/configuration/guide/conf/vrf.html there is a notice about route-target command: “Note: This command is effective only if BGP is running.” -> C is correct.

Answer A & F are not correct as only route distinguisher (RD) identifies the customer routing table and “allows customers to be assigned overlapping addresses”.

Answer E is not correct as “When BGP is configured, route targets are transmitted as BGP extended communities”

Question 8

Explanation

With VRF-Lite, if you want to send traffic for multiple virtual networks (that is, multiple VRFs) between two routers you need to create a subinterface for each VRF on each router -> VRF-Lite requires subinterfaces. However, with Cisco EVN, you instead create a trunk (called a Virtual Network (VNET) trunk) between the routers. Then, traffic for multiple virtual networks can travel over that single trunk interface, which uses tags to identify the virtual networks to which packets belong.

Note: Both Cisco EVN and VRF-Lite allow a single physical router to run multiple virtual router instances, and both technologies allow routes from one VRF to be selectively leaked to other VRFs. However, a major difference is the way that two physical routers interconnect. With VRF-Lite, a router is configured with multiple subinterfaces, one for each VRF. However, with Cisco EVN, routers interconnect using a VNET trunk, which simplifies configuration.

Reference: CCNP Routing and Switching ROUTE 300-101 Official Cert Guide

All EVNs within a trunk interface share the same IP infrastructure as they are on the same physical interface -> Answer C is correct.

With EVNs, a trunk interface is shared among VRFs so each command configured under this trunk is applied by all EVNs -> Answer E is correct.

Question 9

Question 10

IPv6 Questions

July 20th, 2019 digitaltut 34 comments

Question 1

Explanation

Dual-stack is the most common technique: devices (hosts and routers) run both the IPv4 and IPv6 protocol stacks in parallel, so they can communicate natively with either protocol. No packet translation is involved with this method.

6to4 tunnel is a technique which relies on reserved address space 2002::/16 (you must remember this range). These tunnels determine the appropriate destination address by combining the IPv6 prefix with the globally unique destination 6to4 border router’s IPv4 address, beginning with the 2002::/16 prefix, in this format:

2002:border-router-IPv4-address::/48

For example, if the border-router-IPv4-address is 64.101.64.1, the tunnel interface will have an IPv6 prefix of 2002:4065:4001:1::/64, where 4065:4001 is the hexadecimal equivalent of 64.101.64.1. This technique allows IPv6 sites to communicate with each other over the IPv4 network without explicit tunnel setup, but it has to be implemented on the border routers of the IPv6 sites (the IPv4 routers along the path do not need any IPv6 configuration).

NAT-PT provides IPv4/IPv6 protocol translation. It resides within an IP router, situated at the boundary of an IPv4 network and an IPv6 network. By installing NAT-PT between an IPv4 and IPv6 network, all IPv4 users are given access to the IPv6 network without modification in the local IPv4-hosts (and vice versa). Equally, all hosts on the IPv6 network are given access to the IPv4 hosts without modification to the local IPv6-hosts. This is accomplished with a pool of IPv4 addresses for assignment to IPv6 nodes on a dynamic basis as sessions are initiated across IPv4-IPv6 boundaries.

Question 2

Explanation

Overlay tunneling encapsulates IPv6 packets in IPv4 packets for delivery across an IPv4 infrastructure (a core network or the Internet). By using overlay tunnels, you can communicate with isolated IPv6 networks without upgrading the IPv4 infrastructure between them. Overlay tunnels can be configured between border routers or between a border router and a host; however, both tunnel endpoints must support both the IPv4 and IPv6 protocol stacks.

IPv6_tunneling.jpg

Reference: http://www.cisco.com/c/en/us/td/docs/ios/ipv6/configuration/guide/12_4t/ipv6_12_4t_book/ip6-tunnel.html

Question 3

Explanation

In Stateless Configuration mode, hosts listen for Router Advertisement (RA) messages, which are transmitted periodically by the router. An RA message allows a host to create a global IPv6 address from:
+ Its interface identifier (EUI-64 address)
+ Link Prefix (obtained via RA)
Note: Global address is the combination of Link Prefix and EUI-64 address
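
On a Cisco IOS interface, stateless autoconfiguration can be enabled like this (the interface name is only an example):

interface GigabitEthernet0/0
 ipv6 enable
 ipv6 address autoconfig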

Question 4

Explanation

The IPv6 EUI-64 format address is obtained from the 48-bit MAC address. The MAC address is first separated into two 24-bit halves, with one being the OUI (Organizationally Unique Identifier) and the other being NIC specific. The 16-bit value 0xFFFE is then inserted between these two halves to form the 64-bit EUI address. IEEE has chosen FFFE as a reserved value which can only appear in an EUI-64 generated from an EUI-48 MAC address.

In this question, the MAC address C601.420F.0007 is divided into two 24-bit parts, which are “C60142” (OUI) and “0F0007” (NIC). Then “FFFE” is inserted in the middle. Therefore we have the address: C601.42FF.FE0F.0007.

Then, according to the RFC 3513 we need to invert the Universal/Local bit (“U/L” bit) in the 7th position of the first octet. The “u” bit is set to 1 to indicate Universal, and it is set to zero (0) to indicate local scope. In this case we don’t need to set this bit to 1 because it is already 1 (C6 = 11000110).

Therefore with the subnet of 2001:DB8:0:1::/64, the full IPv6 address is 2001:DB8:0:1:C601:42FF:FE0F:7/64
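
For reference, the router builds this kind of address itself when the prefix is configured with the eui-64 keyword, for example:

interface GigabitEthernet0/0
 ipv6 address 2001:DB8:0:1::/64 eui-64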

Question 5

Question 6

Explanation

NPTv6 stands for IPv6-to-IPv6 Network Prefix Translation. It is a form of NAT for IPv6 and it supports one-to-one translation between inside and outside addresses.

Question 7

Explanation

The command “ipv6 flowset” allows the device to track destinations to which the device has sent packets that are 1280 bytes or larger.

Question 8

Explanation

NAT64 is used to make IPv4-only servers available to IPv6 clients.

Note:
NAT44 – NAT from IPv4 to IPv4
NAT66 – NAT from IPv6 to IPv6
NAT46 – NAT from IPv4 to IPv6
NAT64 – NAT from IPv6 to IPv4

Question 9

Explanation

The IPv6 EUI-64 format address is obtained from the 48-bit MAC address. The MAC address is first separated into two 24-bit halves, with one being the OUI (Organizationally Unique Identifier) and the other being NIC specific. The 16-bit value 0xFFFE is then inserted between these two halves to form the 64-bit EUI address. IEEE has chosen FFFE as a reserved value which can only appear in an EUI-64 generated from an EUI-48 MAC address.

Question 10

Explanation

IPv6 allows devices to configure their own IP addresses and other parameters automatically without the need for a DHCP server. This method is called “IPv6 Stateless Address Autoconfiguration” (which contrasts with the server-based method using DHCPv6, called “stateful”). In the Stateless Autoconfiguration method, a host sends a router solicitation to request a prefix. The router then replies with a router advertisement (RA) message which contains the prefix of the link. The host will use this prefix and its MAC address to create its own unique IPv6 address.

Note:
+ RA messages are sent periodically and in response to device solicitation messages
+ In the absence of a router, a host can generate only link-local addresses. Link-local addresses are only sufficient for allowing communication among nodes that are attached to the same link

IPv6 Questions 2

July 20th, 2019 digitaltut 29 comments

Question 1

Question 2

Question 3

Explanation

Address Family Translation (AFT) using NAT64 technology can be achieved by either stateless or stateful means:
+ Stateless NAT64 is a translation mechanism for algorithmically mapping IPv6 addresses to IPv4 addresses, and IPv4 addresses to IPv6 addresses. Like NAT44, it does not maintain any bindings or session state while performing translation, and it supports both IPv6-initiated and IPv4-initiated communications.
+ Stateful NAT64 is a stateful translation mechanism for translating IPv6 addresses to IPv4 addresses, and IPv4 addresses to IPv6 addresses. Like NAT44, it is called stateful because it creates or modifies bindings or session state while performing translation. It supports both IPv6-initiated and IPv4-initiated communications using static or manual mappings.

Reference: http://www.cisco.com/c/en/us/products/collateral/ios-nx-os-software/enterprise-ipv6-solution/white_paper_c11-676278.html

Question 4

Question 5

Explanation

When a change is made to one of the IP header fields in the IPv6 pseudo-header checksum (such as one of the IP addresses), the checksum field in the transport layer header may become invalid. Fortunately, an incremental change in the area covered by the Internet standard checksum [RFC1071] will result in a well-defined change to the checksum value [RFC1624]. So, a checksum change caused by modifying part of the area covered by the checksum can be corrected by making a complementary change to a different 16-bit field covered by the same checksum.

Reference: https://tools.ietf.org/html/rfc6296

Question 6

Question 7

Explanation

Link-local addresses are always configured with the FE80::/64 prefix. Most routing protocols use the link-local address for a next-hop.

Question 8

Explanation

A link-local address is an IPv6 unicast address that can be automatically configured on any interface using the link-local prefix FE80::/10 (1111 1110 10) and the interface identifier in the modified EUI-64 format. Link-local addresses are not necessarily bound to the MAC address (configured in a EUI-64 format). Link-local addresses can also be manually configured in the FE80::/10 format using the ipv6 address link-local command.

Reference: http://www.cisco.com/c/en/us/support/docs/ip/ip-version-6-ipv6/113328-ipv6-lla.html

Question 9

Explanation

Stateless Address Auto Configuration (SLAAC) is a method in which the host or router interface is assigned a 64-bit prefix, and the last 64 bits of its address are then derived by the host or router with the help of the EUI-64 process.

Question 10

Question 11

Explanation

The components of IPv6 header is shown below:

IPv6_header.jpg

The Traffic Class field (8 bits) is where quality of service (QoS) marking for Layer 3 can be identified. In a nutshell, the higher the value of this field, the more important the packet. Your Cisco routers (and some switches) can be configured to read this value and send a high-priority packet sooner than other lower ones during times of congestion. This is very important for some applications, especially VoIP.

The Flow Label field (20 bits) is originally created for giving real-time applications special service. The flow label when set to a non-zero value now serves as a hint to routers and switches with multiple outbound paths that these packets should stay on the same path so that they will not be reordered. It has further been suggested that the flow label be used to help detect spoofed packets.

The Hop Limit field (8 bits) is similar to the Time to Live field in the IPv4 packet header. The value of the Hop Limit field specifies the maximum number of routers that an IPv6 packet can pass through before the packet is considered invalid. Each router decrements the value by one. Because no checksum is in the IPv6 header, the router can decrease the value without needing to recalculate the checksum, which saves processing resources.

IPv6 Questions 3

July 20th, 2019 digitaltut 8 comments

Question 1

Explanation

We need to summarize three IPv6 prefixes that have a /64 prefix length, so the summarized route should have a shorter prefix length. As we can see, all four answers have the same summarized route of 2001:DB8::, so /48 is the best choice.

Note: An IPv6 address consists of eight 16-bit fields (8 x 16 = 128 bits). All the above prefixes start with 2001:DB8:0 (16 bits x 3 = 48 bits), so we need at least a /48 mask to summarize them.
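
If we wanted to advertise such a summary ourselves, one common (purely illustrative) approach is a static route to Null0 that the routing protocol in use then advertises:

ipv6 route 2001:DB8::/48 Null0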

Question 2

Question 3

Question 4

Explanation

DHCPv6 can be implemented in two ways: Rapid-Commit and Normal-Commit mode. In Rapid-Commit mode, the DHCP client obtains configuration parameters from the server through a rapid two-message exchange (solicit and reply).

The “solicit” message is sent out by the DHCP Client to verify that there is a DHCP Server available to handle its requests.

The “reply” message is sent out by the DHCP Server to the DHCP Client, and it contains the “configurable information” that the DHCP Client requested.

Just for your information, in Normal-Commit mode, the DHCP client uses four message exchanges (solicit, advertise, request and reply). By default normal-commit is used.
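
As a sketch only (exact syntax may vary by platform and release), rapid-commit is enabled on the interfaces of both the DHCPv6 server and the client:

! On the server interface (the pool name TEST is an assumption)
interface GigabitEthernet0/0
 ipv6 dhcp server TEST rapid-commit
! On the client interface
interface GigabitEthernet0/1
 ipv6 address dhcp rapid-commit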

Question 5

Explanation

The IPv6-to-IPv6 Network Prefix Translation (NPTv6) provides a mechanism to translate an inside IPv6 source address prefix to an outside IPv6 source address prefix in the IPv6 packet header, and vice versa. In other words, NPTv6 simply rewrites IPv6 prefixes. NPTv6 does not support overloading, and it does not support mismatched prefix lengths (the host portion of the address remains intact; for example, you cannot translate a /64 into a /48).

Question 6

Question 7

Question 8

Question 9

Explanation

The automatic tunneling mechanism uses a special type of IPv6 address, termed an “IPv4-compatible” address. An IPv4-compatible address is identified by an all-zeros 96-bit prefix, and holds an IPv4 address in the low-order 32-bits. IPv4-compatible addresses are structured as follows:

ipv6toIPv4structure

Therefore, an IPv4 address of 10.67.0.2 will be written as ::10.67.0.2 or 0:0:0:0:0:0:10.67.0.2 or ::0A43:0002 (with 10[decimal] = 0A[hex]; 67[decimal] = 43[hex]; 0[decimal] = 00[hex]; 2[decimal] = 02[hex])

Question 10

IPv6 Questions 4

July 20th, 2019 digitaltut 16 comments

Question 1

Question 2

Question 3

Explanation

An automatic 6to4 tunnel allows isolated IPv6 domains to be connected over an IPv4 network to remote IPv6 networks. The key difference between automatic 6to4 tunnels and manually configured tunnels is that the tunnel is not point-to-point; it is point-to-multipoint -> it allows multiple IPv4 destinations -> B is correct.

A is not correct because a manually configured tunnel is point-to-point -> it only allows one IPv4 destination.

Configuring 6to4 (manual and automatic) requires dual-stack routers (which support both IPv4 & IPv6) at the tunnel endpoints because they are border routers between the IPv4 & IPv6 networks.

(Reference: http://www.cisco.com/en/US/docs/ios/ipv6/configuration/guide/ip6-tunnel_ps6441_TSD_Products_Configuration_Guide_Chapter.html#wp1055515)

Question 4

Question 5

Explanation

In fact this question has no correct answer. The IPv6 EUI-64 format address is obtained from the 48-bit MAC address. The MAC address is first separated into two 24-bit halves, with one being the OUI (Organizationally Unique Identifier) and the other being NIC specific. The 16-bit value 0xFFFE is then inserted between these two halves to form the 64-bit EUI address. IEEE has chosen FFFE as a reserved value which can only appear in an EUI-64 generated from an EUI-48 MAC address.

For example, the MAC address C601.420F.0007 is divided into two 24-bit parts, which are “C60142” (OUI) and “0F0007” (NIC). Then “FFFE” is inserted in the middle. Therefore we have the address: C601.42FF.FE0F.0007.

Question 6

Explanation

6to4 tunnel is a technique which relies on reserved address space 2002::/16 (you must remember this range). These tunnels determine the appropriate destination address by combining the IPv6 prefix with the globally unique destination 6to4 border router’s IPv4 address, beginning with the 2002::/16 prefix, in this format:

2002:border-router-IPv4-address::/48

For example, if the border-router-IPv4-address is 192.168.99.1, the tunnel interface will have an IPv6 prefix of 2002:C0A8:6301::/64, where C0A8:6301 is the hexadecimal equivalent of 192.168.99.1.
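
A minimal 6to4 tunnel configuration on that border router might look like this (the interface names are assumptions; the prefix comes from the calculation above):

interface Tunnel0
 ipv6 address 2002:C0A8:6301::1/64
 tunnel source 192.168.99.1
 tunnel mode ipv6ip 6to4
!
ipv6 route 2002::/16 Tunnel0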

Question 7

RIPng Questions

July 19th, 2019 digitaltut 18 comments

Question 1

Explanation

The default timers of RIP and RIPng are the same. The meanings of these timers are described below:

Update: how often the router sends update. Default update timer is 30 seconds
Invalid (also called Expire): how much time must pass without seeing a valid update before a route becomes invalid and is placed into holddown. Default invalid timer is 180 seconds
Holddown: if RIP receives an update with a hop count (metric) higher than the hop count recorded in the routing table, RIP does not “believe in” that update during the holddown period. Default holddown timer is 180 seconds
Flush: how much time since the last valid update until RIP deletes that route from its routing table. Default flush timer is 240 seconds

RIPng_timer.jpg

Question 2

Question 3

Question 4

Question 5

Question 6

Explanation

This is how to disable split horizon processing for the IPv6 RIP routing process named digitaltut:

Router(config)# ipv6 router rip digitaltut
Router(config-rtr)#no split-horizon

Note: For RIP (IPv4), we have to disable/enable split horizon in interface mode. For example: Router(config-if)# ip split-horizon

Reference: https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/ipv6/command/ipv6-cr-book/ipv6-s6.html

Question 7

Explanation

This is how to change the timers for RIPng:

R1(config)#ipv6 router rip digitaltut
R1(config-rtr)#timers 5 15 10 30 (5: Update period; 15: Route timeout period; 10: Route holddown period; 30: Route garbage collection period)

Note: For IPv4 RIP, we have to change the timers in “(config-router)#”.

Security Questions

July 18th, 2019 digitaltut 19 comments

Question 1

Explanation

RADIUS combines authentication and authorization. The access-accept packets sent by the RADIUS server to the client contain authorization information. This makes it difficult to decouple authentication and authorization.

TACACS+ uses the AAA architecture, which separates AAA. This allows separate authentication solutions that can still use TACACS+ for authorization and accounting. For example, with TACACS+, it is possible to use Kerberos authentication and TACACS+ authorization and accounting. After a NAS authenticates on a Kerberos server, it requests authorization information from a TACACS+ server without having to re-authenticate. The NAS informs the TACACS+ server that it has successfully authenticated on a Kerberos server, and the server then provides authorization information.

During a session, if additional authorization checking is needed, the access server checks with a TACACS+ server to determine if the user is granted permission to use a particular command. This provides greater control over the commands that can be executed on the access server while decoupling from the authentication mechanism.

Reference: http://www.cisco.com/c/en/us/support/docs/security-vpn/remote-authentication-dial-user-service-radius/13838-10.html
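
For reference, a typical IOS configuration that authenticates and authorizes through TACACS+ looks roughly like this (the server name, address, and key are made up):

aaa new-model
tacacs server TAC1
 address ipv4 10.1.1.50
 key cisco123
aaa authentication login default group tacacs+ local
aaa authorization exec default group tacacs+ local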

Question 2

Explanation

Both RADIUS (Remote Authentication Dial-In User Service) and TACACS+ (Terminal Access Controller Access-Control System Plus) are the main protocols used to provide Authentication, Authorization, and Accounting (AAA) services on network devices.

Both RADIUS and TACACS+ support accounting of commands. Command accounting provides information about the EXEC shell commands for a specified privilege level that are being executed on a network access server. Each command accounting record includes a list of the commands executed for that privilege level, as well as the date and time each command was executed, and the user who executed it.

For example, to send accounting messages to the TACACS+ accounting server when you enter any command other than show commands at the CLI, use the aaa accounting commands command in global configuration mode.
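
An illustrative command (assuming AAA and a TACACS+ server group are already configured; the privilege level 15 here is just an example):

aaa accounting commands 15 default start-stop group tacacs+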

Note: TACACS+ was developed by Cisco from TACACS.

Reference: http://www.cisco.com/c/en/us/td/docs/ios/12_2/security/configuration/guide/fsecur_c/scfacct.html

Question 3

Explanation

TACACS+ encrypts the entire body of the packet (but leaves a standard TACACS+ header).

TACACS+ is an AAA protocol developed by Cisco.

Question 4

Question 5

Question 6

Explanation

RADIUS combines authentication and authorization. The access-accept packets sent by the RADIUS server to the client contain authorization information. This makes it difficult to decouple authentication and authorization.

Reference: https://www.cisco.com/c/en/us/support/docs/security-vpn/remote-authentication-dial-user-service-radius/13838-10.html

Unicast Reverse Path Forwarding

July 17th, 2019 digitaltut 34 comments

Question 1

Explanation

The Unicast Reverse Path Forwarding feature (Unicast RPF) helps the network guard against malformed or “spoofed” IP packets passing through a router. A spoofed IP address is one that is manipulated to have a forged IP source address. Unicast RPF enables the administrator to drop packets that lack a verifiable source IP address at the router.

Unicast RPF is enabled on a router interface. When this feature is enabled, the router checks packets that arrive inbound on the interface to see whether the source address matches the receiving interface. Cisco Express Forwarding (CEF) is required on the router because the Forwarding Information Base (FIB) is the mechanism checked for the interface match.

Unicast RPF works in one of three different modes:
+ Strict mode: router will perform two checks for all incoming packets on a certain interface. First check is if the router has a matching entry for the source in the routing table. Second check is if the router uses the same interface to reach this source as where it received this packet on.
+ Loose mode: only check if the router has a matching entry for the source in the routing table
+ VRF mode: leverage either loose or strict mode in a given VRF and will evaluate an incoming packet’s source IP address against the VRF table configured for an eBGP neighbor.

Reference: CCIE Routing and Switching v4.0 Quick Reference, 2nd Edition
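
For reference, enabling uRPF is a one-line interface command (the interface name is an example):

interface GigabitEthernet0/0
 ip verify unicast source reachable-via rx
! or, for loose mode:
 ip verify unicast source reachable-via any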

Question 2

Explanation

When Unicast Reverse Path Forwarding is enabled, the router checks packets that arrive inbound on the interface to see whether the source address matches the receiving interface.

Question 3

Explanation

First we need to understand the “allow-default” keyword here:

Normally, uRPF will not allow traffic that only matches the default route. The “allow-default” keyword will override this behavior and uRPF will allow traffic matched the default route to pass through.

In answer A, the “ip verify unicast source reachable-via rx allow-default” command under interface Fa0/0 enables uRPF strict mode on Fa0/0. Therefore traffic from the 172.16.1.0/24 network (and any other traffic) can go through this interface except traffic from the 10.0.0.0/8 network, because that network is matched via the Fa0/1 interface only. The 10.0.0.0/8 network can only enter the TUT router from Fa0/1, thus “limiting spoofed 10.0.0.0/8 hosts that could enter the router”.

Question 4

Explanation

Unicast Reverse Path Forwarding (uRPF) examines the source IP address of incoming packets. If it matches with the interface used to reach this source IP then the packets are allowed to enter (strict mode).

uRPF.jpg

The syntax of configuring uRPF in interface mode is:

ip verify unicast source reachable-via {rx | any} [allow-default] [allow-self-ping] [access-list-number]

+ The any option enables Loose Mode uRPF on the router. This mode allows the router to reach the source address via any interface.
+ The rx option enables Strict Mode uRPF on the router. This mode ensures that the router reaches the source address only via the interface on which the packet was received.
+ The allow-default option lets the default route be used when checking the source address -> Answer “allow default route” is a valid option.
+ The allow-self-ping option allows the router to ping itself -> Answer “allow self ping to router” is a valid option.
+ We can also apply an access-list to specify which traffic we do or do not want to check -> Answer “allow based on ACL match” is a valid option. An example is shown below:

Router(config)#access-list 110 permit ip 192.168.1.0 0.0.0.255 any
Router(config)#interface fa0/1
Router(config-if)#ip verify unicast source reachable-via any 110

Note: Access-list “permit” statements allow matched traffic to be forwarded even if it fails the Unicast RPF check, while access-list “deny” statements drop matched traffic that fails the Unicast RPF check. In the above example, the 192.168.1.0/24 network is allowed even if it fails the uRPF check.

The last option, “source reachable via both”, is not clearly defined, but it is the best answer in this case; it may refer to uRPF loose mode.

Question 5

Explanation

Unicast Reverse Path Forwarding (uRPF) examines the source IP address of incoming packets. If it matches with the interface used to reach this source IP then the packets are allowed to enter (strict mode).

uRPF.jpg

Unicast RPF is enabled on a router interface. When this feature is enabled, the router checks packets that arrive inbound on the interface to see whether the source address matches the receiving interface. Cisco Express Forwarding (CEF) is required on the router because the Forwarding Information Base (FIB) is the mechanism checked for the interface match.

Unicast RPF works in one of three different modes:
+ Strict mode: router will perform two checks for all incoming packets on a certain interface. First check is if the router has a matching entry for the source in the routing table. Second check is if the router uses the same interface to reach this source as where it received this packet on.
+ Loose mode: only check if the router has a matching entry for the source in the routing table
+ VRF mode: leverage either loose or strict mode in a given VRF and will evaluate an incoming packet’s source IP address against the VRF table configured for an eBGP neighbor.

Reference: CCIE Routing and Switching v4.0 Quick Reference, 2nd Edition

This question only mentions that “the network to which the packet’s source IP address belongs is found in the router’s FIB”, so loose mode will surely accept this packet.

Question 6

Explanation

When a packet is received at the interface where Unicast RPF and ACLs have been configured, the following actions occur:
Step 1: Input ACLs configured on the inbound interface are checked.
Step 2: Unicast RPF checks to see if the packet has arrived on the best return path to the source, which it does by doing a reverse lookup in the FIB table.
Step 3: CEF table (FIB) lookup is carried out for packet forwarding.
Step 4: Output ACLs are checked on the outbound interface.
Step 5: The packet is forwarded.

Reference: http://www.cisco.com/c/en/us/td/docs/ios/12_2/security/configuration/guide/fsecur_c/scfrpf.html

Question 7

Question 8

Explanation

The command “ip verify unicast source reachable-via any” enables uRPF in loose mode, which only checks if the router has a matching entry for the source in the routing table.

Question 9

Question 10

IP Services Questions

July 16th, 2019 digitaltut 17 comments

Question 1

Explanation

The switch validates DHCP packets received on the untrusted interfaces of VLANs with DHCP snooping enabled. The switch forwards the DHCP packet unless any of the following conditions occur (in which case the packet is dropped):
+ The switch receives a packet (such as a DHCPOFFER, DHCPACK, DHCPNAK, or DHCPLEASEQUERY packet) from a DHCP server outside the network or firewall.
+ The switch receives a packet on an untrusted interface, and the source MAC address and the DHCP client hardware address do not match. This check is performed only if the DHCP snooping MAC address verification option is turned on.
+ The switch receives a DHCPRELEASE or DHCPDECLINE message from an untrusted host with an entry in the DHCP snooping binding table, and the interface information in the binding table does not match the interface on which the message was received.
+ The switch receives a DHCP packet that includes a relay agent IP address that is not 0.0.0.0.

Reference: http://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst6500/ios/12-2SX/configuration/guide/book/snoodhcp.html#wp1101946
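
A minimal DHCP snooping configuration, for illustration (the VLAN number and interface are assumptions; the port toward the legitimate DHCP server is marked as trusted):

ip dhcp snooping
ip dhcp snooping vlan 10
interface GigabitEthernet0/1
 ip dhcp snooping trust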

Question 2

Explanation

We can test the behavior of HSRP by tracking the loopback interface and decreasing the HSRP priority so that the standby router can take over the active role, as sketched below.
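
A sketch of such a test, using the classic HSRP interface-tracking syntax (the group number, addresses, and decrement value are assumptions; newer IOS releases track objects instead):

interface GigabitEthernet0/0
 standby 1 ip 192.168.1.254
 standby 1 priority 110
 standby 1 preempt
 standby 1 track Loopback0 20

When Loopback0 goes down, the priority of this router drops by 20, allowing a standby router configured with preemption to take over the active role.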

Question 3

Explanation

The “ip http secure-port” command is used to set the secure HTTP (HTTPS) server port number for listening.

Question 4

Explanation

This command shows IPsec Security Associations (SAs) built between peers. An example of the output of above command is shown below:

Router#show crypto ipsec sa
interface: FastEthernet0
    Crypto map tag: test, local addr. 12.1.1.1
   local  ident (addr/mask/prot/port): (20.1.1.0/255.255.255.0/0/0)
   remote ident (addr/mask/prot/port): (10.1.1.0/255.255.255.0/0/0)
   current_peer: 12.1.1.2
     PERMIT, flags={origin_is_acl,}
    #pkts encaps: 7767918, #pkts encrypt: 7767918, #pkts digest 7767918
    #pkts decaps: 7760382, #pkts decrypt: 7760382, #pkts verify 7760382
    #pkts compressed: 0, #pkts decompressed: 0
    #pkts not compressed: 0, #pkts compr. failed: 0, 
    #pkts decompress failed: 0, #send errors 1, #recv errors 0
     local crypto endpt.: 12.1.1.1, remote crypto endpt.: 12.1.1.2
     path mtu 1500, media mtu 1500
     current outbound spi: 3D3
     inbound esp sas:
      spi: 0x136A010F(325714191)
        transform: esp-3des esp-md5-hmac ,
        in use settings ={Tunnel, }
        slot: 0, conn id: 3442, flow_id: 1443, crypto map: test
        sa timing: remaining key lifetime (k/sec): (4608000/52)
        IV size: 8 bytes
        replay detection support: Y
     inbound ah sas:
     inbound pcp sas:
outbound esp sas:
   spi: 0x3D3(979)
    transform: esp-3des esp-md5-hmac ,
    in use settings ={Tunnel, }
    slot: 0, conn id: 3443, flow_id: 1444, crypto map: test
    sa timing: remaining key lifetime (k/sec): (4608000/52)
    IV size: 8 bytes
    replay detection support: Y
outbound ah sas:
outbound pcp sas:

The first part shows the interface and the crypto map name that are associated with the interface. Then the inbound and outbound SAs are shown. These are either AH or ESP SAs. In this case, because you used only ESP, there are no AH inbound or outbound SAs.

Note: Maybe “inbound crypto map” here refers to the crypto map name.

Question 5

Explanation

The Management Plane Protection (MPP) feature in Cisco IOS software provides the capability to restrict the interfaces on which network management packets are allowed to enter a device. The MPP feature allows a network operator to designate one or more router interfaces as management interfaces. Device management traffic is permitted to enter a device only through these management interfaces. After MPP is enabled, no interfaces except designated management interfaces will accept network management traffic destined to the device.

With the management-interface <interface> allow <protocols> command, we can configure which of the following protocols are allowed on the designated management interface:

+ BEEP
+ FTP
+ HTTP
+ HTTPS
+ SSH, v1 and v2
+ SNMP, all versions
+ Telnet
+ TFTP

Therefore these are also the protocols that can be affected by MPP.

Reference: https://www.cisco.com/c/en/us/td/docs/ios/12_4t/12_4t11/htsecmpp.html
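
A minimal MPP configuration, for illustration (the interface and the allowed protocols are assumptions):

control-plane host
 management-interface GigabitEthernet0/0 allow ssh https snmp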

SNMP Questions

July 15th, 2019 digitaltut 23 comments

Note: If you are not sure about SNMP, please read our SNMP tutorial.

Question 1

Explanation

“The engineer is not concerned with authentication or encryption” so we don’t need to use SNMP version 3. And we only use “one-way SNMP notifications”, so SNMP messages should be sent as traps (no acknowledgement from the SNMP server is needed) -> A is correct.
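
A sketch of such a trap-only (one-way) SNMPv2c configuration, with a made-up community string and server address:

snmp-server community DIGITALTUT ro
snmp-server enable traps
snmp-server host 10.1.1.100 version 2c DIGITALTUT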

Question 2

Explanation

There are three SNMP security levels (for SNMPv1, SNMPv2c, and SNMPv3):

+ noAuthNoPriv: Security level that does not provide authentication or encryption.
+ authNoPriv: Security level that provides authentication but does not provide encryption.
+ authPriv: Security level that provides both authentication and encryption.

For SNMPv3, “noAuthNoPriv” level uses a username match for authentication.

Reference: http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus5000/sw/configuration/guide/cli_rel_4_0_1a/CLIConfigurationGuide/sm_snmp.html

Question 3

Explanation

The SNMPv3 Agent supports the following set of security levels:
+ NoAuthnoPriv: Communication without authentication and privacy.
+ AuthNoPriv: Communication with authentication and without privacy. The protocols used for Authentication are MD5 and SHA (Secure Hash Algorithm).
+ AuthPriv: Communication with authentication and privacy. The protocols used for Authentication are MD5 and SHA ; and for Privacy, DES (Data Encryption Standard) and AES (Advanced Encryption Standard) protocols can be used. For Privacy Support, you have to install some third-party privacy packages.

Question 4

Explanation

The SNMPv3 Agent supports the following set of security levels:
+ NoAuthnoPriv: Communication without authentication and privacy.
+ AuthNoPriv: Communication with authentication and without privacy. The protocols used for Authentication are MD5 and SHA (Secure Hash Algorithm).
+ AuthPriv: Communication with authentication and privacy. The protocols used for Authentication are MD5 and SHA ; and for Privacy, DES (Data Encryption Standard) and AES (Advanced Encryption Standard) protocols can be used. For Privacy Support, you have to install some third-party privacy packages.

In the CLI, we use “priv” keyword for “AuthPriv” (“noAuth” keyword for “noAuthnoPriv”; “auth” keyword for “AuthNoPriv”). The following example shows how to configure a remote user to receive traps at the “priv” security level when the SNMPv3 security model is enabled:
Router(config)# snmp-server group group1 v3 priv
Router(config)# snmp-server user PrivateUser group1 remote 1.2.3.4 v3 auth md5 password1 priv access des56

Question 5

Explanation

The “snmp-server manager” command is used to start the SNMP manager process. In other words, it allows the SNMP manager to begin sending SNMP requests to, and receiving responses from, SNMP agents.

SNMP_Components.jpg

Note: SNMP Manager (sometimes called Network Management System – NMS) is a software runs on the device of the network administrator (in most case, a computer) to monitor the network.

Question 6

Explanation

Both SNMPv1 and v2 did not focus much on security and they provide security based on community string only. Community string is really just a clear text password (without encryption). Any data sent in clear text over a network is vulnerable to packet sniffing and interception.

SNMPv3 provides significant enhancements to address the security weaknesses existing in the earlier versions. The concept of community string does not exist in this version. SNMPv3 provides a far more secure communication using entities, users and groups. This is achieved by implementing three new major features:
+ Message integrity: ensuring that a packet has not been modified in transit.
+ Authentication: by using password hashing (based on the HMAC-MD5 or HMAC-SHA algorithms) to ensure the message is from a valid source on the network.
+ Privacy (Encryption): by using encryption (56-bit DES encryption, for example) to encrypt the contents of a packet.

Note: Although SNMPv3 offers better security, SNMPv2c is still more common.
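
A sketch of an SNMPv3 configuration at the authPriv level (the group name, user name, and passwords are made up):

snmp-server group NETGROUP v3 priv
snmp-server user netadmin NETGROUP v3 auth sha AuthPass123 priv aes 128 PrivPass123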

Question 7

Question 8

Explanation

The command “show snmp user” displays information about the configured characteristics of SNMP users. The following example specifies the username as abcd with authentication method of MD5 and encryption method of 3DES.

Router#show snmp user abcd
User name: abcd
Engine ID: 00000009020000000C025808
storage-type: nonvolatile active access-list: 10
Rowstatus: active
Authentication Protocol: MD5
Privacy protocol: 3DES
Group name: VacmGroupName
Group name: VacmGroupName

Reference: http://www.cisco.com/c/en/us/td/docs/ios/12_4t/12_4t2/snmpv3ae.html

Question 9

Question 10

Explanation

The snmp-server host global configuration command is used to specify the recipient of an SNMP notification operation, in this case 192.168.1.3. In other words, traps of the local router will be sent to 192.168.1.3. Therefore this command is often used to manage the device.

Question 11

Explanation

The SNMP Manager can send GET, GET-NEXT and SET messages to SNMP Agents. The Agents run on the monitored devices while the Manager is the monitoring device. In the picture below, the Router, Server and Multilayer Switch are monitored devices.

SNMP_Messages_Flow.jpg

Syslog Questions

July 14th, 2019 digitaltut 4 comments

Question 1

Explanation

The Message Logging is divided into 8 levels as listed below:

Level Keyword Description
0 emergencies System is unusable
1 alerts Immediate action is needed
2 critical Critical conditions exist
3 errors Error conditions exist
4 warnings Warning conditions exist
5 notification Normal, but significant, conditions exist
6 informational Informational messages
7 debugging Debugging messages

Level 0 (emergencies) is the most severe level and level 7 (debugging) is the least severe. If you specify a level with the “logging console level” command, that level and all the more severe levels will be displayed. For example, by using the “logging console warnings” command, all the logging of emergencies, alerts, critical, errors and warnings will be displayed.

Question 2

Explanation

Syslog can be configured to send messages to an external server for storing. The storage size does not depend on the router’s resources and is limited only by the available disk space on the external Syslog server. For example, to instruct our router to send Syslog messages to 192.168.1.2 we can simply use only this command (all parameters are at default values):

R1(config)#logging 192.168.1.2

We cannot use the other logging destinations (terminal, buffer, console) to store messages on an external server.

Question 3

Explanation

The “service timestamps log uptime” enables timestamps on log messages, showing the time since the system was rebooted. For example:

00:00:46: %LINK-3-UPDOWN: Interface Port-channel1, changed state to up
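
For comparison, log messages can instead be stamped with the actual date and time:

service timestamps log datetime msec localtime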

Question 4

Explanation

Syslog levels are listed below:

Level Keyword Description
0 emergencies System is unusable
1 alerts Immediate action is needed
2 critical Critical conditions exist
3 errors Error conditions exist
4 warnings Warning conditions exist
5 notification Normal, but significant, conditions exist
6 informational Informational messages
7 debugging Debugging messages

Number “5” in “%LINEPROTO-5- UPDOWN” is the severity level of this message so in this case it is “notification”.

Question 5

Explanation

Maybe this question wants to mention about this Syslog message:

00:00:47: %LINK-3-UPDOWN: Interface GigabitEthernet1/0/1, changed state to up

-> The log severity level of this message is 3 (errors)

Question 6

Question 7

Question 8

Explanation

“Increase the logging history” here means the same as “increase the logging buffer”. The default buffer size is 4096 bytes. By increasing the logging buffer size we can see more logging history. But do not make the buffer size too large because the access point could run out of memory for other tasks. We can write the logging messages to an external syslog server instead.

NTP Questions

July 13th, 2019 digitaltut 37 comments

Question 1

Explanation

The command “ntp master [stratum]” is used to configure the device as an authoritative NTP server. You can specify a different stratum level from which NTP clients get their time synchronized. The range is from 1 to 15.

The stratum levels define the distance from the reference clock. A reference clock is a stratum 0 device that is assumed to be accurate and has little or no delay associated with it. Stratum 0 servers cannot be used on the network but they are directly connected to computers which then operate as stratum-1 servers. A stratum 1 time server acts as a primary network time standard.

ntp-stratum.jpg

A stratum 2 server is connected to the stratum 1 server; then a stratum 3 server is connected to the stratum 2 server and so on. A stratum 2 server gets its time via NTP packet requests from a stratum 1 server. A stratum 3 server gets its time via NTP packet requests from a stratum-2 server… A stratum server may also peer with other stratum servers at the same level to provide more stable and robust time for all devices in the peer group (for example a stratum 2 server can peer with other stratum 2 servers).
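
For reference, a minimal configuration with one authoritative server and one client (the addresses are examples):

! On the authoritative router
ntp master 3
! On a client router
ntp server 10.1.1.1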

Question 2

Explanation

The “ntp broadcast client” command is used under interface mode to allow the device to receive Network Time Protocol (NTP) broadcast packets on that interface

Reference: http://www.cisco.com/c/en/us/td/docs/ios/12_2/configfun/command/reference/ffun_r/frf012.html#wp1123148
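
A sketch of NTP broadcast operation on a shared LAN segment (the interface names are assumptions): the server broadcasts NTP on its interface while the client listens on its own.

! Server side
interface GigabitEthernet0/1
 ntp broadcast
! Client side
interface GigabitEthernet0/1
 ntp broadcast client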

Question 3

Question 4

Explanation

The stratum levels define the distance from the reference clock. A reference clock is a stratum 0 device that is assumed to be accurate and has little or no delay associated with it. Stratum 0 servers cannot be used on the network but they are directly connected to computers which then operate as stratum-1 servers. A stratum 1 time server acts as a primary network time standard.

A stratum 2 server is connected to the stratum 1 server; then a stratum 3 server is connected to the stratum 2 server and so on. A stratum 2 server gets its time via NTP packet requests from a stratum 1 server. A stratum 3 server gets its time via NTP packet requests from a stratum-2 server. Therefore the lower the stratum level is, the more accurate the NTP server is. When multiple NTP servers are configured, the client will prefer the NTP server with the lowest stratum level.

NTP uses User Datagram Protocol (UDP) port 123.

Question 5

Explanation

First we need to understand some basic knowledge about NTP. There are two types of NTP messages:
+ Control messages: for reading and writing internal NTP variables and obtain NTP status information. It is not used for time synchronization so we will not care about them in this question.
+ Request/Update messages: for time synchronization. Request messages ask for synchronization information while Update messages contains synchronization information and may change the local clock.

There are four types of NTP access-groups exist to control traffic to the NTP services:
+ Peer: controls which remote devices the local device may synchronize. In other words, it permits the local router to respond to NTP request and accept NTP updates.
+ Serve: controls which remote devices may synchronize with the local device. In other words, it permits the local router to reply to NTP requests, but drops NTP update. This access-group allows control messages.
+ Serve-only: controls which remote devices may synchronize with the local device. In other words, it permits the local router to respond to NTP requests only. This access-group denies control messages.
+ Query-only: only accepts control messages. No response to NTP requests are sent, and no local system time synchronization with remote system is permitted.

From my experience, you just need to remember:
+ Peer: serve and to be served
+ Serve: serve but not to be served

Therefore in this question:
+ The “ntp access-group peer 2” command says “I can only accept NTP updates and respond to NTP (time) requests from 192.168.1.4“. -> Answer F is correct while answer D is not correct.
+ The “ntp access-group serve 1” command says “I can only reply to time requests (but cannot accept time update) from 192.168.1.1 ” -> Answer A is correct*

The “ntp master 4” indicates it is running as a time source with stratum level of 4 -> Answer B is not correct while answer C is correct.

Answer E is not correct because it can accept time requests from both 192.168.1.1 and 192.168.1.4.

*Note: In fact answer A is incorrect too because the local router can accept time requests from both 192.168.1.1 and 192.168.1.4 (not only from 192.168.1.1). Maybe this is a mistake in this question.

Question 6

Explanation

To control access to Network Time Protocol (NTP) services on the system, use the ntp access-group command in global configuration mode.

NTP supports “Control messages” and “Request/Update messages”.

+ Control messages are for reading and writing internal NTP variables and obtaining NTP status information. Not to deal with time synchronization itself.
+ NTP request/Update messages are used for actual time synchronization. Request packet obviously asks for synchronization information, and update packet contains synchronization information, and may change local clock.

When synchronizing system clocks on Cisco IOS devices only Request/Update messages are used. Therefore in this question we only care about “NTP Update message”.

Syntax:

ntp access-group [ipv4 | ipv6] {peer | query-only | serve | serve-only} {access-list-number | access-list-number-expanded | access-list-name} [kod]

+ Peer: permits router to respond to NTP requests and accept NTP updates. NTP control queries are also accepted. This is the only class which allows a router to be synchronized by other devices -> not correct. In other words, the peer keyword enables the device to receive time requests and NTP control queries and to synchronize itself to the servers specified in the access list.
+ Serve-only: permits the router to respond to NTP requests only. It rejects attempts to synchronize the local system time and does not accept control queries. In other words, the serve-only keyword enables the device to receive only time requests from servers specified in the access list.
+ Serve: permits router to reply to NTP requests, but rejects NTP updates (e.g. replies from a server or update packets from a peer). Control queries are also permitted. In other words, the serve keyword enables the device to receive time requests and NTP control queries from the servers specified in the access list but not to synchronize itself to the specified servers -> this option is surely correct.

In summary, the answer “serve” is surely correct but the answer “serve-only” seems to be correct too (although the definition is not clear).

An example of using the “ntp access-group” command is shown below:

R1(config)#ntp server 178.240.12.1
R1(config)#access-list 2 permit 165.16.4.1 0.0.0.0
R1(config)#access-list 2 deny any
R1(config)#ntp access-group peer 2 // peer only to 165.16.4.1
R1(config)#access-list 3 permit 160.1.0.0 0.0.255.255
R1(config)#access-list 3 deny any
R1(config)#ntp access-group serve-only 3 //provide time services only to internal network 160.1.0.0/16

Reference:

+ http://www.cisco.com/c/en/us/td/docs/ios-xml/ios/bsm/command/bsm-cr-book/bsm-cr-n1.html
+ http://blog.ine.com/2008/07/28/ntp-access-control/

Question 7

Question 8

Explanation

The output indicates that the local device did not receive the NTP update successfully so something went wrong during the transmission.

Question 9

Explanation

NTP operates in four different modes.
+ Server Mode is configured such that a device will synchronize NTP clients. Servers can be configured to synchronize all clients or only a specific group of clients. NTP servers, however, will not accept synchronization information from their clients. This restriction will not allow clients to update or manipulate a server’s time settings.
+ Client Mode is used to allow a device to set its clock by, and be synchronized by, an external time server. NTP clients can be configured to use multiple servers to set their local time and can be configured to give preference to the most accurate time sources available to them. They will not, however, provide synchronization services to any other devices.
+ Peer Mode is when one NTP-enabled device does not have any authority over another. With the peering model, each device will share its time information with its peer. Additionally, each device can also provide time synchronization to the other.
+ Broadcast/Multicast Mode is a special server mode where the NTP server broadcasts its synchronization information to all clients. Broadcast mode requires that clients be on the same subnet as the server, and multicast mode requires that clients and servers have multicast capabilities configured.

Reference: http://www.pearsonitcertification.com/articles/article.aspx?p=1851440

"Interface" is not an NTP mode, so answer A is not correct.

It is certain that in "peer" mode we don't need to use the "trusted-key" command for authentication, so answer C is not correct.
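For reference, a minimal sketch of how each NTP mode is typically configured on Cisco IOS (the IP addresses below are made up):

Router(config)#ntp master 3 //server mode: act as an authoritative time source at stratum 3
Router(config)#ntp server 10.1.1.1 //client mode: synchronize the local clock to 10.1.1.1
Router(config)#ntp peer 10.1.1.2 //peer mode: exchange time with 10.1.1.2 in both directions
Router(config)#interface GigabitEthernet0/0
Router(config-if)#ntp broadcast //broadcast mode: send NTP broadcasts on this interface
//and on a client in the same subnet:
Router(config-if)#ntp broadcast client //listen for NTP broadcasts on this interface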

Question 10

Explanation

An example of the output of this command is shown below:

Router#show ntp associations
      address         ref clock     st  when  poll reach  delay  offset    disp
*~10.1.2.65        10.1.2.33        11    36    64  377    27.9   25.17    30.0
 * master (synced), # master (unsynced), + selected, - candidate, ~ configured

If there's an asterisk (*) next to a configured peer, then you are synced to this peer and using it as the master clock. As long as one peer is the master then everything is fine. However, the key to knowing that NTP is working properly is looking at the value in the reach field.

The reach field is a circular bit buffer. It gives you the status of the last eight NTP messages (eight bits all set to 1 is 377 in octal, so you want to see a reach field value of 377). If an NTP response packet is lost, the missing packet is tracked over the next eight NTP update intervals in the reach field. For more information about this field please read http://www.cisco.com/c/en/us/support/docs/ios-nx-os-software/ios-software-releases-110/15171-ntpassoc.html

Question 11

Question 12

Explanation

The command “ntp master [stratum]” is used to configure the device as an authoritative NTP server. You can specify a different stratum level from which NTP clients get their time synchronized. The range is from 1 to 15.

The stratum levels define the distance from the reference clock. A reference clock is a stratum 0 device that is assumed to be accurate and has little or no delay associated with it. Stratum 0 servers cannot be used on the network but they are directly connected to computers which then operate as stratum-1 servers. A stratum 1 time server acts as a primary network time standard.

ntp-stratum.jpg

A stratum 2 server is connected to the stratum 1 server; then a stratum 3 server is connected to the stratum 2 server and so on. A stratum 2 server gets its time via NTP packet requests from a stratum 1 server. A stratum 3 server gets its time via NTP packet requests from a stratum-2 server… A stratum server may also peer with other stratum servers at the same level to provide more stable and robust time for all devices in the peer group (for example a stratum 2 server can peer with other stratum 2 servers).

NAT Questions

July 12th, 2019 digitaltut 40 comments

Question 1

Question 2

Explanation

First we will not mention about the effect of the “extendable” keyword. So the purpose of the command “ip nat inside source static tcp 192.168.1.50 80 209.165.201.1 8080” is to translate packets on the inside interface with a source IP address of 192.168.1.50 and port 80 to the IP address 209.165.201.1 with port 8080. This also implies that any packet received on the outside interface with a destination address of 209.165.201.1:8080 has the destination translated to 192.168.1.50:80. Therefore answer C is correct.

Answer A is not correct because this command "allows host 192.168.1.50 to access external websites using TCP port 80", not port 8080.

Answer B is not correct because it allows external clients to connect to a web server at 209.165.201.1. The IP addresses of clients should not be 209.165.201.1.

Answer D is not correct because the configuration is correct.

Now we will talk about the keyword “extendable”.

Usually, the "extendable" keyword should be added when the same Inside Local address is mapped to different Inside Global addresses (the IP address of an inside host as it appears to the outside network). An example of this case is when you have two connections to the Internet via two ISPs for redundancy, so you need to map two Inside Global IP addresses to one Inside Local IP address. For example:

nat_extendable.jpg

NAT router:
ip nat inside source static 192.168.1.1 200.1.1.1 extendable
ip nat inside source static 192.168.1.1 200.2.2.2 extendable
//Inside Local: 192.168.1.1 ; Inside Global: 200.1.1.1 & 200.2.2.2

In this case, the traffic from ISP1 and ISP2 to the Server is straightforward: ISP1 will use 200.1.1.1 and ISP2 will use 200.2.2.2 to reach the Server. But how about the traffic from the Server to the ISPs? In other words, how does the NAT router know which IP (200.1.1.1 or 200.2.2.2) it should use to send traffic to ISP1 & ISP2 (this is what "ambiguous from the inside" refers to)? We tested this in GNS3 and it worked correctly! So we guess the NAT router compares the Inside Global addresses with the IP addresses of the "ip nat outside" interfaces and chooses the most suitable one to forward traffic.

This is what Cisco explained about “extendable” keyword:

“They might also want to define static mappings for a particular host using each provider’s address space. The software does not allow two static translations with the same local address, though, because it is ambiguous from the inside. The router will accept these static translations and resolve the ambiguity by creating full translations (all addresses and ports) if the static translations are marked as “extendable”. For a new outside-to-inside flow, the appropriate static entry will act as a template for a full translation. For a new inside-to-outside flow, the dynamic route-map rules will be used to create a full translation”.

(Reference: http://www.cisco.com/en/US/technologies/tk648/tk361/tk438/technologies_white_paper09186a0080091cb9.html)

But it is unclear what will happen if we don't use a route-map.

Question 3

Explanation

The command "ip nat inside source list 1 int s0/0 overload" translates all source addresses that pass access list 1, which means all IP addresses, into the address assigned to the S0/0 interface. The overload keyword allows mapping multiple IP addresses to a single registered IP address (many-to-one) by using different ports.
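For reference, a minimal PAT (NAT overload) sketch built around this command; the addresses and interface roles below are our assumptions:

Router(config)#access-list 1 permit 192.168.1.0 0.0.0.255
Router(config)#ip nat inside source list 1 interface Serial0/0 overload
Router(config)#interface FastEthernet0/0
Router(config-if)#ip address 192.168.1.1 255.255.255.0
Router(config-if)#ip nat inside //LAN-facing interface
Router(config-if)#interface Serial0/0
Router(config-if)#ip nat outside //Internet-facing interface whose address is used for the translations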

Question 4

Explanation

The command “ip nat inside source list 10 interface FastEthernet0/1 overload” configures NAT to overload on the address that is assigned to the Fa0/1 interface.

Question 5

Explanation

This is a static NAT command which translates all packets received on the inside interface with a source IP address of 172.16.10.8 and port 8080 to 172.16.10.8 port 80. The purpose of this NAT statement is to redirect TCP traffic to another TCP port.

Question 6

Explanation

NAT64 provides communication between IPv6 and IPv4 hosts by using a form of network address translation (NAT). There are two different forms of NAT64, stateless and stateful:

+ Stateless NAT64: maps the IPv4 address into an IPv6 prefix. As the name implies, it keeps no state. It does not save any IP addresses since every IPv4 address maps to one IPv6 address. Stateless NAT64 does not conserve IPv4 addresses.
+ Stateful NAT64 is a stateful translation mechanism for translating IPv6 addresses to IPv4 addresses, and IPv4 addresses to IPv6 addresses. Like NAT44, it is called stateful because it creates or modifies bindings or session state while performing translation (1:N translation). It supports both IPv6-initiated and IPv4-initiated communications using static or manual mappings. Stateful NAT64 conserves IPv4 addresses.

Reference: http://www.cisco.com/c/en/us/products/collateral/ios-nx-os-software/enterprise-ipv6-solution/white_paper_c11-676278.html
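For reference, a minimal stateful NAT64 sketch on IOS-XE; the interface names, prefixes, ACL and pool names below are made up, and the exact syntax may vary by release:

ipv6 unicast-routing
!
interface GigabitEthernet0/0/0 //IPv6-facing interface
 ipv6 address 2001:DB8:1::1/64
 nat64 enable
!
interface GigabitEthernet0/0/1 //IPv4-facing interface
 ip address 209.165.200.225 255.255.255.224
 nat64 enable
!
ipv6 access-list NAT64-ACL
 permit ipv6 2001:DB8:1::/64 any
!
nat64 prefix stateful 2001:DB8:0:1::/96
nat64 v4 pool NAT64-POOL 203.0.113.1 203.0.113.30
nat64 v6v4 list NAT64-ACL pool NAT64-POOL overload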

Question 7

Explanation

The “ip nat allow-static-host” command enables static IP address support. Dynamic Address Resolution Protocol (ARP) learning will be disabled on this interface, and NAT will control the creation and deletion of ARP entries for the static IP host.

Reference: http://www.cisco.com/c/en/us/td/docs/ios-xml/ios/ipaddr_nat/configuration/12-4/nat-12-4-book/iadnat-addr-consv.html

Question 8

Explanation

Network Address Translation-Protocol Translation (NAT-PT) has been deprecated by the IETF because of its tight coupling with the Domain Name System (DNS) and its general limitations in translation. The IETF proposed NAT64 as the viable successor to NAT-PT.

NAT64 technology facilitates communication between IPv6-only and IPv4-only hosts and networks (whether in a transit, an access, or an edge network). This solution allows both enterprises and ISPs to accelerate IPv6 adoption while simultaneously handling IPv4 address depletion. All viable translation scenarios are supported by NAT64, and therefore NAT64 is becoming the most sought-after translation technology.

Reference: http://www.cisco.com/c/en/us/products/collateral/ios-nx-os-software/enterprise-ipv6-solution/white_paper_c11-676278.html

Question 9

Question 10

Explanation

The syntax should be: ipv6 nat prefix ipv6-prefix / prefix-length (for example: Router# ipv6 nat prefix 2001:DB8::/96)

Question 11

Explanation

From the link: https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/ipaddr_nat/configuration/15-mt/nat-15-mt-book/iadnat64-stateful.pdf

Restrictions for configuring stateful NAT64:

+ Virtual routing and forwarding (VRF)-aware NAT64 is not supported -> Answer A is correct.
+ IP Multicast is not supported -> Answer B is correct.
+ Application-level gateways (ALGs) FTP and ICMP are not supported -> Answer C is not correct.
+ Only TCP and UDP Layer 4 protocols are supported for header translation -> Answer E is not correct.
+ For Domain Name System (DNS) traffic to work, you must have a separate working installation of DNS64 -> This statement means stateful NAT64 supports DNS64, but we cannot conclude it is the only one supported by NAT64. We are not sure, but maybe stateful NAT64 also supports a DNS ALG.

Question 12

Explanation

NAT use four types of addresses:

* Inside local address – The IP address assigned to a host on the inside network. The address is usually not an IP address assigned by the Internet Network Information Center (InterNIC) or service provider. This address is likely to be an RFC 1918 private address.
* Inside global address – A legitimate IP address assigned by the InterNIC or service provider that represents one or more inside local IP addresses to the outside world.
* Outside local address – The IP address of an outside host as it is known to the hosts on the inside network.
* Outside global address – The IP address assigned to a host on the outside network. The owner of the host assigns this address.

NAT_terms_explained.jpg

Question 13

Question 14

IP SLA Questions

July 11th, 2019 digitaltut 51 comments

Question 1

Question 2

Explanation

IP SLA PBR (Policy-Based Routing) Object Tracking allows you to make sure that the next hop is reachable before that route is used. If the next hop is not reachable, another route is used as defined in the PBR configuration. If no other route is present in the route map, the routing table is used.

An example of configuring PBR based on tracking object is shown below:

//Configure and schedule IP SLA operations
ip sla 1
icmp-echo 10.3.3.2
ip sla schedule 1 life forever start-time now
!
// Configure Object Tracking to track the operations
track 1 ip sla 1 reachability
!
//Configure ACL
ip access-list standard ACL
permit ip 10.2.2.0/24 10.1.1.1/32
!
//Configure PBR policing on the router
route-map PBR
match ip address ACL
set ip next-hop verify-availability 10.3.3.2 track 1
set ip next-hop verify-availability 10.3.3.2 track 2 -> Track 2 is not shown here but it is used if track 1 fails
!
//Apply PBR policy on the incoming interface of the router.
interface ethernet 0/0
ip address 10.2.2.1 255.255.255.0
ip policy route-map PBR

Reference: http://www.cisco.com/c/en/us/td/docs/switches/datacenter/sw/6_x/nx-os/IPSLA/configuration/guide/b_Cisco_Nexus_7000_Series_NX-OS_IP_SLAs_Configuration_Guide_rel_6-x/b_Cisco_Nexus_7000_Series_NX-OS_IP_SLAs_Configuration_Guide_rel_6-x_chapter_01000.html

Question 3

Explanation

The keyword “tcp-connect” enables the responder for TCP connect operations. TCP is a connection-oriented transport layer protocol -> C is correct.
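For reference, a minimal tcp-connect sketch; the operation number, addresses, and port below are made up:

//On the source router:
R1(config)#ip sla 20
R1(config-ip-sla)#tcp-connect 10.1.1.2 23
R1(config-ip-sla-tcp)#frequency 60
R1(config)#ip sla schedule 20 life forever start-time now
!
//On the target router, enable the responder for the same address and port:
R2(config)#ip sla responder tcp-connect ipaddress 10.1.1.2 port 23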

Question 4

Explanation

The “num-packets” specifies the number of packets to be sent for a jitter operation.

The “frequency” is the rate (in seconds) at which this IP SLA operation repeats. The “tos” defines a type of service (ToS) byte in the IP header of this IP SLA operation.
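For reference, a minimal udp-jitter sketch using these parameters; the operation number, addresses, and values below are made up:

R1(config)#ip sla 30
R1(config-ip-sla)#udp-jitter 10.1.1.2 65051 num-packets 20 //send 20 packets per operation
R1(config-ip-sla-jitter)#frequency 60 //repeat the operation every 60 seconds
R1(config-ip-sla-jitter)#tos 160 //mark the probe packets with ToS byte 160
R1(config)#ip sla schedule 30 life forever start-time now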

Question 5

Explanation

When enabled, the IP SLAs Responder allows the target device to take two time stamps both when the packet arrives on the interface at interrupt level and again just as it is leaving, eliminating the processing time. At times of high network activity, an ICMP ping test often shows a long and inaccurate response time, while an IP SLAs test shows an accurate response time due to the time stamping on the responder.

An additional benefit of the two time stamps at the target device is the ability to track one-way delay, jitter, and directional packet loss. Because much network behavior is asynchronous, it is critical to have these statistics. However, to capture one-way delay measurements the configuration of both the source device and target device with Network Time Protocol (NTP) is required. Both the source and target need to be synchronized to the same clock source. One-way jitter measurements do not require clock synchronization.

Reference: http://www.cisco.com/c/en/us/td/docs/ios-xml/ios/ipsla/configuration/15-mt/sla-15-mt-book/sla_overview.html

Question 6

Question 7

Question 8

Explanation

Depending on the specific Cisco IOS IP SLAs operation, statistics of delay, packet loss, jitter, packet sequence, connectivity, path, server response time, and download time are monitored within the Cisco device and stored in both CLI and SNMP MIBs.

Reference: http://www.cisco.com/c/en/us/td/docs/ios/12_4/ip_sla/configuration/guide/hsla_c/hsoverv.html

Question 9

Explanation

IP SLAs supports proactive threshold monitoring and notifications for performance parameters such as average jitter, unidirectional latency, bidirectional round-trip time (RTT), and connectivity for most IP SLAs operations. The proactive monitoring capability also provides options for configuring reaction thresholds for important VoIP related parameters including unidirectional jitter, unidirectional packet loss, and unidirectional VoIP voice quality scoring.

IP SLAs reactions are configured to trigger when a monitored value exceeds or falls below a specified level or when a monitored event, such as a timeout or connection loss, occurs. When a monitored value rises above or falls below a configured threshold, IP SLAs can generate a notification (in the form of an SNMP trap) to a network management application or trigger another IP SLA operation to gather more data.

Cisco IOS IP SLAs can send SNMP traps that are triggered by events such as the following:
+ Connection loss
+ Timeout
+ Round-trip time threshold
+ Average jitter threshold
+ One-way packet loss
+ One-way jitter
+ One-way mean opinion score (MOS)
+ One-way latency

Question 10

Explanation

Round-trip time (RTT), also called round-trip delay, is the time required for a packet to travel from a specific source to a specific destination and back again.

An ICMP Path Echo operation measures end-to-end (full path) and hop-by-hop response time (round-trip delay) between a Cisco router and devices using IP. ICMP Path Echo is useful for determining network availability and for troubleshooting network connectivity issues.

Note: ICMP Echo only measures round-trip delay for the full path.

Reference: http://www.cisco.com/c/en/us/td/docs/ios-xml/ios/ipsla/configuration/xe-3s/sla-xe-3s-book/sla_icmp_pathecho.html

Question 11

Explanation

The default route command (at the last line) must include the “track” keyword for the tracking feature to work.

ip route 0.0.0.0 0.0.0.0 172.20.20.2 track 10
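For reference, a fuller sketch of tracking a default route with IP SLA; the backup next hop 192.168.100.2 below is hypothetical:

ip sla 1
 icmp-echo 172.20.20.2
ip sla schedule 1 life forever start-time now
!
track 10 ip sla 1 reachability
!
ip route 0.0.0.0 0.0.0.0 172.20.20.2 track 10
ip route 0.0.0.0 0.0.0.0 192.168.100.2 50 //hypothetical floating static route used when the tracked route is withdrawn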

Question 12

IP SLA Questions 2

July 11th, 2019 digitaltut 36 comments

Question 1

Explanation

User Datagram Protocol (UDP) jitter for VoIP is the most common operation for networks that carry voice traffic, video, or UDP jitter-sensitive applications. It requires Cisco endpoints.

Note: The ICMP jitter operation is similar to the IP SLAs UDP jitter operation but does not require a Cisco endpoint (the UDP jitter operation needs a Cisco device acting as the IP SLA responder to reply to the test packets).

The config below shows an example of configuring UDP Jitter for VoIP:

Router(config)# ip sla 10
//Configures the operation as a jitter (codec) operation that will generate VoIP scores in addition to latency, jitter, and packet loss statistics. Notice that it requires an endpoint.
Router(config-ip-sla)# udp-jitter 209.165.200.225 16384 codec g711alaw advantage-factor 10
//The below configs are only optional
Router(config-ip-sla-jitter)# frequency 30
Router(config-ip-sla-jitter)# history hours-of-statistics-kept 4
Router(config-ip-sla-jitter)# owner admin
Router(config-ip-sla-jitter)# tag TelnetPollServer1
Router(config-ip-sla-jitter)# threshold 10000
Router(config-ip-sla-jitter)# timeout 10000
Router(config-ip-sla-jitter)# tos 160

Reference: http://www.cisco.com/en/US/technologies/tk648/tk362/tk920/technologies_qas0900aecd8017bd5a.html & http://www.cisco.com/c/en/us/td/docs/ios/ipsla/configuration/guide/15_s/sla_15_0s_book/sla_udp_jitter_voip.pdf

Question 2

Explanation

There is no problem with Fa0/0 as the source interface because we want to check the ping from the LAN interface -> A is not correct.

Answer B is not correct because we must track the destination of the primary link, not the backup link.

In this question, R1 pings R2 via its LAN Fa0/0 interface, so R2 (which is an ISP) may not know how to reply, because an ISP usually does not configure a route to a customer's LAN -> C is correct.

There is no problem with the default route -> D is not correct.

For answer E, we need to understand about how timeout and threshold are defined:

Timeout (in milliseconds) sets the amount of time an IP SLAs operation waits for a response to its request packet. In other words, the timeout specifies how long the router should wait for a response to its ping before the operation is considered failed.

Threshold (also in milliseconds) sets the upper threshold value for calculating network monitoring statistics created by an IP SLAs operation. The threshold is used to activate a response to an IP SLA violation, e.g. send an SNMP trap or start a secondary SLA operation. In other words, the threshold value is only used to indicate over-threshold events, which do not affect reachability but may be used to evaluate the proper settings for the timeout command.

For reachability tracking, if the return code is OK or OverThreshold, reachability is up; if not OK, reachability is down.

Therefore in this question we are using "Reachability" tracking (via the command "track 10 ip sla 1 reachability"), so the threshold value is not important and can be ignored -> Answer E is correct. In fact answer E is not ideal, but it is the best option left.

This tutorial can help you revise IP SLA tracking topic: http://www.firewall.cx/cisco-technical-knowledgebase/cisco-routers/813-cisco-router-ipsla-basic.html and http://www.ciscozine.com/using-ip-sla-to-change-routing/

Note: Maybe some of us will wonder why there are these two commands:

R1(config)#ip route 0.0.0.0 0.0.0.0 172.20.20.2 track 10
R1(config)#no ip route 0.0.0.0 0.0.0.0 172.20.20.2

In fact the two commands:

ip route 0.0.0.0 0.0.0.0 172.20.20.2 track 10
ip route 0.0.0.0 0.0.0.0 172.20.20.2

are different. These two static routes can co-exist in the routing table. Therefore if the tracked object goes down, the first route will be removed but the second one still exists, so the backup path is never preferred. That is why we have to remove the second one.

Question 3

Explanation

A primary benefit of Cisco IOS IP SLAs is accuracy, embedded flexibility, and cost-saving, a key component of which is the Cisco IOS IP SLAs responder enabled on the target device. When the responder is enabled, it allows the target device to take two timestamps: when the packet arrives on the interface at interrupt level and again just as it leaves. This eliminates the processing time. This timestamping is made with sub-millisecond granularity. The responder timestamping is very important because all routers and switches in the industry prioritize switching traffic destined for other locations over packets destined for their local IP address (this includes Cisco IOS IP SLAs and ping test packets). Therefore, at times of high network activity, ping tests can show an inaccurately large response time, whereas timestamping on the responder allows a Cisco IOS IP SLAs test to accurately represent the true response time.

Reference: http://www.cisco.com/en/US/technologies/tk648/tk362/tk920/technologies_white_paper0900aecd8017f8c9_ps6602_Products_White_Paper.html

Note: The ICMP echo operation is used to cause ICMP echo requests to be sent to a destination to check connectivity

Question 4

Explanation

The “show ip sla statistics” command displays the current operational status and statistics of all IP SLAs operations or a specified operation so the answer “operation availability” is the best choice here.

Reference: http://www.cisco.com/c/en/us/td/docs/ios/ipsla/command/reference/sla_book/sla_04.html

Question 5

Explanation

You can configure a tracked list of objects with a Boolean expression, a weight threshold, or a percentage threshold.

The example configures track list 1 to track by weight threshold.

Switch(config)# track 1 list threshold weight
Switch(config-track)# object 1 weight 15
Switch(config-track)# object 2 weight 20
Switch(config-track)# object 3 weight 30
Switch(config-track)# threshold weight up 30 down 10

If object 1 and object 2 are down, then track list 1 is up, because object 3 satisfies the up threshold value of 30. But if object 3 is down, both objects 1 and 2 must be up in order to satisfy the threshold weight.

This configuration can be useful if object 1 and object 2 represent two small bandwidth connections and object 3 represents one large bandwidth connection. The configured down 10 value means that once the tracked object is up, it will not go down until the threshold value is equal to or lower than 10, which in this example means that all connections are down.

The example below configures tracked list 2 with three objects and specified percentages to measure the state of the list, with an up threshold of 51 percent and a down threshold of 10 percent:

Switch(config)# track 2 list threshold percentage
Switch(config-track)# object 1
Switch(config-track)# object 2
Switch(config-track)# object 3
Switch(config-track)# threshold percentage up 51 down 10

This means as long as 51% or more of the objects are up, the list will be considered “up”. So in this case if two objects are up, track 2 is considered “up”.

Reference: http://www.cisco.com/c/en/us/td/docs/switches/blades/3020/software/release/12-2_58_se/configuration/guide/3020_scg/swhsrp.pdf

Question 6

Question 7

Question 8

Question 9

Explanation

Maybe this question wants to ask "at which location are IP SLAs usually used to monitor the traffic?", in which case the answer should be the WAN edge, as IP SLA is usually used to track a remote device or service (usually via ping).

NetFlow Questions

July 10th, 2019 digitaltut 21 comments

If you are not sure about NetFlow, please read our NetFlow tutorial.

Quick review:

NetFlow is a network protocol for reporting information about the traffic on a router, switch, or other network device. NetFlow collects and summarizes the data that is carried over a device and then transmits that summary to a NetFlow collector for storage and analysis. An IP flow is based on a set of five, and up to seven, IP packet attributes, which may include the following:
+ Destination IP address
+ Source IP address
+ Source port
+ Destination port
+ Layer 3 protocol type
+ Class of Service (optional)
+ Router or switch interface (optional)

Question 1

Explanation

The “show ip flow export” command is used to display the status and the statistics for NetFlow accounting data export, including the main cache and all other enabled caches. An example of the output of this command is shown below:

Router# show ip flow export
Flow export v5 is enabled for main cache
Exporting flows to 10.51.12.4 (9991) 10.1.97.50 (9111)
Exporting using source IP address 10.1.97.17
Version 5 flow records
11 flows exported in 8 udp datagrams
0 flows failed due to lack of export packet
0 export packets were sent up to process level
0 export packets were dropped due to no fib
0 export packets were dropped due to adjacency issues
0 export packets were dropped due to fragmentation failures
0 export packets were dropped due to encapsulation fixup failures
0 export packets were dropped enqueuing for the RP
0 export packets were dropped due to IPC rate limiting
0 export packets were dropped due to output drops

The “output drops” line indicates the total number of export packets that were dropped because the send queue was full while the packet was being transmitted.

Reference: http://www.cisco.com/en/US/docs/ios/12_3t/netflow/command/reference/nfl_a1gt_ps5207_TSD_Products_Command_Reference_Chapter.html#wp1188401

Question 2

Explanation

In general, NetFlow requires CEF to be configured in most recent IOS releases. CEF decides which interface the traffic is sent out of. With CEF disabled, the router will not have a specific destination interface in the NetFlow report packets. Therefore a NetFlow collector cannot show the OUT traffic for the interface.

Question 3

Explanation

This command is used to display the current status of a specific flow exporter, in this case Flow_Exporter-1. For example:

N7K1# show flow export
Flow exporter Flow_Exporter-1:
    Description: Fluke Collector
    Destination: 10.255.255.100
    VRF: default (1)
    Destination UDP Port 2055
    Source Interface Vlan10 (10.10.10.5)
    Export Version 9
    Exporter Statistics
        Number of Flow Records Exported 726
        Number of Templates Exported 1
        Number of Export Packets Sent 37
        Number of Export Bytes Sent 38712
        Number of Destination Unreachable Events 0
        Number of No Buffer Events 0
        Number of Packets Dropped (No Route to Host) 0
        Number of Packets Dropped (other) 0
        Number of Packets Dropped (LC to RP Error) 0
        Number of Packets Dropped (Output Drops) 0
        Time statistics were last cleared: Thu Feb 15 21:12:06 2015

Question 4

Explanation

The sampling mode determines the algorithm that selects a subset of traffic for NetFlow processing. In the random sampling mode, incoming packets are randomly selected so that one out of each n sequential packets is selected on average for NetFlow processing. For example, if you set the sampling rate to 1 out of 100 packets, then NetFlow might sample the 5th, 120th, 299th, 302nd, and so on packets. This sample configuration provides NetFlow data on 1 percent of total traffic. The n value is a parameter from 1 to 65535 packets that you can configure.

From the above output we can learn that the number of packets that have been sampled is 10. The sampling mode is "random sampling mode" and the sampling interval is 100 (NetFlow samples 1 out of every 100 packets).

Reference: http://www.cisco.com/c/en/us/td/docs/ios/12_0s/feature/guide/nfstatsa.html
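For reference, a minimal Random Sampled NetFlow sketch that matches a 1-out-of-100 sampling rate; the map and interface names below are made up:

Router(config)#flow-sampler-map MY-SAMPLER
Router(config-sampler)#mode random one-out-of 100 //sample one packet out of every 100 on average
Router(config)#interface GigabitEthernet0/0
Router(config-if)#flow-sampler MY-SAMPLER //apply the sampler to the interface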

Question 5

Explanation

The “ip flow-export destination 10.10.10.1 5858” command is used to export the information captured by the “ip flow-capture” command to the destination 10.10.10.1. “5858” is the UDP port to which NetFlow packets are sent (default is 2055). The syntax of this command is:

ip flow-export destination ip-address [udp-port] [version 5 {origin-as | peer-as}]

Question 6

Explanation

Flow monitors are the Flexible NetFlow component that is applied to interfaces to perform network traffic monitoring. Flow monitors consist of a record and a cache. You add the record to the flow monitor after you create the flow monitor. The flow monitor cache is automatically created at the time the flow monitor is applied to the first interface. Flow data is collected from the network traffic during the monitoring process based on the key and nonkey fields in the record, which is configured for the flow monitor and stored in the flow monitor cache.
For example, the following example creates a flow monitor named FLOW-MONITOR-1 and enters Flexible NetFlow flow monitor configuration mode:
Router(config)# flow monitor FLOW-MONITOR-1
Router(config-flow-monitor)#

(Reference: http://www.cisco.com/c/en/us/td/docs/ios/fnetflow/command/reference/fnf_book/fnf_01.html#wp1314030)
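For reference, a minimal end-to-end Flexible NetFlow sketch; the record, exporter, and monitor names plus the collector address below are made up:

flow record RECORD-1
 match ipv4 source address
 match ipv4 destination address
 collect counter bytes
 collect counter packets
!
flow exporter EXPORTER-1
 destination 10.0.0.100 //NetFlow collector
 transport udp 2055
!
flow monitor FLOW-MONITOR-1
 record RECORD-1
 exporter EXPORTER-1
!
interface GigabitEthernet0/0
 ip flow monitor FLOW-MONITOR-1 input //the monitor cache is created when the monitor is applied here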

Question 7

Question 8

Explanation

The following is an example of configuring an interface to capture flows into the NetFlow cache. CEF followed by NetFlow flow capture is configured on the interface:

Router(config)# ip cef
Router(config)# interface ethernet 1/0
Router(config-if)# ip flow ingress
or
Router(config-if)# ip route-cache flow

Note: Either the ip flow ingress or the ip route-cache flow command can be used, depending on the Cisco IOS Software version. The ip flow ingress command is available in Cisco IOS Software Release 12.2(15)T or later.

Reference: http://www.cisco.com/c/en/us/products/collateral/ios-nx-os-software/ios-netflow/prod_white_paper0900aecd80406232.html

Question 9

Question 10

Explanation

There are two primary methods to access NetFlow data: the Command Line Interface (CLI) with show commands or utilizing an application reporting tool. If you are interested in an immediate view of what is happening in your network, the CLI can be used. The other choice is to export NetFlow to a reporting server or what is called the “NetFlow collector”.

Reference: http://www.cisco.com/c/en/us/products/collateral/ios-nx-os-software/ios-netflow/prod_white_paper0900aecd80406232.html

Question 11

Explanation

NetFlow collects statistics about traffic that flows through the router. NetFlow Data Export (NDE) enables you to export those statistics to an external data collector for analysis.

An example of configuring NetFlow data exporting is shown below:

Router(config)#interface fa0/1
Router(config-if)#ip route-cache flow
Router(config-if)#exit
Router(config)#ip flow-export destination 10.1.1.1 2055
Router(config)#ip flow-export source fa0/2 //NetFlow will use Fa0/2 as the source IP address for the UDP datagrams sent to the NetFlow Collector
Router(config)#ip flow-export version 5
Router(config)#ip flow-cache timeout active 1 //export flow records every minute.

The most important parameter when configuring NetFlow is the destination where NetFlow sends data to. Other parameters can be ignored and they will use default values (except the command “ip route-cache flow” to enable NetFlow).

Question 12

Explanation

Below is an example of the “show ip cache flow” output:

show_ip_cache_flow.jpg

Information provided includes the packet size distribution (the answer says "IP packet distribution" but it probably means "IP packet size distribution"); basic statistics about the number of flows and the export timer setting; a view of the protocol distribution statistics; and the NetFlow cache.

Also we can see the flow samples for TCP and UDP protocols (including Total Flows, Flows/Sec, Packets/Flow…).

Question 13

Explanation

NetFlow_example.jpg

NetFlow Collector: collects flow records sent from the NetFlow exporters, parsing and storing the flows. Usually a collector is a separate software running on a network server. NetFlow records are exported to a NetFlow collector using User Datagram Protocol (UDP).

Question 14

Explanation

To configure multiple NetFlow export destinations to a router, use the following commands in global configuration mode:

Step 1: Router(config)# ip flow-export destination ip-address udp-port
Step 2: Router(config)# ip flow-export destination ip-address udp-port

The following example enables the exporting of information in NetFlow cache entries:

ip flow-export destination 10.42.42.1 9991
ip flow-export destination 10.0.101.254 1999

Reference: https://www.cisco.com/c/en/us/td/docs/ios/12_0s/feature/guide/12s_mdnf.html

Question 15

Explanation

The distinguishing feature of the NetFlow Version 9 format is that it is template based -> Answer A is correct.

Reference: https://www.cisco.com/en/US/technologies/tk648/tk362/technologies_white_paper09186a00800a3db9.html

Export bandwidth increases for version 9 (because of template flowsets) versus version 5 -> Answer D is correct.

Version 9 slightly decreases overall performance, because generating and maintaining valid template flowsets requires additional processing -> Answer E is not correct.

Reference: https://www.cisco.com/c/en/us/td/docs/ios/12_0s/feature/guide/nfexpfv9.html

Question 16

Explanation

MPLS-aware NetFlow uses the NetFlow Version 9 export format. MPLS-aware NetFlow exports up to three labels of interest from the incoming label stack, the IP address associated with the top label, as well as traditional NetFlow data.

Reference: https://www.cisco.com/c/en/us/td/docs/ios/12_0s/feature/guide/fsmnf24.html

Troubleshooting Questions

July 9th, 2019 digitaltut 6 comments

Question 1

Explanation

The "show memory allocating-process table" command displays statistics on allocated memory with the corresponding allocating processes. This command can also be used to find memory leaks. A memory leak occurs when a process requests or allocates memory and then forgets to free (de-allocate) it when it has finished that task.

Note: In fact the correct command should be “show memory allocating-process totals” (not “table”)

The “show memory summary” command displays a summary of all memory pools and memory usage per Alloc PC (address of the system call that allocated the block). An example of the output of this command is shown below:

show_memory_summary.jpg

Legend:

+ Total: the total amount of memory available after the system image loads and builds its data structures.
+ Used: the amount of memory currently allocated.
+ Free: the amount of memory currently free.
+ Lowest: the lowest amount of free memory recorded by the router since it was last booted.
+ Largest: the largest free memory block currently available.

Note: The show memory allocating-process totals command contains the same information as the first three lines of the show memory summary command.

An example of a high memory usage problem is a large amount of free memory but a small value in the "Lowest" column. In this case, a normal or abnormal event (for example, a large routing instability) caused the router to use an unusually large amount of processor memory for a short period of time, during which the memory ran out.

The show memory dead command is only used to view the memory allocated to a process which has terminated. The memory allocated to this process is reclaimed by the kernel and returned to the memory pool by the router itself when required. This is the way IOS handles memory. A memory block is considered as dead if the process which created the block exits (no longer running).

The command show memory events does not exist.

Reference: http://www.cisco.com/c/en/us/td/docs/ios/12_2/configfun/command/reference/ffun_r/frf013.html and http://www.cisco.com/c/en/us/support/docs/ios-nx-os-software/ios-software-releases-121-mainline/6507-mallocfail.html

Question 2

Explanation

A core dump is a file containing a process's address space (memory) at the moment the process terminates unexpectedly; it is used to identify the cause of the crash.
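For reference, a minimal sketch of enabling a core dump on IOS; the server address and file name below are made up:

Router(config)#exception protocol tftp
Router(config)#exception dump 10.1.1.100 //write the core dump to this server when the router crashes
Router(config)#exception core-file R1-core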

Question 3

Question 4

Miscellaneous Questions

July 8th, 2019 digitaltut 32 comments

Question 1

Explanation

The command “clear ip route” clears one or more routes from both the unicast RIB (IP routing table) and all the module Forwarding Information Bases (FIBs).

Question 2

Explanation

The prefix-list "ip prefix-list name permit 10.8.0.0/16 ge 24 le 24" means:
+ Check the first 16 bits of the prefix; they must be 10.8
+ The subnet mask must be greater than or equal to 24
+ The subnet mask must be less than or equal to 24

-> The subnet mask must be exactly /24

Therefore the prefixes matched by the above ip prefix-list are of the form 10.8.x.x/24
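For reference, a quick sketch (the list name and sample prefixes are ours):

ip prefix-list TEST permit 10.8.0.0/16 ge 24 le 24
//matches 10.8.1.0/24 and 10.8.200.0/24, but not 10.8.0.0/16 or 10.8.1.128/25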

Question 3

Explanation

This is a new user (client) that has not been configured to accept an SSL VPN connection. That user must open a web browser, enter the URL, and log in successfully to be authenticated. A small piece of software will also be downloaded and installed on the client computer the first time. The next time, the user can access file shares on that network normally.

Question 4

Explanation

Fragmentation and Path Maximum Transmission Unit Discovery (PMTUD) is a standardized technique to determine the maximum transmission unit (MTU) size on the network path between two hosts, usually with the goal of avoiding IP fragmentation. PMTUD was originally intended for routers in IPv4. However, all modern operating systems use it on endpoints.

Note: IP fragmentation involves breaking a datagram into a number of pieces that can be reassembled later.

Question 5

Explanation

Bandwidth-delay product (BDP) is the maximum amount of data “in-transit” at any point in time, between two endpoints. In other words, it is the amount of data “in flight” needed to saturate the link. You can think the link between two devices as a pipe. The cross section of the pipe represents the bandwidth and the length of the pipe represents the delay (the propagation delay due to the length of the pipe).

Therefore the Volume of the pipe = Bandwidth x Delay. The volume of the pipe is also the BDP.

Bandwidth-delay_Product.jpg

Return to our question, the formula to calculate BDP is:

BDP (bits) = total available bandwidth (bits/sec) * round trip time (sec) = 64,000 * 3 = 192,000 bits

-> BDP (bytes) = 192,000 / 8 = 24,000 bytes

Therefore we need 24KB to fulfill this link.

For your information, BDP is very important in TCP communication as it optimizes the use of bandwidth on a link. As you know, a disadvantage of TCP is that it has to wait for an acknowledgment from the receiver before sending more data. The waiting time may be long, and we may not utilize the full bandwidth of the link for the transmission.

Bandwidth-delay_Product_Wasted.jpg

Based on BDP, the sending host can increase the amount of data sent on a link (usually by increasing the window size). In other words, the sending host can fill the whole pipe with data so that no bandwidth is wasted.

Bandwidth-delay_Product_Optimized.jpg

Question 6

Question 7

Explanation

Asymmetric routing is the scenario in which outgoing packets take one path while returning packets take another path. VRRP can cause asymmetric routing to occur, for example:

R1 and R2 are the two routers in the local internal LAN network that are running VRRP. R1 is the master router and R2 is the backup router.

These two routers are connected to an ISP gateway router, by using BGP. This topology provides two possible outgoing and incoming paths for the traffic.

Suppose the outgoing traffic is sent through R1 but a VRRP failover occurs and R2 becomes the new master router -> traffic now passes through R2 instead -> asymmetric routing occurs.
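For reference, a minimal VRRP sketch for the scenario above; the addresses and priorities below are made up:

//R1 (master):
interface GigabitEthernet0/0
 ip address 192.168.1.2 255.255.255.0
 vrrp 1 ip 192.168.1.1 //virtual gateway address used by the LAN hosts
 vrrp 1 priority 110
!
//R2 (backup):
interface GigabitEthernet0/0
 ip address 192.168.1.3 255.255.255.0
 vrrp 1 ip 192.168.1.1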

Question 8

Question 9

Drag and Drop

July 7th, 2019 digitaltut 85 comments

Question 1

Explanation

NAT64 provides communication between IPv6 and IPv4 hosts by using a form of network address translation (NAT). NAT64 requires a dedicated prefix, called the NAT64 prefix, to recognize which hosts need IPv4-IPv6 translation. The NAT64 prefix can be a network-specific prefix (NSP), which is configured by a network administrator, or the well-known prefix (64:FF9B::/96). When a NAT64 router receives a packet whose destination address starts with the NAT64 prefix, it will process that packet with NAT64.

NAT64 is not as simple as IPv4 NAT, which only translates the source or destination IPv4 address. NAT64 translates nearly everything (source & destination IP addresses, port numbers, IPv4/IPv6 headers... which together make up a session) from IPv4 to IPv6 and vice versa. So NAT64 "modifies the session during translation".

Question 2

Explanation

The order of the BGP states is: Idle -> Connect -> (Active) -> OpenSent -> OpenConfirm -> Established

+ Idle: No peering; router is looking for neighbor. Idle (admin) means that the neighbor relationship has been administratively shut down.
+ Connect: BGP waits for the TCP three-way handshake to complete. If it completes, the session moves to the OpenSent state; if it fails, it moves to the Active state.
+ Active: BGP tries another TCP handshake to establish a connection with the remote BGP neighbor. If it is successful, it will move to the OpenSent state. If the ConnectRetry timer expires then it will move back to the Connect state. Note: Active is not a good state.
+ OpenSent: An open message was sent to try to establish the peering.
+ OpenConfirm: Router has received a reply to the open message.
+ Established: Routers have a BGP peering session. This is the desired state.

Reference: http://www.ciscopress.com/articles/article.asp?p=1565538&seqNum=3

Question 3

Explanation

The Challenge Handshake Authentication Protocol (CHAP) verifies the identity of the peer by means of a three-way handshake. These are the general steps performed in CHAP:
1) After the LCP (Link Control Protocol) phase is complete, and CHAP is negotiated between both devices, the authenticator sends a challenge message to the peer.
2) The peer responds with a value calculated through a one-way hash function (Message Digest 5 (MD5)).
3) The authenticator checks the response against its own calculation of the expected hash value. If the values match, the authentication is successful. Otherwise, the connection is terminated.

This authentication method depends on a “secret” known only to the authenticator and the peer. The secret is not sent over the link. Although the authentication is only one-way, you can negotiate CHAP in both directions, with the help of the same secret set for mutual authentication.

Reference: http://www.cisco.com/c/en/us/support/docs/wan/point-to-point-protocol-ppp/25647-understanding-ppp-chap.html

For more information about CHAP challenge please read our PPP tutorial.
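For reference, a minimal two-router CHAP sketch; the hostnames, interface, and secret below are made up. The username on each side must match the peer's hostname and the secret must be identical on both routers:

//On R1:
hostname R1
username R2 password MYSECRET
interface Serial0/0
 encapsulation ppp
 ppp authentication chap
!
//On R2:
hostname R2
username R1 password MYSECRET
interface Serial0/0
 encapsulation ppp
 ppp authentication chap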

Question 5

Explanation

AAA offers different solutions that provide access control to network devices. The following services are included within its modular architectural framework:
+ Authentication – The process of validating users based on their identity and predetermined credentials, such as passwords and other mechanisms like digital certificates. Authentication controls access by requiring valid user credentials, which are typically a username and password. With RADIUS, the ASA supports PAP, CHAP, MS-CHAPv1, and MS-CHAPv2, which means Authentication supports encryption.
+ Authorization – The method by which a network device assembles a set of attributes that regulates what tasks the user is authorized to perform. These attributes are measured against a user database. The results are returned to the network device to determine the user’s qualifications and restrictions. This database can be located locally on Cisco ASA or it can be hosted on a RADIUS or Terminal Access Controller Access-Control System Plus (TACACS+) server. In summary, Authorization controls access per user after users authenticate.
+ Accounting – The process of gathering and sending user information to an AAA server used to track login times (when the user logged in and logged off) and the services that users access. This information can be used for billing, auditing, and reporting purposes.
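For reference, a minimal IOS AAA sketch using TACACS+; the server address, key, and method lists below are made up, and ASA syntax differs slightly:

aaa new-model
aaa authentication login default group tacacs+ local //validate user credentials
aaa authorization exec default group tacacs+ local //control what the user may do after login
aaa accounting exec default start-stop group tacacs+ //record login/logout times for auditing
!
tacacs server TAC1
 address ipv4 10.1.1.50
 key SECRET-KEY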

Question 6

Question 7

Question 8

Explanation

NAT64 provides communication between IPv6 and IPv4 hosts by using a form of network address translation (NAT). There are two different forms of NAT64, stateless and stateful:

+ Stateless NAT64: maps the IPv4 address into an IPv6 prefix. As the name implies, it keeps no state. It does not save any IP addresses since every IPv4 address maps to one IPv6 address. Stateless NAT64 does not conserve IPv4 addresses.
+ Stateful NAT64 is a stateful translation mechanism for translating IPv6 addresses to IPv4 addresses, and IPv4 addresses to IPv6 addresses. Like NAT44, it is called stateful because it creates or modifies bindings or session state while performing translation (1:N translation). It supports both IPv6-initiated and IPv4-initiated communications using static or manual mappings. Stateful NAT64 conserves IPv4 addresses.

NPTv6 stands for IPv6-to-IPv6 Network Prefix Translation. It is a form of NAT for IPv6 and it supports one-to-one translation between inside and outside addresses.

Question 9

Question 10

Question 11

Question 12

Question 13

Drag and Drop 2

July 7th, 2019 digitaltut 48 comments

Question 1

Question 2

Question 3

Explanation

The most common reason for excessive unicast flooding in steady-state Catalyst switch networks is the lack of proper host port configuration. Hosts, servers, and any other end-devices do not need to participate in the STP process; therefore, the link up and down states on the respective NIC interfaces should not be considered an STP topology change.

Reference: http://www.ciscopress.com/articles/article.asp?p=336872
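For reference, a minimal host-port sketch that avoids unnecessary STP topology changes; the interface and VLAN below are made up:

interface GigabitEthernet0/1
 switchport mode access
 switchport access vlan 10
 spanning-tree portfast //host port: link flaps no longer generate STP topology change notifications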

Question 4

Question 5

Question 6

Explanation

The general rule when applying access lists is to apply standard IP access lists as close to the destination as possible and to apply extended access lists as close to the source as possible. The reasoning for this rule is that because standard access lists lack granularity, it is better to implement them as close to the destination as possible; extended access lists have more potential granularity, so they are better implemented close to the source.

Reference: http://www.ciscopress.com/articles/article.asp?p=1697887

Reflexive ACLs allow IP packets to be filtered based on upper-layer session information. They are generally used to allow outbound traffic and to limit inbound traffic in response to sessions that originate inside the router. Reflexive ACLs can be defined only with extended named IP ACLs. They cannot be defined with numbered or standard named IP ACLs, or with other protocol ACLs. Reflexive ACLs can be used in conjunction with other standard and static extended ACLs. Outbound ACL will have the ‘reflect’ keyword. It is the ACL that matches the originating traffic. Inbound ACL will have the ‘evaluate’ keyword. It is the ACL that matches the returning traffic.

Lock and key, also known as dynamic ACLs, was introduced in Cisco IOS Software Release 11.1. This feature is dependent on Telnet, authentication (local or remote), and extended ACLs.
Lock and key configuration starts with the application of an extended ACL to block traffic through the router. Users that want to traverse the router are blocked by the extended ACL until they Telnet to the router and are authenticated. The Telnet connection then drops and a single-entry dynamic ACL is added to the extended ACL that exists. This permits traffic for a particular time period; idle and absolute timeouts are possible.

Reference: https://www.cisco.com/c/en/us/support/docs/security/ios-firewall/23602-confaccesslists.html
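For reference, a minimal reflexive ACL sketch; the ACL names and interface below are made up:

ip access-list extended OUTBOUND-FILTER
 permit tcp any any reflect MIRROR //the outbound ACL carries the 'reflect' keyword and matches originating traffic
 permit udp any any reflect MIRROR
!
ip access-list extended INBOUND-FILTER
 evaluate MIRROR //the inbound ACL carries the 'evaluate' keyword and permits only the returning traffic
!
interface Serial0/0
 ip access-group OUTBOUND-FILTER out
 ip access-group INBOUND-FILTER in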

OSPF Evaluation Sim

February 9th, 2019 digitaltut 577 comments

You have been asked to evaluate an OSPF network and to answer questions a customer has about its operation. Note: You are not allowed to use the show running-config command.

OSPF.jpg

Read more…

EIGRP Evaluation Sim

February 9th, 2019 digitaltut 161 comments

You have been asked to evaluate how EIGRP is functioning in a network.

EIGRP_Topology.jpg

Read more…

Policy Based Routing Sim

February 8th, 2019 digitaltut 257 comments

Question

Company TUT has two links to the Internet. The company policy requires that web traffic be forwarded only to the Frame Relay link if it is available, while other traffic can go through any link. No static or default routing is allowed.

BGP_Policy_Based_Routing_Sim.jpg

 

Answer and Explanation:

Read more…

EIGRP OSPF Redistribution Sim

January 8th, 2019 digitaltut 283 comments

Question

OSPF_EIGRP_Redistribution.jpg

Answer and Explanation:

Read more…

OSPF Sim

January 8th, 2019 digitaltut 219 comments

Question

OSPF is configured on routers Amani and Lynaic. Amani’s S0/0 interface and Lynaic’s S0/1 interface are in Area 0. Lynaic’s Loopback0 interface is in Area 2.

OSPFSim

Your task is to configure the following:

Portland’s S0/0 interface in Area 1
Amani’s S0/1 interface in Area 1
Use the appropriate mask such that ONLY Portland's S0/0 and Amani's S0/1 could be in Area 1.
Area 1 should not receive any external or inter-area routes (except the default route).

Answer and Explanation:

Read more…

ROUTE FAQs & Tips

August 1st, 2018 digitaltut 585 comments

In this article, I will try to summarize all the Frequently Asked Questions in the ROUTE 300-101 Exam. Hope it will save you some time searching through the Internet and asking your friends & teachers.

1. Please tell me how many questions in the real ROUTE exam, and how much time to answer them?

There are 50 questions, including 4 lab-sims. You have 120 minutes to answer them but if your native language is not English, Cisco allows you a 30-minute exam time extension (150 minutes in total).

2. How much does the ROUTE 300-101 cost? And how many points I need to pass the exam?

This exam costs $300. You need at least 790/1000 points to pass this exam.

3. I passed the ROUTE exam, will I get a certificate for it?

No, Cisco does not ship ROUTE Exam certificate, it only ships you a certificate after completing the full CCNP track of 3 exams (ROUTE, SWITCH & TSHOOT) within three years. Once again we have to notice that you need to finish these three exams within three years (not nine years). Many candidates misunderstand this and think they have nine years to complete three exams.

4. Which sims will I see in the ROUTE exam?

The popular sims now are Policy Based routing sim, EIGRP OSPF Redistribution sim, OSPF Evaluation Sim and EIGRP Evaluation Sim and please notice that the IP addresses, router names may be different (it is also true for Drag and Drop questions)

5. How many points will I get for one sim?

Maybe you will get about 80 to 100 points for each sim, just like the CCNA exam.

6. In the real exam, I clicked “Next” after choosing the answer, can I go back for reviewing?

No, you can’t go back so you can’t re-check your answers after clicking the “Next” button.

7. What is the difference between the old BSCI & the new ROUTE exam?

In the new ROUTE exam, there are no IS-IS, DHCP, Multicast questions so you can ignore them if you are reading old BSCI books. In fact, DHCP & Multicast are good topics and maybe you will have a chance to learn about them in other Cisco exams.

8. Can I use short commands, for example “conf t” instead of “configure terminal”? Will I get full mark for short commands?

From the comments on this site, maybe you will get full mark for short commands. But sometimes short commands don’t work so please be careful with them. We highly recommend you should learn the full commands.

9. What are your recommended materials for ROUTE?

There are many materials for learning ROUTE but below are popular materials that many candidates recommend.

Books

CCNP ROUTE Official Certification Guide

ROUTE Student Guide

CCNP ROUTE Portable Command Guide

CCNP Route Quick Reference Sheet

Video training

CBT Nuggets

Simulator (all are free)

 

GNS3 – the best simulator for learning ROUTE

Packet Tracer

10. How much time should I spend on each sim?

You should not spend more than 15 minutes on each sim. Recall that you only have 120 minutes, so if you spend 15 minutes on each sim x 4 sims = 60 minutes, the remaining 60 minutes are for solving the 46 multiple choice & drag-and-drop questions. If you are not a native English speaker and have the 30-minute extension (ask the mentor to confirm), then you can spend 20 minutes on each sim.

11. Can I pass without doing sims?

As mentioned above, each sim will give you from 80 to 100 points. In the real exam you will have to solve 4 sims, which give from 320 to 400 points. Suppose you answer all other questions perfectly; then you will get 600 to 680 points, but the passing score is 790. It means that you will surely fail if you ignore them.

From the calculation above, if you miss only one sim the chance to pass is average but if you miss two, the chance to pass is very, very low.

12. Are the exam questions the same in all the geographical locations?

Yes, the exam questions are the same in all geographical locations. But notice that Cisco has a pool of questions and each time you take the exam, a number of random questions will show up so you will not see all the same questions as the previous exam.

13. I don’t want to lose points so should I use the “copy running-config startup-config” after finishing each lab-sim?

In the ROUTE exam we can't use that command, so you will not lose any points by not using it.

14. I passed the ROUTE exam. Do you have any site similar for CCNP exams?
We have certprepare.com for SWITCH, networktut.com for TSHOOT, voicetut.com for CCNA Voice, securitytut.com for CCNA Security, rstut.com for CCIE Written & Lab, dstut.com for CCDA. Hope you enjoy these sites too!

15. Why don’t I see any questions and answers on digitaltut.com? I only see the explanation…

Because of copyrighted issues, we had to remove all the questions and answers. You can download a PDF file to see the questions at this link: http://www.digitaltut.com/encor-questions-and-answers

Is there anything you want to ask, just ask! All of us will help you.

IPv6 OSPF Virtual Link Sim

May 8th, 2018 digitaltut 153 comments

Question

TUT is a small company that has an existing enterprise network that is running IPv6 OSPFv3. However, R4’s loopback address (FEC0:4:4) cannot be seen in R1. Identify and fix this fault, do not change the current area assignments. Your task is complete when R4’s loopback address (FEC0:4:4) can be seen in the routing table of R1.

OSPFv3_IPv6_VirtualLink

Special Note: To gain the maximum number of points you must remove all incorrect or unneeded configuration statements related to this issue.

Answer and Explanation:

Read more…

EIGRP Stub Sim

May 8th, 2018 digitaltut 150 comments

Question

TUT Corporation has just expanded their business. R3 is the new router from which they can reach all corporate subnets. In order to raise network stability and lower the memory usage and bandwidth utilization on R3, TUT Corporation makes use of route summarization together with the EIGRP Stub Routing feature. Another network engineer is responsible for this solution. However, in the process of configuring EIGRP stub routing, connectivity with the remote network devices off of R3 has been lost.

EIGRPStubSim

Presently TUT has configured EIGRP on all routers in the network R2, R3, and R4. Your duty is to find and solve the connectivity failure problem with the remote office router R3. You should then configure route summarization only to the distant office router R3 to complete the task after the problem has been solved.

The success of pings from R4 to the R3 LAN interface proves that the fault has been corrected and that the R3 IP routing table contains only two 10.0.0.0 subnets.

Answer and Explanation:

Read more…

EIGRP Simlet

May 5th, 2018 digitaltut 309 comments

EIGRP – SHOW IP EIGRP TOPOLOGY ALL-LINKS

 

Here you will find answers to EIGRP Simlet question

Question

Refer to the exhibit. BigBids Incorporated is a worldwide auction provider. The network uses EIGRP as its routing protocol throughout the corporation. The network administrator does not understand the convergence of EIGRP. Using the output of the show ip eigrp topology all-links command, answer the administrator’s questions.

simlet_show_ip_eigrp_topology_all_links

Read more…