Drag and drop the Cisco IOS XE subpackage on the left to the function it performs on the right.
Which statement about MSS is true?
Answer : D
Explanation: The maximum segment size (MSS) is a parameter in the Options field of the TCP header that specifies the largest amount of data, in octets, that a computer or communications device can receive in a single TCP segment. It does not count the TCP header or the IP header. The IP datagram containing a TCP segment may be self-contained within a single packet, or it may be reconstructed from several fragmented pieces; either way, the MSS limit applies to the total amount of data contained in the final, reconstructed TCP segment. The default TCP maximum segment size is 536 octets. Where a host wishes to set the maximum segment size to a value other than the default, it is specified as a TCP option, initially in the TCP SYN packet during the TCP handshake. The value cannot be changed after the connection is established. Reference: http://en.wikipedia.org/wiki/Maximum_segment_size
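As a minimal illustration of where the MSS lives on the wire, the sketch below parses the MSS option (kind 2, length 4) out of a TCP options byte string. The function name and sample bytes are illustrative, not from any particular library:

```python
import struct

def parse_mss(tcp_options: bytes):
    """Scan TCP options (kind/length/value format) for the MSS option (kind 2).

    Returns the advertised MSS in octets, or None if the option is absent."""
    i = 0
    while i < len(tcp_options):
        kind = tcp_options[i]
        if kind == 0:            # End of Option List
            break
        if kind == 1:            # No-Operation (single byte, no length field)
            i += 1
            continue
        length = tcp_options[i + 1]
        if kind == 2 and length == 4:
            # The MSS value is a 16-bit big-endian integer
            return struct.unpack("!H", tcp_options[i + 2:i + 4])[0]
        i += length
    return None

# An MSS option advertising 1460 octets: kind=2, len=4, value=0x05B4
print(parse_mss(bytes([2, 4, 0x05, 0xB4])))  # 1460
```

Note that the option appears only in segments with the SYN flag set, which is why the value is fixed for the lifetime of the connection.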
Which statement is true regarding the UDP checksum?
Answer : D
Explanation: The method used to compute the checksum is defined in RFC 768: the checksum is the 16-bit one's complement of the one's complement sum of a pseudo header of information from the IP header, the UDP header, and the data, padded with zero octets at the end (if necessary) to make a multiple of two octets. In other words, all 16-bit words are summed using one's complement arithmetic: add the 16-bit values up, and each time a carry-out (17th bit) is produced, wrap that bit around and add it back into the least significant bit. The sum is then one's complemented to yield the value of the UDP checksum field. If the checksum calculation results in the value zero (all 16 bits 0), it is sent as the one's complement (all 1s). Reference: http://en.wikipedia.org/wiki/User_Datagram_Protocol
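The procedure above can be sketched in a few lines of Python. This is a simplified IPv4 illustration (function names are my own; it assumes the segment passed in has its checksum field zeroed):

```python
def ones_complement_sum16(data: bytes) -> int:
    """One's-complement sum of 16-bit words, with carries wrapped back in."""
    if len(data) % 2:
        data += b"\x00"                          # pad to a multiple of two octets
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16) # fold the carry-out (17th bit) back
    return total

def udp_checksum(src_ip: bytes, dst_ip: bytes, udp_segment: bytes) -> int:
    """RFC 768 checksum over pseudo header + UDP header + data (IPv4 case)."""
    # Pseudo header: source IP, destination IP, zero byte, protocol 17, UDP length
    pseudo = src_ip + dst_ip + bytes([0, 17]) + len(udp_segment).to_bytes(2, "big")
    csum = 0xFFFF & ~ones_complement_sum16(pseudo + udp_segment)
    return csum if csum != 0 else 0xFFFF         # zero is transmitted as all ones
```

A receiver verifies the segment by summing the pseudo header plus the segment (checksum field included); a valid segment sums to 0xFFFF.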
Refer to the exhibit.
Answer : A,B
Explanation: Installing a switch with larger buffers and correctly configuring the buffers can solve output queue problems. For each queue we need to configure the assigned buffers. The buffer is like the storage space for the interface, and we have to divide it among the different queues. This is how to do it:

mls qos queue-set output <queue set> buffers Q1 Q2 Q3 Q4

In this example, nothing is hitting queue 2 or queue 3, so they are not being utilized.
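A concrete sketch of the command above, assuming Catalyst 3750-style syntax; the buffer percentages and interface are illustrative values, not a recommendation:

```
! Allocate 40/20/20/20 percent of the output buffer pool to queues 1-4
mls qos queue-set output 1 buffers 40 20 20 20
!
interface GigabitEthernet0/1
 queue-set 1
```

Since queues 2 and 3 see no traffic in this scenario, their share could be reduced further in favor of the congested queue.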
Drag and drop the fragmentation characteristics on the left to the corresponding protocol on the right.
Which two packet types does an RTP session consist of? (Choose two.)
Answer : B,C
Explanation: An RTP session is established for each multimedia stream. A session consists of an IP address with a pair of ports for RTP and RTCP. For example, audio and video streams use separate RTP sessions, enabling a receiver to deselect a particular stream. The ports which form a session are negotiated using other protocols such as RTSP (using SDP in the setup method) and SIP. According to the specification, an RTP port should be even and the RTCP port is the next higher odd port number. Reference: http://en.wikipedia.org/wiki/Real-time_Transport_Protocol
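The even/odd port convention described above can be captured in a tiny helper (the function name is illustrative):

```python
def rtcp_port_for(rtp_port: int) -> int:
    """Per the RTP convention, RTP uses an even port and RTCP the next
    higher odd port of the same session."""
    if rtp_port % 2:
        raise ValueError("RTP port should be even")
    return rtp_port + 1

print(rtcp_port_for(5004))  # 5005
```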
Which two mechanisms can be used to eliminate Cisco Express Forwarding polarization?
Answer : B,D
Explanation: Cisco Express Forwarding (CEF) polarization can cause suboptimal use of redundant paths to a destination network. CEF polarization is the effect when a hash algorithm chooses a particular path and the redundant paths remain completely unused. How to avoid CEF polarization:
✑ Alternate between default (SIP and DIP) and full (SIP + DIP + Layer 4 ports) hashing inputs at each layer of the network.
✑ Alternate between an even and odd number of ECMP links at each layer of the network.
CEF load balancing does not depend on how the protocol routes are inserted in the routing table; therefore, OSPF routes exhibit the same behavior as EIGRP. In a hierarchical network where several routers in a row perform load sharing, they all use the same algorithm. The hash algorithm load-balances this way by default:
1: 1
2: 7-8
3: 1-1-1
4: 1-1-1-2
5: 1-1-1-1-1
6: 1-2-2-2-2-2
7: 1-1-1-1-1-1-1
8: 1-1-1-2-2-2-2-2
The number before the colon represents the number of equal-cost paths; the numbers after the colon represent the proportion of traffic forwarded per path. This means that:
✑ For two equal-cost paths, load sharing is 46.666%-53.333%, not 50%-50%.
✑ For three equal-cost paths, load sharing is 33.33%-33.33%-33.33% (as expected).
✑ For four equal-cost paths, load sharing is 20%-20%-20%-40%, not 25%-25%-25%-25%.
This illustrates that, when there is an even number of ECMP links, the traffic is not load-balanced evenly.
✑ Cisco IOS introduced a concept called unique-ID/universal-ID, which helps avoid CEF polarization. This algorithm, called the universal algorithm (the default in current Cisco IOS versions), adds a 32-bit router-specific value to the hash function (the universal ID, a randomly generated value at switch boot-up that can be manually controlled). This seeds the hash function on each router with a unique ID, which ensures that the same source/destination pair can hash to different paths on different routers.
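The per-path proportions quoted above follow directly from the bucket weights in the table. A quick Python check (the dictionary simply transcribes the table; this is an illustration, not Cisco's implementation):

```python
# Default hash-bucket weights per equal-cost path, transcribed from the table
cef_buckets = {
    2: [7, 8],
    3: [1, 1, 1],
    4: [1, 1, 1, 2],
    8: [1, 1, 1, 2, 2, 2, 2, 2],
}

def load_share(paths: int):
    """Traffic percentage per path implied by the default bucket weights."""
    weights = cef_buckets[paths]
    return [round(100 * w / sum(weights), 3) for w in weights]

print(load_share(2))  # [46.667, 53.333] -- not 50/50
print(load_share(4))  # [20.0, 20.0, 20.0, 40.0] -- not 25/25/25/25
```

This makes the even-vs-odd observation concrete: the odd path counts divide evenly, the even ones do not.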
What is a cause for unicast flooding?
Answer : D
Explanation: Causes of flooding. The root cause of flooding is that the destination MAC address of the packet is not in the L2 forwarding table of the switch. In this case the packet is flooded out of all forwarding ports in its VLAN (except the port it was received on). The case studies below cover the most common reasons for the destination MAC address not being known to the switch.
Cause 1: Asymmetric routing. When return traffic takes a different path, the switch stops seeing frames sourced from the destination MAC, the entry ages out, and traffic toward it is flooded. Large amounts of flooded traffic might saturate low-bandwidth links, causing network performance issues or a complete connectivity outage to devices connected across such links.
Cause 2: Spanning Tree Protocol topology changes. Another common issue caused by flooding is the Spanning Tree Protocol (STP) Topology Change Notification (TCN). TCN is designed to correct forwarding tables after the forwarding topology has changed. This is necessary to avoid a connectivity outage, as after a topology change some destinations previously accessible via particular ports might become accessible via different ports. TCN operates by shortening the forwarding-table aging time, such that if an address is not relearned, it ages out and flooding occurs.
Cause 3: Forwarding table overflow. Another possible cause of flooding is overflow of the switch forwarding table. In this case, new addresses cannot be learned, and packets destined to such addresses are flooded until space becomes available in the forwarding table; new addresses are then learned. This is possible but rare, since most modern switches have forwarding tables large enough to accommodate the MAC addresses of most designs.
Reference: http://www.cisco.com/c/en/us/support/docs/switches/catalyst-6000-series-switches/23563-143.html
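The learn-or-flood behavior underlying all three causes can be sketched as a toy L2 switch (class and names are illustrative; real switches also age entries out, which is what asymmetric routing and TCNs exploit):

```python
class ToySwitch:
    """Minimal L2 forwarding sketch: learn source MACs, flood unknown destinations."""

    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}                      # MAC -> port (no aging modeled here)

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port        # learn the source address
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]     # known: forward out a single port
        return sorted(self.ports - {in_port})    # unknown: flood all other ports

sw = ToySwitch(ports=[1, 2, 3, 4])
print(sw.receive(1, "A", "B"))   # "B" unknown -> flooded to [2, 3, 4]
sw.receive(2, "B", "A")          # "B" is learned on port 2
print(sw.receive(1, "A", "B"))   # now forwarded only to [2]
```

If the entry for "B" were aged out (or never learned, as with asymmetric return paths), every frame toward it would be flooded again.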
Which option is the most effective action to avoid packet loss due to microbursts?
Answer : A
Explanation: You cannot avoid or prevent microbursts as such without modifying the sending host's application or network stack so that it smooths out the bursts. However, you can manage microbursts by tuning the size of receive buffers/rings to absorb occasional microbursts.
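A back-of-the-envelope illustration of why buffer depth is the lever here: a microburst arrives far faster than the queue can drain, so anything beyond the buffer depth is tail-dropped. This is a deliberately simplified model, not a queuing simulation:

```python
def tail_drops(burst_pkts: int, buffer_pkts: int) -> int:
    """Packets tail-dropped when a microburst arrives effectively all at once:
    the drain rate is too slow to matter within the burst, so anything
    beyond the buffer depth is lost."""
    return max(0, burst_pkts - buffer_pkts)

# A 300-packet microburst into a shallow 64-packet buffer vs. a deep 512-packet one:
print(tail_drops(300, buffer_pkts=64))   # 236 packets dropped
print(tail_drops(300, buffer_pkts=512))  # 0 packets dropped
```

The deeper buffer trades a little queuing latency for zero loss, which is exactly the tuning trade-off described above.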
Refer to the exhibit.
Answer : C
Explanation: C4K_L3HWFORWARDING-2 error message. C4K_L3HWFORWARDING-2-FWDCAMFULL: L3 routing table is full. Switching to software forwarding. The hardware routing table is full; forwarding takes place in software instead, and switch performance might be degraded. Recommended action: reduce the size of the routing table, then enter the ip cef command to return to hardware forwarding. Reference: http://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst4500/12-2/31sg/system/message/message/emsg.html
Which two solutions can reduce UDP latency? (Choose two.)
Answer : D,E
Explanation: IP SLA uses active traffic monitoring, which generates traffic in a continuous, reliable, and predictable manner to measure network performance. IP SLA sends data across the network to measure performance between multiple network locations or across multiple network paths. It simulates network data and IP services, and collects network performance information in real time. This information includes:
✑ Response times
✑ One-way latency and jitter (interpacket delay variance)
✑ Packet loss
✑ Network resource availability
LLQ uses the priority command. The priority command allows you to set up classes based on a variety of criteria (not just User Datagram Protocol (UDP) ports) and assign priority to them, and is available for use on serial interfaces and ATM permanent virtual circuits (PVCs). A similar command, the ip rtp priority command, allows you to stipulate priority flows based only on UDP port numbers.
Note: All the other answer choices can be used to improve TCP performance, but not UDP.
References:
http://www.cisco.com/c/en/us/td/docs/routers/xr12000/software/xr12k_r4-2/system_monitoring/configuration/guide/b_sysmon_cg42xr12k/b_sysmon_cg42xr12k_chapter_011.html
http://www.cisco.com/c/en/us/td/docs/ios/12_0s/feature/guide/fsllq26.html
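A minimal LLQ sketch using the priority command discussed above. Class names, the bandwidth value, and the interface are illustrative; the command structure follows standard IOS MQC syntax:

```
! Classify voice traffic and give it a strict-priority queue with LLQ
class-map match-any VOICE
 match dscp ef
!
policy-map LLQ-POLICY
 class VOICE
  priority 1000            ! strict priority, policed to 1000 kbps under congestion
 class class-default
  fair-queue
!
interface Serial0/0
 service-policy output LLQ-POLICY
```

The priority queue bounds queuing delay for the matched UDP flows, which is what reduces their latency.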
Drag and drop the argument of the ip cef load-sharing algorithm command on the left to the
function it performs on the right.
A TCP/IP host is able to transmit small amounts of data (typically less than 1500 bytes), but attempts to transmit larger amounts of data hang and then time out. What is the cause of this issue?
Answer : D
Explanation: Sometimes, over some IP paths, a TCP/IP node can send small amounts of data (typically less than 1500 bytes) with no difficulty, but transmission attempts with larger amounts of data hang and then time out. Often this is observed as a unidirectional problem, in that large data transfers succeed in one direction but fail in the other. This problem is likely caused by the TCP MSS value, PMTUD failure, different LAN media types, or defective links. Reference: http://www.cisco.com/c/en/us/support/docs/additional-legacy-protocols/ms-windows-networking/13709-38.html
Which two statements about packet fragmentation on an IPv6 network are true? (Choose two.)
Answer : A,B
Explanation: The IPv6 fragment extension header is 64 bits in total, including a 32-bit Identification field.
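The 64-bit layout (Next Header, Reserved, 13-bit Fragment Offset plus the M flag, 32-bit Identification) can be packed with Python's struct module. This is an illustrative sketch of the wire format, not a full IPv6 stack:

```python
import struct

def ipv6_fragment_header(next_header: int, frag_offset_units: int,
                         more: bool, ident: int) -> bytes:
    """Pack the 64-bit IPv6 Fragment extension header.

    frag_offset_units is the offset in 8-octet units (13 bits); `more` is the
    M flag (more fragments follow); ident is the 32-bit Identification field."""
    # Offset occupies the high 13 bits; 2 reserved bits; M flag in the low bit
    offset_field = (frag_offset_units << 3) | (1 if more else 0)
    return struct.pack("!BBHI", next_header, 0, offset_field, ident)

hdr = ipv6_fragment_header(next_header=17, frag_offset_units=185,
                           more=True, ident=0xDEADBEEF)
print(len(hdr))  # 8 bytes = 64 bits, half of which is the Identification field
```

Note that in IPv6 only the source host builds this header; routers along the path do not fragment.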
Which two options are interface requirements for turbo flooding? (Choose two.)
Answer : A,B
Explanation: In the switch, the majority of packets are forwarded in hardware; most packets do not go through the switch CPU. For those packets that do go to the CPU, you can speed up spanning-tree-based UDP flooding by a factor of about four to five by using turbo flooding. This feature is supported over Ethernet interfaces configured for ARPA encapsulation. Reference: http://www.cisco.com/c/en/us/td/docs/switches/metro/me3400/software/release/12-2_50_se/configuration/guide/scg/swiprout.html
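A configuration sketch based on the referenced ME3400 guide; turbo flooding builds on spanning-tree-based UDP flooding, and the interfaces involved must use ARPA encapsulation (the Ethernet default). Verify command availability on your platform:

```
! Enable spanning-tree-based flooding of UDP broadcasts, then accelerate it
ip forward-protocol spanning-tree
ip forward-protocol turbo-flood
```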