Fibre Channel (FC) Basics for CCIE DC

When first looking at the blueprint for the CCNA/CCNP/CCIE Data Center track, one of my biggest fears was storage. My entire career thus far has been based on traditional IP data networks, not storage networks. I’m used to things like MAC addresses and IP addresses, not WWPNs and FCIDs. This is a completely foreign technology to most Network Engineers. But think back: at some point we were all young, hopeful CCNAs-to-be who knew nothing, and that didn’t stop us! Intimidation is overrated, so throw fear aside and know that persistence always wins.

So you’ve read all about FC, and now you want to see how to configure it. In this blog post I’ll be going through a basic FC configuration (a starter sketch follows the topic list below), covering some fundamental Fibre Channel topics along the way, such as:

VSANs
FLOGI
FCNS
Trunking
Zoning (Basic and Enhanced)
FC Aliases
Device Aliases
Domain ID Modification
FSPF (with traffic engineering)
SAN Port-channels
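
To give a feel for where we’re headed, here’s a minimal sketch of bringing a single FC port into a VSAN and zoning one initiator to one target on an MDS (the VSAN number, interface, and WWPNs are made up for illustration):

  ! Create a VSAN and place the FC interface in it
  vsan database
    vsan 10
    vsan 10 interface fc1/1
  ! Bring the port up; the end device performs FLOGI when it logs in
  interface fc1/1
    no shutdown
  ! Basic zoning: one initiator pwwn, one target pwwn
  zone name SERVER1_TO_ARRAY1 vsan 10
    member pwwn 10:00:00:00:c9:aa:bb:cc
    member pwwn 50:06:01:60:11:22:33:44
  zoneset name ZS_VSAN10 vsan 10
    member SERVER1_TO_ARRAY1
  zoneset activate name ZS_VSAN10 vsan 10
  ! Verify fabric logins and the name server
  show flogi database
  show fcns database vsan 10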

(more…)

Upgrade SAN-OS to NX-OS

I recently purchased a pair of MDS 9216i switches for my CCIE Data Center studies, as they will suit me for the majority of my storage studies (minus FCoE). The MDSes shipped with old code, SAN-OS 3.0, and I needed them upgraded to at least NX-OS per the blueprint. For a lab, where a disruptive upgrade is perfectly acceptable, this is super easy to do.

I quickly found these resources, which walked me through the upgrade:

http://www.cisco.com/c/en/us/td/docs/switches/datacenter/mds9000/sw/4_1/upgrade/guide/upgrade.html
http://www.cisco.com/c/en/us/td/docs/switches/datacenter/mds9000/sw/nx-os/release/notes/19964_13.html#wp350532
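
For reference, the disruptive upgrade itself boils down to copying the new images over and running a single install command. A sketch (the server IP and image filenames are hypothetical; use the ones for your target release):

  ! Copy the new kickstart and system images to bootflash
  copy ftp://192.0.2.10/m9200-ek9-kickstart-mz.4.1.3a.bin bootflash:
  copy ftp://192.0.2.10/m9200-ek9-mz.4.1.3a.bin bootflash:
  ! install all validates both images, upgrades what's needed, and reboots
  install all kickstart bootflash:m9200-ek9-kickstart-mz.4.1.3a.bin system bootflash:m9200-ek9-mz.4.1.3a.bin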

First, a quick show version to see what I’m currently running:

(more…)

FabricPath for CCIE DC

We’re all cut from the same cloth, or in other words, fabric. It only makes sense that we connect with each other in the most immediate way, with all lines of communication open and inviting. In this blog post I’ll be looking at FabricPath, its purpose, and how it pertains to the CCIE Data Center lab exam. I’ll also run through a configuration, observing behaviors along the way. For those just looking for a sample config, a full config is provided at the bottom of this post.
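
As a preview of just how little configuration FabricPath needs, here’s a minimal sketch (the switch ID, VLAN, and interface are made up; assumes F-series ports and the appropriate licensing on a Nexus 7000):

  ! Install and enable the FabricPath feature set
  install feature-set fabricpath
  feature-set fabricpath
  ! Optionally hard-code the switch ID rather than letting it auto-assign
  fabricpath switch-id 11
  ! VLANs that ride the fabric must be in fabricpath mode
  vlan 100
    mode fabricpath
  ! Core-facing links become FabricPath core ports (no spanning-tree inside the fabric)
  interface ethernet1/1
    switchport mode fabricpath
    no shutdown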

This post assumes you already have a basic understanding of FabricPath.  For those looking for details on FabricPath, here are some great resources that helped me along the way.  

Nexus 7000 FabricPath
Cisco FabricPath Best Practices

Cisco Live:
BRKDCT-3313 – FabricPath Operation and Troubleshooting (2014)
BRKDCT-2081 – Cisco FabricPath Technology and Design (2014)

INE:
http://www.ine.com/

What is FabricPath and why use it?

(more…)

Multicast Refresher for CCIEDC

Multicast, the strange, the backwards, the elusive. At least that’s what I used to think about it until I spent some time taking it out to dinner, listening to it, and developing a strong relationship with it. I have Peter Revill and his awesome four-part series to thank for that (he’s a good wing man!).

In this post I’ll be following along with Peter’s multicast series. If you’re new to multicast, or just want a quick refresher, head on over to his blog and follow us on this journey. As a bonus, I may throw in some basic NX-OS multicast at the bottom, as it pertains to OTV.

http://www.ccierants.com/2013/02/ccie-dc-multicast-part-1.html
http://www.ccierants.com/2013/02/ccie-dc-multicast-part-2.html
http://www.ccierants.com/2013/02/ccie-dc-multicast-part-3.html
http://www.ccierants.com/2013/02/ccie-dc-multicast-part-4.html

Follow him on twitter via @ccierants
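
As for that bonus, here’s the flavor of basic NX-OS multicast configuration that OTV leans on: a minimal PIM sparse-mode sketch (the RP address, group range, and interface are made up):

  ! Enable the PIM feature
  feature pim
  ! Statically point at the RP for this group range
  ip pim rp-address 10.255.255.1 group-list 239.0.0.0/8
  ! Enable sparse-mode on each L3 interface in the multicast path
  interface ethernet1/1
    ip pim sparse-mode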

First, a quick refresher on multicast and related terms. Feel free to revisit these as you follow the post.

(more…)

Configuring BFD on Nexus NX-OS

BFD is listed in the CCIE Data Center Lab Blueprint as “1.2.c – Implement BFD for dynamic routing protocols”. In this blog post, I’ll be explaining BFD and going over its relevance for dynamic routing protocols. Without further ado, let’s get to it.

Bidirectional Forwarding Detection (BFD) is a protocol designed to quickly detect failures in the forwarding path and notify the configured protocols (OSPF, EIGRP, BGP, HSRP, etc.) immediately, before they’ve even had an opportunity to detect the failure themselves. This results in expedited, sub-second detection of failed forwarding paths, leading to quicker convergence.
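
To make that concrete, here’s a minimal sketch of enabling BFD for OSPF on NX-OS (the interface and timer values are illustrative, not recommendations):

  ! Enable the BFD feature
  feature bfd
  ! Per-interface timers: desired TX interval, minimum RX interval, and multiplier
  interface ethernet1/1
    bfd interval 50 min_rx 50 multiplier 3
  ! Register OSPF with BFD on all of its interfaces
  router ospf 1
    bfd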

BFD on NX-OS runs in asynchronous mode: a BFD session is formed between two adjacent devices, which then exchange control packets to monitor the session. The configurable parameters used in the session include:

(more…)

Configuring Netflow on Nexus NXOS

This post is a part of my CCIE:DC studies, but will be useful for anyone needing to quickly configure NetFlow in NXOS.  For CCIE:DC purposes, an understanding of how NetFlow is configured in NXOS cannot hurt, especially since it is mentioned in the blueprint (1.5.b – Implement Netflow).

Unlike IOS, there are only a few steps involved to get NetFlow functioning. Here’s a quick rundown, with a config sketch following the list:

1. Enable Netflow
2. Configure a Netflow Flow Record
3. Configure a Netflow Flow Exporter
4. Configure a Netflow Flow Monitor
5. Apply the Netflow Monitor to your L3 interfaces
6. (Optional) Configure a Netflow Sampler
7. (Optional) Configure Netflow timers
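
Putting steps 1 through 5 together, a minimal sketch (the record/exporter/monitor names, collector address, and interface are all made up):

  ! 1. Enable NetFlow
  feature netflow
  ! 2. Flow record: which fields to match and which counters to collect
  flow record REC-BASIC
    match ipv4 source address
    match ipv4 destination address
    collect counter bytes
    collect counter packets
  ! 3. Flow exporter: where to send the flow data
  flow exporter EXP-COLLECTOR
    destination 192.0.2.50
    source loopback0
    transport udp 2055
    version 9
  ! 4. Flow monitor: ties the record and exporter together
  flow monitor MON-BASIC
    record REC-BASIC
    exporter EXP-COLLECTOR
  ! 5. Apply the monitor to an L3 interface
  interface ethernet1/1
    ip flow monitor MON-BASIC input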

(more…)

Mixing SATA and SSD in Same B-Series UCS Server

Here are the basic steps I took to configure two local SATA disks in a RAID 1 and two local SSDs as stand-alone disks on a single B440-M2:

Since we have 2 SATA drives and 2 SSD drives in the same server, we need to make sure NOT to configure a Local Disk Configuration Policy. This one threw me for a bit, since a policy is the preferred way to handle RAID configuration when all disks in a server are the same type. If you have varying disk types, you need to configure your disks manually.

Create a basic boot policy:

[screenshot: boot-pol, the basic boot policy]

(more…)

Configuring Nexus SAN Admin Role

If you’re in a company that uses Nexus 5Ks to run both LAN and SAN, and for some strange reason your SAN Administrator wants access to the 5Ks for zoning, just deny it.  Okay, okay, I guess that won’t fly, so let’s configure role-based access control (RBAC) to lock down what the SAN Administrator has access to.

The good thing about the Nexus 5K is that there’s a built-in role called san-admin we can use for this purpose. Let’s take a look at the role privileges:
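
(A sketch of inspecting the role and handing it to the SAN admin; the username and password are made up:)

  ! View the rules attached to the built-in role
  show role name san-admin
  ! Create the SAN admin's account and bind it to the role
  configure terminal
    username sanadmin password S0meStr0ngPw role san-admin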

(more…)

Cisco Live 2014, here I come!

Although I’m not a NetVet, this will be my fourth attendance, and I’m stoked for this year’s Cisco Live in San Fran”Cisco”!  I’ll be doing shots (of espresso) throughout the conference to keep up with the insane amount of information streaming through my skull.  Feel free to join me.

Here’s a glimpse of my schedule.  It’s mostly Data Center focused (of course), but I’m also dabbling in some other work-related sessions, such as MPLS, RF, and CUBE SBC.

(more…)

No, really, it’s not the network.

I recently spent an unhealthy number of days troubleshooting performance issues between remote Data Centers.  Good thing I did, too, as I got a friendly reminder about TCP, and how latency drives throughput.

We were seeing seemingly inconsistent network issues: some applications and file transfers were slow, some were fast, and some appeared to be slow in only one direction.  Jobs that used to run in minutes now took hours.  Packet captures were showing possible signs of packet loss (DUP ACKs, etc.).  Needless to say, we had some troubleshooting ahead of us to either locate the source of packet loss or rule out the network.

First, we verified basic health of the network (a few sample NX-OS commands follow the list):

  • Validated no congestion on links between endpoints
  • Validated interface counters – looked for errors, CRCs, drops, etc.
  • Validated interface configurations – speed, duplex, MTU, etc.
  • Validated QoS stats – were we seeing an unusually high number of drops in a particular queue?
  • Validated network device system resources – CPU, memory, etc.
  • Validated the control plane – were packets getting punted to the router CPU?
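
On the Nexus gear in the path, that boiled down to commands like these (a sketch; the interface number is made up):

  ! Errors, CRCs, and drops per interface
  show interface ethernet1/1
  show interface counters errors
  ! Per-queue statistics and drops
  show queuing interface ethernet1/1
  ! System resources and the busiest processes
  show system resources
  show processes cpu sort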

All good so far, so why are we seeing the slowness with file transfers?  Throughput on some transfers is as low as 900KBps.  We have a 1Gbps link between sites with only 18ms of latency (round-trip time / RTT); we should have no issue with throughput!

Looking at the packet captures, we learned the TCP window was being advertised at a very small 17,520 bytes, and it was not scaling.

[screenshot: capture showing the small advertised TCP window]

This is a problem because of this very simple equation:
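
In short: with a fixed receive window, TCP throughput is capped at the window size divided by the round-trip time. Plugging in our numbers:

  Max throughput = TCP window size / RTT
                 = 17,520 bytes / 0.018 s
                 ≈ 973,333 bytes per second
                 ≈ 950 KBps

Right in line with the ~900KBps we were seeing, no matter how big the pipe. Only a larger (scaled) window or a lower RTT moves that ceiling.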

(more…)