Free Wireshark Training Course Online

Take a free Wireshark Jumpstart training class online at

Wednesday, November 18, 2009

Strange SYN Scans and SpongeBob Slippers

Nestled in my trusty SpongeBob slippers and surrounded by mounds of tissue boxes, cough syrup with Codeine, a temperamental foot heater, Netflix on-demand and business cards of every Chinese restaurant in the neighborhood... I settled in to study signatures of Nmap's fascinating OS detection process. Ahhh... comfort packets...

First - if you haven't picked up Fyodor's Nmap Network Scanning book - put that at the top of your to-do list (wait... squeeze "Sign up for Summit 09" just above that). You'll want to snuggle up with pages 177-178. You can get the book at Amazon or try to reach Fyodor over at - buy it directly and ask him to sign your copy - this is a hot book!

Here's the scoop - capture your traffic as you run nmap -sV -O -v against your target (version scanning, OS fingerprinting and verbose mode). Got permission, right? Good. Read on.

Nmap's OS fingerprinting process contains numerous unique packets - by building a series of butt-ugly color filters you can spot these strange packets easily with a relatively low risk of false positives (if you happen to find these packets being sent by another application, you should still be concerned - it's weird behavior).

In looking through the trace file and referencing Nmap Network Scanning, I came up with two color filters (both with butt-ugly background colors) that caught the majority of the unique packets generated during the scan.

Filter #1

(tcp.flags == 0x00) || (tcp.options.wscale_val == 10) || (tcp.options.mss_val < 1460) || (tcp.flags == 0x29 && tcp.urgent_pointer == 0) || (tcp.flags == 0x02 && !frame[42:4] == 00:00:00:00) || (tcp.flags == 0x02 && tcp.window_size < 1025 && tcp.options.wscale_val == 0)

So shall we break this down a bit?

(tcp.flags == 0x00)
This looks for the null scans - TCP scans that have no TCP flags set.

(tcp.options.wscale_val == 10)
The TCP window scale value equal to 10. Although other TCP handshakes may use this value during the handshake process, it is unusual and listed in the book as one of the scan techniques and verified in the trace file of the OS fingerprinting process.

(tcp.options.mss_val < 1460)
This one is a bit sticky - we're looking in the options section of the TCP header for a maximum segment size value smaller than 1,460. This did cause numerous false positives when I ran it on other trace files. Regardless, I like having this in my color filter because it points out a weird starting MSS value. As an option, I considered moving this to another color filter with a slightly lighter background color.

(tcp.flags == 0x29) && tcp.urgent_pointer == 0
This filter looks for the FIN, PSH and URG bits set in packets with the Urgent Pointer field set to 0.

(tcp.flags==0x02 && !frame[42:4] == 00:00:00:00)
Yeah - this is a strange one and brings up a change I'd like to see in Wireshark. This looks for packets with only the SYN bit set and the Acknowledgment Number field set to a non-zero value. So... what's the "frame[42:4]" all about? Well... Wireshark does not recognize the Acknowledgment Number field in the first packet of the handshake process as it doesn't have any use in that packet. I'd still like to see the field in those packets so I can filter on it, though. I tried messing around with !tcp.ack==0 but that didn't work.

(tcp.flags == 0x02 && tcp.window_size < 1025 && tcp.options.wscale_val == 0)
This looks for SYN packets with a small window size value and the window scale factor set to 0. This did hit some false positives in other trace files, but they were all hosts with strangely small window size values - a bit of a concern to me anyway. I considered setting this as a separate color filter and may alter my color filters as I test this against more trace files.
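
The color-filter logic above can be expressed in ordinary code. Here's a minimal Python sketch (my own illustration - not Nmap's or Wireshark's code, and the function name is made up) that checks a packet's TCP fields against the same suspicious patterns:

```python
# Hypothetical sketch: flag TCP field values that match the Nmap-style
# probe signatures behind the color filter above.
def suspicious_reasons(flags, wscale=None, mss=None, ack=0, urg_ptr=0):
    """Return the reasons a TCP packet looks like an OS-detection probe."""
    FIN, SYN, PSH, URG = 0x01, 0x02, 0x08, 0x20
    reasons = []
    if flags == 0x00:
        reasons.append("null scan (no flags set)")
    if wscale == 10:
        reasons.append("window scale value of 10")
    if mss is not None and mss < 1460:
        reasons.append("unusually small MSS")
    if flags == (FIN | PSH | URG) and urg_ptr == 0:   # 0x29
        reasons.append("FIN/PSH/URG with urgent pointer 0")
    if flags == SYN and ack != 0:
        reasons.append("SYN with non-zero acknowledgment number")
    return reasons

print(suspicious_reasons(flags=0x00))          # null scan
print(suspicious_reasons(flags=0x02, ack=1))   # SYN with bogus ACK number
```

Normal traffic (say, a plain SYN with a zero acknowledgment number) comes back with an empty list - which is exactly the low-false-positive property we want from the color filter.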

Filter #2
I set another color filter for tcp.window_size < 65535 && tcp.flags.syn == 1 - it had a slightly lighter background. This color filter had lots of false positives in other trace files, but pointed out numerous TCP connections that were using non-optimal starting window size values.

Ohhh... look at all the pretty, er I mean ugly colors! This particular Nmap scan sequence screams "Halloween All Year Long!"

Join us at Summit 09 as we investigate other malicious traffic patterns! Register over at and I'll see you December 7th!

Enjoy life one bit at a time!

Wednesday, November 11, 2009

SSL/TLS Flawed: Using Wireshark to Decrypt Attack Traces from PhoneFactor

It seemed such a coincidence, I sent out a teaser for a project underway and alluded to the security implications - the project, however, was not related to the SSL/TLS vulnerability that hit the public last Thursday.

How bad is this SSL/TLS vulnerability? Amazingly horrid! Listen Up! (MP3 - 1MB) Click here to download Ron Nutter's interview with Steve Dispensa (or grab the .zip file here) - one of the PhoneFactor guys who demonstrated the vulnerability to a working group of affected vendors and representatives of various standards committees.

Read Up!
Steve Dispensa and Marsh Ray of PhoneFactor wrote an 8-page overview of the issue which is based on the TLS renegotiation process. The figure below shows the basic SSL/TLS handshake process.

In Wireshark, the display filter ssl.record.content_type == 22 extracts SSL/TLS handshake packets.
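
That filter keys on the content type byte at the start of each SSL/TLS record. As a quick illustration (my own sketch, not Wireshark code), the first five bytes of a record carry the content type, version and length - a content type of 22 (0x16) marks a handshake record:

```python
import struct

# Sketch: parse the 5-byte SSL/TLS record header. Content type 22 (0x16)
# is a handshake record - what ssl.record.content_type == 22 matches.
def parse_tls_record_header(data):
    content_type, major, minor, length = struct.unpack("!BBBH", data[:5])
    return {"content_type": content_type,
            "is_handshake": content_type == 22,
            "version": (major, minor),
            "length": length}

# A TLS 1.0 record header: type 22 (handshake), version 3.1, length 0x2f
hdr = parse_tls_record_header(bytes([0x16, 0x03, 0x01, 0x00, 0x2f]))
print(hdr["is_handshake"])  # True
```

Application data records use content type 23, alerts 21 - so the same parse lets you separate the renegotiation handshakes from the encrypted payload.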

The document written by Steve and Marsh defines the security issues demonstrated against recent Microsoft IIS and Apache httpd versions. In essence, the renegotiation attack method defined is used to inject malicious code into the "secure" connection.

One of the most interesting areas of the document focuses on the use of request splicing in which two HTTP requests are combined. The first request triggers the renegotiation while the second request effectively comments out the first request and overrides it with the malicious one.

Analyze the Attacks Yourself
Download the PhoneFactor document, numerous trace files (including decryption keys), protocol diagrams and details here.

Hint: In Wireshark, disable the Allow subdissector to reassemble TCP streams setting under Preferences > Protocols > TCP to view the SSL/TLS handshake more clearly.

Step 1: Get the Traces/Keys
Download and extract the files into a directory called "ugly". (Again - download from here.)

Step 2: Set up SSL with Keys
Private keys to decrypt the traces are in the 'caps' and 'certs' directories. For simplicity's sake, I recommend you create a \keys directory and copy all the keys there. To decrypt the client_init_renego.pcap file, I used Preferences > Protocols > SSL and entered the following value: ,443,http,c:\users\laura\keys\ws01.mogul.test.key
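
The RSA keys entry is just a comma-separated "address,port,protocol,keyfile" string. A small sketch (a hypothetical helper of mine, not part of Wireshark - and the address below is a stand-in, since the entry above elides it) that splits such an entry into its parts:

```python
# Sketch: split a Wireshark SSL RSA keys entry of the form
# "address,port,protocol,keyfile". Hypothetical helper; the IP is a stand-in.
def parse_rsa_keys_entry(entry):
    address, port, protocol, keyfile = entry.split(",", 3)
    return {"address": address, "port": int(port),
            "protocol": protocol, "keyfile": keyfile}

entry = parse_rsa_keys_entry(
    r"192.168.1.1,443,http,c:\users\laura\keys\ws01.mogul.test.key")
print(entry["port"], entry["protocol"])  # 443 http
```

The protocol field ("http" here) tells Wireshark which dissector to hand the decrypted payload to.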

When you have successfully set up decryption, your traffic should indicate HTTP in the protocol column and, if colorization is enabled, the lovely lime-green color of HTTP traffic.

Step 3: Follow the SSL Stream
Once you have applied the decryption, you can right-click on one of the HTTP packets and select Follow SSL Stream to reassemble the traffic as shown below.

In the figure at left we can see the request to GET /evil.html and the x-ignore line for GET /index.html. This process of using the ignore header prefix is described on page 3 of the Renegotiating TLS.pdf document.

Inside the SSL/TLS Handshake - Another "Must Read"
Jeff Moser penned an impressive blog entry entitled "The First Few Milliseconds of an HTTP Connection" which analyzes the handshake process, selection of a cipher suite and use of the RSA algorithm. Read Jeff's blog here.

What's the Solution?

The document written by Marsh Ray and Steve Dispensa paints a pretty gloomy picture of possible remedies.

"There appear to be few silver bullets to address these issues."

Ultimately, the fix will require protocol changes - a laboriously painful process that can have unforeseen consequences related to compatibility problems. The paper forthrightly defines the possibility of 'breaking' as well as backwards-compatible protocol changes. It takes serious 01's to throw that 'breaking' term in there. It's no fun being the bearer of such bad news. What a hassle.

In the meantime, I imagine efforts to exploit vulnerable SSL/TLS connections are underway - those malicious teams might be working longer hours than the vendor/committee teams focused on a resolution.

Big money is at stake.

Enjoy life one bit at a time!

Tuesday, October 27, 2009

Double-sided and Double-dumb Printing

Summit 09 Bonus: All Summit 09 attendees will receive a full licensed copy of NetScanTools Pro - a $249 value.

It's not easy going "green"... I mean - all we wanted to do was print the customized course manuals double-sided. What's the big problem?


First let me say I was printing to an HP OfficeJet Pro L7780 All-in-One printer - it supports auto-duplexing. The feature is implemented in the funkiest way, however! Watching the traffic live while the printer did its 'dance of duplex' - I was shocked to see the lame process of auto-duplex printing.

I had to wonder if the HP printer group might be interested in some classes on networking.*

Ok... here's how this printer does duplex printing. It's automatic, so you don't have to do the old 'refeed the paper and hope it's in the right order' process. You just select auto-duplex and away it goes. The printer prints the first page and - while still holding the paper by the bottom edges - the console reads "Ink drying - please wait." After a while the paper is then sucked back into the printer (a moment that always makes my heart jump - I've heard the shredding sound of mangled paper too many times). The second side of the paper prints and we all sigh in relief - another 2-sided page done!

The trace file showed that the printer creates the pause process by sending a Zero Window packet to the client - in essence "talk to the hand" (isn't that HP's little helper logo?). Each 'drying process' was essentially caused by a halted flow of print data from my client to the printer. My client sent Zero Window Probe packets to ask "What is wrong with you, buddy?" and the printer kept the traffic at bay by sending Zero Window Probe ACKs for about 30 seconds.

The printer's TCP window size did decrease down to an unacceptable zero, so it wasn't lying about not having any receive buffers. It seems the printer doesn't clear out the buffer space consistently during the print process - its buffer becomes depleted right at the point when the 'ink needs to dry' - perhaps it knows we'll all need to take a breath then, so allowing the buffer to fill doesn't seem like a big deal - it's going to have to sit around idle anyway (anyone think about adding a nice 'ink fan' attachment to make my printing go faster?).

I decided to add a tcp.window_size column and filter on traffic coming from the printer. Using File > Export > File and choosing the displayed packets, I created a csv file with a column just depicting the TCP receive window sizes advertised by the printer. In Excel I selected only the tcp.window_size column and inserted a line graph.
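
Before (or instead of) graphing in Excel, the same exported window-size series can be scanned in a few lines of Python. A sketch of my own - the sample values are made up - that finds each contiguous zero-window stall:

```python
# Sketch: scan a series of advertised TCP window sizes (as exported to CSV)
# and measure each contiguous zero-window stall. Sample data is made up.
def zero_window_stalls(window_sizes):
    stalls, in_stall = [], False
    for size in window_sizes:
        if size == 0:
            if in_stall:
                stalls[-1] += 1       # extend the current stall
            else:
                stalls.append(1)      # a new zero-window period begins
                in_stall = True
        else:
            in_stall = False
    return stalls  # length (in samples) of each zero-window period

sizes = [65535, 32768, 8192, 0, 0, 0, 65535, 16384, 0, 0, 65535]
print(zero_window_stalls(sizes))  # [3, 2] - two 'ink drying' pauses
```

Multiply each stall length by your capture's sampling interval and you have the total time the printer held up the print job.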


Look at the intentional pattern of the window size values advertised with definite zero window conditions during the ink drying time. There's just got to be a better way!

What? Is my printer totally dumb? I didn't think so, but this is certainly dumb behavior. The file is only 13,296 KB! The printer has 64 MB of memory standard. Why didn't the printer allow me to send the entire document, then buffer the data while the ink dried? Why did I have to sit around and wait for the ink to dry before being able to send the next page of data? The Wireshark summary of the data sent to the printer showed the transmission rate averaged around 0.431 Mbit/second - snooze...
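
The arithmetic makes the pain concrete - a back-of-the-envelope sketch of how long that file takes at the observed rate:

```python
# Back-of-the-envelope: time to push a 13,296 KB file at 0.431 Mbit/second.
file_kb = 13296
rate_mbit = 0.431

bits = file_kb * 1024 * 8           # file size in bits
seconds = bits / (rate_mbit * 1e6)  # transfer time at the observed rate
print(round(seconds / 60, 1))       # roughly 4.2 minutes per copy
```

Over four minutes per copy on a network that could move the whole file in a couple of seconds - and that's before the 25-copy rerun described below.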

I also looked at the spooled file in my Windows\System32\spool\PRINTERS directory - holy schmoley! The spooled file (.spl) was 670,016 KB! Whazzup with that? The idea of researching the MS spooling process and .spl file format made my stomach turn again - but it did look like this PowerPoint Notes document was processed as a graphics file.

Then came the really interesting part - I printed twenty-five copies of the custom student manual. You'd think the printer would have buffered the first copy in memory and just pulled from that when it needed to make each successive copy - right? Nope. My client sent the entire file to the printer a second time, a third time, a fourth time... etc. Whoa... that's one dumb process! No turning off my system until the entire job was sent 25 times from the spool\PRINTERS directory. Barf.

Basic network analysis should be a mandatory requirement for any vendor making network-capable devices/applications - or else they might make network-incapable devices/applications - oh wait - they do!

Enjoy life one bit at a time!

*Although this is the lamest way to get the job done, I swear by my HP printers - they take a beating and keep on spitting out surreal amounts of pages every month - I am a tactile, visual person - there's nothing like that printed version of the spec to snuggle up with at night! So much for going green. (The only green that will fly around here will be some big $ for printers that can buffer the entire doc once - without the 'talk to the hand process' - and then print extra copies from that!)

Wednesday, October 21, 2009

Tracing the Route

Summit 09 Bonus: All Summit 09 attendees will receive a full licensed copy of NetScanTools Pro - a $249 value.

During troubleshooting processes, a standard ping test is often used to check connectivity to a host and determine the round trip latency time. This process uses an ICMP Type 8 Echo Request and relies on an ICMP Type 0 Echo Reply.

Sometimes, however, the target won't respond with an ICMP Echo Reply - either because it is configured to ignore ICMP Echo Requests or because a device along the path filters these packets out so they don't reach the target.

I prefer to perform traceroute using NetScanTools' TCP option. Besides setting the TCP port and the sequence number settings, you can also set the MTU (Maximum Transmission Unit) to test the maximum packet size along a path. Another option available is to set the TCP window size - in our example I have set the window size field value to 16,384. In addition, you can define the payload - using a binary or text file.

Why would you use a big fat file for the test? Ahhh... my padawan - to test the MTU allowed through the path and consider putting a signature in the payload that should trigger an IDS or be logged by a firewall - multiple birds with one stone - connectivity testing, latency testing, IDS/firewall testing! Nice!

In the figure below, you can see my host sending a series of TCP SYN packets - the target port is 79 (finger). The packets colored with a red background have an IP header Time-to-Live value less than 5 - a sure sign of a traceroute operation.
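
That coloring rule is simple to express in code. A quick Python sketch of my own showing how a TCP traceroute's low-TTL probes stand out (the probe values are illustrative):

```python
# Sketch: a TCP traceroute walks the TTL up from 1 until the target answers.
# This mimics the ip.ttl < 5 coloring rule used to spot the low-TTL probes.
def is_traceroute_probe(ttl, threshold=5):
    """Low TTLs on outbound packets are a sure sign of a traceroute."""
    return ttl < threshold

probe_ttls = [1, 2, 3, 4, 5, 6]   # SYNs to port 79, one hop further each time
flagged = [ttl for ttl in probe_ttls if is_traceroute_probe(ttl)]
print(flagged)  # [1, 2, 3, 4] - the packets that would get the red background
```

Normal traffic starts with a high initial TTL (64, 128 or 255 depending on the OS), so a run of TTL 1, 2, 3... packets practically never appears by accident.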

When we reach the target, a RST is generated in response. That's what gives us our round trip latency time. I appreciate why companies restrict ICMP-based traffic on their networks - and when I'm doing connectivity tests and latency tests, customizing my TCP-based traceroutes always sits on the top of my to-do list.

Enjoy life one bit at a time!

Wednesday, October 14, 2009

Storms Rip the Net

Summit 09 Bonus: Licensed NetScanTools Pro - All Summit 09 attendees will receive a full licensed copy of NetScanTools Pro - a $249 value.

I recognized the tone in the voice that day - the panicked sound of someone dealing with a non-functional network. In this case, the network was a critical one (I can't disclose the specific type of network or customer).

At approximately 3:34am, their critical network came crashing down - no connectivity for any hosts on the network. They'd placed Wireshark on the network and it too crashed within moments.

Ok... so there was something definitely cruising along the network wreaking havoc. I had to see those packets! With over 2,000 miles separating us, it would be a 'walk through the capture' process.

The first step - dump the GUI!

Wireshark comes with Tshark for command-line capture. The syntax used was:

tshark -c 100 -w gen1.pcap

The -c parameter indicates the number of packets to capture. The -w parameter is used to define the name of the trace file to create. Why only 100 packets? What? Well... if there is a catastrophic issue on the network that could kill systems that connect to it that quickly, it shouldn't take many packets to characterize that traffic.

Immediately upon capturing these 100 packets, I instructed the customer to disconnect from the network. You don't need network access to analyze captured traffic - trace files are processed through the Wiretap Library - directly off the disk.

The 100 packets told the story - an insane looping packet storming through the network at a blazing packets-per-second rate. When facing a traffic issue like this it is important to look at the IP header's Identification value. You need to differentiate between a looping packet and a series of individual packets sent from a 'killer host' (and I mean killer as in "network killer").

If the IP Identification field value is the same for all the packets, then the packet is looping somehow. If the packets have different IP Identification field values, then each packet has passed through a host's IP stack separately. It's an amazingly simple differentiation - and an important one. If the packets had unique IP Identification field values, we'd be looking at a single host causing the problems. We'd be delving into the MAC header of the packet to identify the location of the lousy host. (Having a master list of MAC addresses for all hosts on the network is imperative in that situation. Mark that down as something to do this week!)
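
The IP Identification check is easy to automate. A sketch of mine (with made-up sample values) that labels a burst of packets one way or the other:

```python
# Sketch: classify a packet storm by its IP Identification values.
# One repeated ID => a single packet looping through the infrastructure;
# distinct IDs => packets generated separately by a host's IP stack.
def classify_storm(ip_ids):
    if len(set(ip_ids)) == 1:
        return "looping packet (infrastructure problem)"
    return "separately generated packets (suspect a single killer host)"

print(classify_storm([0x1c4a] * 100))        # same ID 100 times => looping
print(classify_storm([101, 102, 103, 104]))  # incrementing IDs => one sender
```

In the real trace you'd feed this the ip.id column of the 100 captured packets.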

In this case, all the IP Identification fields contained the same value - this was a looping packet. We had an infrastructure issue. On this heavily switched network it seemed spanning tree was not doing its job. Poor spanning tree - no one really pays attention to it until it has a problem.

Being remote to the customer location, I could not look over their shoulder as they walked the network and shut down one switch at a time. It was in their hands now. I sat waiting for their call - waiting to hear if they'd found the culprit. I didn't wait long.

I waited 30 looooong minutes for the call even though it had taken the client less than 5 minutes to find a switch that was looping traffic back through the network. They spent the other 25 minutes starting up hosts on the network to ensure all was well. The switch was configured properly - so it would be replaced with another switch while they played with the problem switch in the lab (someday... someday). This offsite analysis hits a key point in troubleshooting - the devastating failures are typically easier to spot. They scream at us. They stomp their feet and throw things. All they want is to have someone listen.

Enjoy life one bit at a time...

Join us at Summit 09 on December 7-9th! You'll get a copy of NetScanTools Pro and 3 full days of hands-on individual and group labs focused on troubleshooting and security. Download the Summit Information Guide. All Access Pass members receive a 50% discount to Chappell Summit 09. Don't miss it!

Wednesday, October 7, 2009

SNMP Snooping

Summit 09 Bonus: All Summit 09 attendees will receive a full licensed copy of NetScanTools Pro - a $249 value.

One of the labs for Summit 09 deals with SNMP snooping - locating information about a device by taking advantage of available MIB (Management Information Base) data through SNMP walking.
Networks abound with SNMP-based devices - we can use the Port Scanner tool to generate a simple UDP scan for port 161 to discover those SNMP devices.

In NetScanTools, I discovered a few network printers supporting SNMP. I entered the IP address of one of the printers and selected the WALK action for the Object ID (OID). I left the community string at the default as I was certain no one had changed it since the printer was plugged in.

The result - a 24-page document filled with information about that printer and the other devices on the wired and wireless networks. The standard printer information was puked out as expected, but this SNMP snoop yielded loads more information:
  • ARP table listing devices on the wired and wireless network
  • MAC layer In/Out statistics (including errors)
  • TCP In/Out statistics (including errors)
  • UDP In/Out statistics (including errors)
  • ICMP In/Out statistics (including errors)
  • Routing table
  • List of all received/transmitted ICMP packets
  • SSIDs, channel numbers and signal strength of local WLANs - not just the WLAN that the printer was on and not just on the channel the printer was on
As I started playing a bit more and finding other unique SNMP devices, I realized I needed to load some new MIBs - a MIB is a database of objects. I found hundreds of MIBs available online at

One of the coolest features in NetScanTools' SNMP tool is the ability to determine listening ports on the target without using a port scan. By generating udpLocalPort and tcpConnState queries, I could get the list of open UDP and TCP ports directly from the source.
Using NetScanTools we can discover SNMP devices on the network, load an unlimited number of additional MIBs and perform a dictionary attack to identify the community string used by SNMP devices.
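
The walked OIDs encode the interesting data in their instance suffix. For RFC 1213's udpTable, each udpLocalPort instance ends in the local address and port, so the listening UDP ports can be pulled straight out of the OID strings. A parsing sketch of my own:

```python
# Sketch: extract listening UDP ports from walked udpLocalPort instance OIDs.
# In RFC 1213's udpTable, the instance suffix is <localAddress>.<localPort>.
UDP_LOCAL_PORT = "1.3.6.1.2.1.7.5.1.2"

def udp_port_from_oid(oid):
    if not oid.startswith(UDP_LOCAL_PORT + "."):
        raise ValueError("not a udpLocalPort instance")
    suffix = oid[len(UDP_LOCAL_PORT) + 1:].split(".")
    address = ".".join(suffix[:4])   # four sub-identifiers: the IP address
    port = int(suffix[4])            # final sub-identifier: the port
    return address, port

print(udp_port_from_oid("1.3.6.1.2.1.7.5.1.2.0.0.0.0.161"))  # ('0.0.0.0', 161)
```

The tcpConnTable works the same way, with local address/port and remote address/port all packed into the instance suffix.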

Join us at Summit 09 on December 7-9th! You'll get a copy of NetScanTools Pro and 3 full days of hands-on individual and group labs focused on troubleshooting and security. Don't miss it!
Download the Summit Information Guide. All Access Pass members receive a 50% discount to Chappell Summit 09.

Enjoy life one bit at a time!


Tuesday, September 29, 2009

The Game of VoIP

I always get that tingly excited feeling when I open up new trace files.

It's not unlike the feeling I have as I pick up the pile of cards dealt to me in a game of gin. What will the cards hold? What will I see in them? That is what analysis is to me - a card game.
Last week I began my slide preparation for an upcoming set of projects related to VoIP analysis. I find VoIP traffic fascinating - its duality - signaling separate from voice - its trusting nature - using UDP - its frailty.

This VoIP project came up first in preparation for the VoIP labs for the Summit 09 training event. I wanted to include a lecture portion dealing with SIP and
RTP behavior and point out the cool new organization of telephony-related options in Wireshark. I wanted to build some hands-on labs to allow students to identify the likely points on a network where they'd look to isolate problems with the calls. I'm energized with the process of preparing new lectures, new trace files and new hands-on exercises for Summit 09. But Summit 09 had to go on the back burner for a bit... another hand had been dealt - by Microsoft.

The Microsoft Webinars (Oct 6th and Oct 28th)

I have presented two webinars for Microsoft to date - one wildly popular (where we pulled in Microsoft's largest registration and attendance counts) and the other nearly empty (seems the old Microsoft marketing engine had stalled on that one and we didn't push it because it was open to partners only). If you want to join me at the Summit 09 event on December 7-9th, check out the Summit 09 Information Document at

No RTP? A Lousy Deal?!

When I began systematically opening the VoIP files in my folder, I hit the first stumbling block of VoIP analysis. One of the first trace files I opened up did not contain the SIP packets for a call setup so Wireshark didn't apply an RTP dissector to the voice traffic - it just dumped me off at UDP. Hmmm... seems I have a hand with no runs or matches.

I've talked about the process of "Decode As" in the past for interpreting traffic traveling over non-standard ports. The same steps are quite common to use with VoIP traffic. I right clicked on a packet in the trace file and selected Decode As. On the right side, I chose one of the transport port numbers and selected RTP in the protocol list.

Voila! It's decoded as RTP! Another option is to enable the RTP preference "Try to decode RTP outside of conversations". The image at left shows the traffic of an RTP stream that Wireshark decodes as just UDP - Wireshark didn't see the SIP call setup process, so it decodes the voice traffic as plain UDP.

Viewing the RTP Streams

When Wireshark reached version 1.2, the menu bar was revised to include a Telephony item. Once you force the RTP decode (if required), you can select Telephony > RTP > Show All Streams.

Wireshark detected two RTP streams in my trace file once I forced the RTP decode on the traffic.

As you can see from the figure at left, Wireshark can now show me the RTP streams in the trace file - one stream going in each direction. We can also see that one of the streams has a 2.1% packet loss. Packet loss is ugly in VoIP communications - when a call experiences serious packet loss, the conversation quality suffers. There's an example of a G.711 audio file with 10% packet loss at

Playing the Hand That Was Dealt
Unlike gin or poker, we have to play the hand we were dealt in network analysis. We can only make the hand stronger by filtering out traffic that is not relevant and interpreting the possible causes for the packets that remain.

In the area of VoIP analysis, my favorite filter is sip || rtp and my favorite feature is the RTP playback feature. I like to set the time column to Seconds Since Previous Displayed Packet and export the trace file to csv format.

In Excel I sort on the source column first, the number column second and then graph out the time column to show me the variance between packets (the jitter value). I could use Wireshark's graphing capabilities to give me some of this information, but this is one of the areas where a formal spreadsheet program really shines. I exported the trace file to a csv format to graph out a portion of the jitter values.
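
The spreadsheet step boils down to simple math on the packet timestamps. A sketch of my own (the arrival times are made up) that computes the inter-packet deltas whose spread the Excel graph exposes:

```python
# Sketch: inter-packet deltas for one RTP stream, from capture timestamps.
# With G.711 at 20 ms packetization, deltas should hover near 0.020 seconds;
# the spread of the deltas is the variation graphed out as jitter.
def deltas(timestamps):
    return [t2 - t1 for t1, t2 in zip(timestamps, timestamps[1:])]

ts = [0.000, 0.020, 0.041, 0.060, 0.095]   # made-up arrival times (seconds)
d = deltas(ts)
print([round(x, 3) for x in d])            # [0.02, 0.021, 0.019, 0.035]

mean = sum(d) / len(d)
print(max(abs(x - mean) for x in d))       # worst deviation from the mean
```

That 0.035-second gap is the kind of outlier that jumps off the graph - and off the phone line.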

VoIP at Summit 09
I'm anxious to get back to the Summit 09 preparations - to document all the tips and tricks of VoIP analysis. I've put together a few lab exercises already, but I'm keeping my eyes open for new hands to play - if you have any VoIP traces that I could use in Summit 09, please send them to me ( Download the Summit Information Guide. All Access Pass members receive a 50% discount to Chappell Summit 09.

Enjoy life one bit at a time.

Wednesday, September 23, 2009

Summit 09 Registration Opens

It's a Geekfest Training Event! Last year's Summit was a great success with a room full of folks hunched over their laptops for three days of labs and training. Since that time I've been researching and developing new materials - they are ready to go into Summit 09. Summit 09 will be another BYOL (Bring Your Own Laptop) event - filled with hands-on labs. This time around we will offer both individual and group labs.

Hot Tools and Key Tasks
We will focus on which tools are best for each task, such as the following:
  • traffic redirection and interception
  • IP address sanitizing
  • locating firewalled hosts
  • throughput testing
  • jitter measurement
  • identifying blocked/filtered ports
  • WLAN RF analysis

Troubleshooting Hot Spots and Testing

On day two, we will delve into a corporate network that is consistently garnering complaints. Working from the network diagram, we'll devise a plan to isolate the cause of network problems and perform proactive throughput testing and live RF analysis. We'll address both wired and wireless network problems.

Security and Forensics

On the third and final day, we focus on security. Starting with the TCP vulnerabilities and the SMB2 vulnerability announced this month, we will dissect the key issues and build a malicious/suspicious traffic profile. This profile will save you lots of time and help you spot security flaws on the network.

During the Summit you will be working on network diagrams to pinpoint where to capture traffic and follow the path of communications. The schedule is packed with the latest techniques for catching network problems, identifying suspicious traffic, testing network throughput, analyzing WLAN traffic and discovering network devices and services.

Download the Summit Information Guide. All Access Pass members receive a 50% discount to Chappell Summit 09. Enjoy life one bit at a time. Laura

Monday, September 14, 2009

WLAN Profiling: It's a Good Thing

Speed up your WLAN Analysis Processes

This weekend I recorded the WLAN Analysis 101 course (available to All Access Pass members already). I spent a fair amount of time customizing a profile for Wireshark to include columns, display filters and color filters focused on WLAN traffic.

What should your customized WLAN profile include?

Three Columns to Add
Consider adding columns for Frequency/Channel information, RSSI (Receive Signal Strength Indicator), and transmit rate. Once you have these columns added to Wireshark, you can sort on the columns and even use them when exporting to a spreadsheet program for further graphing.

Hot Display Filters
Add display filters to quickly view all traffic on a specific channel - for example, radiotap.channel.freq == 2412 will display all Channel 1 traffic. You might also want to create filters to display all beacons for a specific SSID. The syntax for that would be wlan.fc.type_subtype == 0x08 && frame contains "wsu" to see all the beacons related to the wsu WLAN network.
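
If you don't remember which frequency goes with which channel, the 2.4 GHz mapping is linear. A throwaway sketch of mine for generating per-channel filter strings - note the frequency field name is an assumption (it depends on what your capture adapter provides, e.g. a Radiotap header):

```python
# Sketch: 2.4 GHz channels 1-13 sit at 2412 + 5 * (channel - 1) MHz.
# Handy for generating per-channel display/color filter strings.
def channel_to_freq(channel):
    return 2412 + 5 * (channel - 1)

def channel_filter(channel, field="radiotap.channel.freq"):
    # field name is an assumption - check what your capture provides
    return f"{field} == {channel_to_freq(channel)}"

print(channel_to_freq(1))    # 2412
print(channel_to_freq(6))    # 2437
print(channel_filter(11))    # radiotap.channel.freq == 2462
```

(Channel 14, where permitted, is the odd one out at 2484 MHz, so it's excluded from the linear rule above.)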

Wild Color Filters
Besides creating color filters for the various channels (which use the same syntax as the display filters), consider coloring traffic that has the retry bit set to 1 (for example, wlan.fc.retry == 1), Probe Requests/Responses and Associations/Reassociations/Disassociations. I create 'butt ugly' color filters for frames that I consider a problem, such as retry frames.

Customized profiles can include your column settings, font settings and capture filters as well. You could create a custom profile for the corporate office, a specific client or a specific network type. Customizing Wireshark to fit the network you are working on helps you sort out the traffic and spot problems faster!

The "WLAN Analysis 101" course released to All Access Pass members on September 13, 2009. This course includes color filters, display filters, WLAN trace files, Chanalyzer recordings and a 29-page course handout. This course is available for individual purchase at

Enjoy life... one bit at a time.


Wednesday, September 9, 2009

Cloud Concerns

Are We Going to Look Back and Say..."What Were We Smoking?"

You've got a blazin' fast gig network running like Usain Bolt on Jolt. You've tweaked every inch of the network and spent hours explaining that you "don't control the Internet" when their web browsing sessions slow to a crawl because some blowhard ISP decided to cap your bandwidth.
The winds have turned cold and a chill is in the air as management performs rain dances that bring ominous cloud computing towards the horizon. Now you need to watch for latency along the ISPs' path to your servers... your data. What about packet loss dragging your throughput rates through the mud? Oh... don't even get me started about rate throttling! Makes me want to throttle someone!

Let's ponder this case study... A customer of mine complained that communications between their offices seemed rather slow lately. They asked me to take a look at the traffic and evaluate the throughput rates. There seemed to be a consensus that some device was dropping packets along the path.

The IO graph screamed a story. The graph above is my rendition of what I'd seen that day - the trace files were to remain at the client. Do you see how that line seems to hit a ceiling? Oh yeah... a ceiling by the name of ***** (name hidden to avoid lawsuits). The ISP had promised them (and charged them for) the world. In reality, however, they had put a cap on my customer's traffic that was way below what my customer paid for. Now... imagine that this is your corporate network and your service response times are dependent upon some ISP! Do you love your ISP that much?

Cloud computing brings up so many issues - the idea of putting the network health in the hands of the ISPs triggers my gag reflexes. Have you ever tried to work with an ISP to find out why your round trip latency times are so high? What are the bandwidth requirements of your apps? What will the end-to-end (client to cloud-based server) throughput be? Where will your data really be? What path will it take... today...? Who will pick up the phone when you call to say "the cloud is slow?"

Perhaps Lena Horne said it best... "Don't know why... there's no sun up in the sky... stormy weather... when my SERVER and I ain't together...". Lena knew networking!
The Analyze and Improve Network Throughput and Top 10 Reasons Your Network is Slow seminars explain throughput testing and the most common causes of poor performance.

Enjoy life - one bit at a time...


Tuesday, September 8, 2009

Do You Know Where Your Throughput is Today?

iPerf Might Scream "This Stinks!"

Last night I was laughing and crying at the same time while reading a local small town newspaper and the dissertation on bandwidth problems at the local library. They actually shut off half the computers to alleviate the problem. One quote really grabbed me - "We don't notice the bandwidth problem when no one is using the computers." Hunh?

There were a few puzzling comments in the article such as the comment that the bandwidth problem "started 3 weeks ago." That sudden onset of the problem feels like something else to me. It feels like a device along the path that is causing the problems. A quick look at the traffic would validate that.

The article tied in nicely to the iPerf Throughput Testing materials I am finishing up. The reaction to the live iPerf testing done during the Analyze and Improve Throughput course and numerous requests for more information on iPerf prompted me to start developing a course that shows how to perform a series of throughput tests for UDP and TCP traffic.

iPerf is simple and complex at the same time. The one application can be run either in client mode or server mode.

Tests can be run in one direction (from the client to the server, which is the default) or bi-directionally. Here is one of my favorite tests for iPerf:

Client: iperf -c <server IP> -u -t 60 -i 5
Server: iperf -s -u -i 5

This test enables me to locate jitter and packet loss along a path using a UDP stream sent over a 60-second period with results displayed every five seconds at both the server and the client. As you can see from the screenshot above, the path suffers packet loss reaching 39%. We were sending a steady stream at 1.05 Mbit/second specifically to identify packet loss. Well... we found it. Running the test for a full 24 hours would help us identify specific times of the day when packet loss is at its highest.
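The loss percentage iPerf reports is simple arithmetic: lost datagrams divided by datagrams sent, per reporting interval. Here's a toy sketch of that math - the per-interval counts below are invented for illustration, NOT the numbers from my actual test:

```python
# Toy reproduction of the arithmetic behind iPerf's UDP loss report.
# The per-interval datagram counts are made up for illustration only.
def loss_percent(sent, lost):
    """Percentage of datagrams lost in one reporting interval."""
    return 100.0 * lost / sent if sent else 0.0

# (sent, lost) datagram counts per 5-second interval of a UDP test
intervals = [(450, 0), (450, 90), (450, 176), (450, 40)]

for i, (sent, lost) in enumerate(intervals):
    print(f"interval {i}: {loss_percent(sent, lost):.0f}% loss")

total_sent = sum(s for s, _ in intervals)
total_lost = sum(l for _, l in intervals)
print(f"overall: {loss_percent(total_sent, total_lost):.1f}% loss")
```

A per-interval view like this is exactly why the -i 5 option matters - an "overall" number can hide a single ugly interval where the loss spiked.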

I hope to get into the local library and run Wireshark and iPerf on the network soon. I have no idea if the systems are locked down - a slight problem that might require some workaround. Meanwhile I'll peruse the shelves for those network troubleshooting books.

The "iPerf Throughput Testing" modules will be included in the Summit 09 course in December - the "Analyze and Improve Throughput" course is available now.

Enjoy life one bit at a time.

Tuesday, August 25, 2009

Enough is Enough! No More Broken Windows

No... I'm not Microsoft-bashing (today)... not really. After all, this issue is seen on other operating systems as well. It's one of those things that perplexes many new and experienced analysts.

You may be aware that Wireshark has an Expert Info Composite entry for "Window is Full" and "Frozen Window" but unfortunately, this condition can be occurring on your network without Wireshark catching it.

You can set up a butt-ugly color filter and a display filter to alert you to this condition. Let me explain...

In the picture above, I've added a column for the receive window size value set in the TCP headers of each packet. It's a custom column using the syntax tcp.window_size. I also added a column for the tcp.len value so I can see how much data is contained in each packet.

Notice that packet 361 is advertising a window size of 2,920 bytes - enough for two 1460-byte segments to fill, as Wireshark notes in packet 363 with [TCP Window Full]. The full receive buffer leads the client to begin advertising a receive window size of 0. Ok... duh... We can spot that one easily!

Now look at this screenshot. This delay is caused by a window size problem as well - but this time the window size field didn't go all the way down to zero - it's at 536 (packet 374). That's too small for the queued-up TCP segment at the other side, so you might as well have said "shut up" with a window zero setting.

So what can we do about this? How can we easily see that we are having this problem when Wireshark doesn't have an Expert Notification for this? Aha! Here's where your butt-uglies come into play. Make a butt ugly filter for:

(tcp.window_size < 1460) && (tcp.flags.reset == 0)
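The condition we're hunting - a window advertisement too small for a full-sized segment, ignoring reset packets - can be sketched in a few lines of Python. The packet tuples and the 1460-byte MSS assumption are mine, purely to illustrate the logic:

```python
# Illustration of the "small window" condition the butt-ugly filter
# highlights. Each packet is (tcp.window_size, reset_flag); the
# values are invented, and an MSS of 1460 bytes is assumed.
MSS = 1460

def low_window(window_size, reset):
    """True if the packet advertises a window too small to hold one
    full-sized segment (and isn't a reset, which we don't care about)."""
    return window_size < MSS and not reset

packets = [(65535, False), (2920, False), (536, False), (0, False), (8192, True)]
flagged = [p for p in packets if low_window(*p)]
print(flagged)  # the 536-byte and zero-window advertisements
```

That's all the color filter is doing - marking every packet where the advertised window can't hold even one full segment, so the other side has to sit on its queued data.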

Check out the Trace File Analysis: TCP In-Depth course for more information on working with TCP traffic!

Enjoy life one bit at a time!

Monday, August 17, 2009

Sexy Spread Spectrum Signals

In the WLAN Analysis 101 course last month, I showed the effects of a cheap 2.4GHz phone on the wireless network by knocking myself off the network during my live video feed. Duh... I hope it made a point.

If I hadn't been picking up the RF signals around me, the death of my network connection would have been a mystery. After all, the cutoff was so sudden and folks in other locations around weren't having any problems at all.

The live course viewers saw the sudden spike in the signal as I'd told them to watch the Chanalyzer Spectral View begin to climb near channel 1 and then SCREECH! The video came to a halt and my voice (fed through VoIP on my end) became scratchy and my words almost impossible to decipher.

The figure above shows their view at the time I attacked myself! Wow! Even over my connection to the online seminar engine, it felt like real life - this is what really happens in the WLAN world - and we got to experience it together.

I love looking at the Chanalyzer Spectral View - it consists of time across the X axis and frequency/channel across the Y axis. The color coding is based on signal amplitude. The closer to red, the stronger the signal. Vertical striping indicates a consistent signal on a specific frequency. Manipulating the time controller at the bottom of the Chanalyzer window enables me to focus in on a specific area of time for a clearer picture.

The Chanalyzer/Wi-Spy Adapter products are some of the sexiest products that have come around in the industry in a long time. Displaying the live RF signals around me prior to making a presentation at a conference is like wearing a hot pair of steel stilettos. Attention-getting and very sexy (in a sick and twisted geeky way).

We've now partnered with the Metageek folks on the upcoming WLAN Analysis 101 course on September 10th - if you purchase the 2.4x or DBx Wi-Spy adapters, you'll get into the live class for free. If you already own their products, you should receive a 50% off coupon via their newsletter. As soon as we record the course, you'll also receive one-week unlimited access to the recorded course.

It's a good time to get the adapter... c'mon... you know you want one! You can order the products at

Enjoy life one bit at a time!

Tuesday, August 11, 2009

Ethereal is Dead!

Gerald Combs created Ethereal over 11 years ago when his boss wouldn't buy him a brand spanking new Sniffer box - something about budgets and all... so Gerald told his Sniffer rep that he was going to write his own packet sniffing tool. While that Sniffer rep was still rolling around laughing, Gerald started working on Ethereal.

The name? Yeah - the name Ethereal was always an issue - how do you pronounce it? Ethereal (play wav) or Ethereal (play wav)? Many a late night has been spent huddled over pizzas in the cabling closet debating that issue. The answer - Ethereal (play wav).


It surprises me to find many folks haven't moved up to Wireshark - it is, after all, the successor to Ethereal. The same developers, the same creator, the same base code set, the same development directory structure. I can only assume those folks also have 8-track tape players and beam with pride when talking about their 'vinyl collection'.

For fun, I went to visit the old Ethereal website - I thought it was taken down ages ago, but imagine that NIS is still reaping some benefit from all the misguided hits. Looking at the stats in Alexa was pretty interesting - you can see the dramatic move to Wireshark at the end of the first quarter of 2008 - but what the heck is happening with the site in 2009?

Why are people still even hitting that site? Is everyone writing a blog entry about 'dead' software projects? Did some of my old articles and courses get reissued? Who are these Neanderthals walking among us?

It's time to upgrade to Wireshark folks. Wireshark v1.2.1 was released just a few weeks ago and fixed numerous bugs in the v1.2 release. There are still a few uglies in there, but would you rather be in a car that has a window that slowly rolls up or take a bicycle on that long drive along the network analysis road?

So perhaps today is the day to throw away those old bell bottom jeans and that mood ring (and perhaps dump those Sharper Image gift cards and Clear cards as well).

Come on - get with the times! Oh... one more thing - you pronounce Wireshark like this (play mp3).

Enjoy life one bit at a time!

Wednesday, August 5, 2009

Out of Sight, Out of Mind?

Embedded OS Security Issues
This month seems to be "medical industry month" around here. My email has been loaded up with questions from various hospitals and medical facilities. One of the topics that is hot right now is 'embedded OS' security issues. For example, the three devices shown in the image above all contain Microsoft embedded operating systems - Windows Embedded CE.

How many hosts on your network support an embedded OS? Is the vendor keeping those hosts up-to-date with patches and security fixes? An interesting question... this is a great reason to run OS fingerprinting against the range of IP addresses supported on your network (with permission of course) to find out where the addressable devices are. Listen to the network traffic and check out the endpoint listing that Wireshark provides. Any unusual devices around?
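One low-effort way to triage Wireshark's endpoint listing is to group the MAC addresses by OUI - the first three bytes identify the vendor, which quickly surfaces printers and other embedded gear. A sketch of that tally; the OUI-to-vendor map and the endpoint list here are invented for illustration (real OUI data comes from the IEEE registry, which Wireshark bundles):

```python
from collections import Counter

# Group endpoint MAC addresses by OUI prefix to spot embedded devices.
# The OUI->vendor map and endpoint MACs are invented examples.
OUI_VENDORS = {
    "00:1b:a9": "Brother (printer)",
    "00:21:5a": "Hewlett-Packard",
    "00:0c:29": "VMware",
}

endpoints = ["00:1b:a9:11:22:33", "00:21:5a:44:55:66", "00:1b:a9:aa:bb:cc"]

def vendor(mac):
    """Map a MAC to a vendor name via its first three bytes (the OUI)."""
    return OUI_VENDORS.get(mac.lower()[:8], "unknown")

tally = Counter(vendor(m) for m in endpoints)
print(tally.most_common())
```

Two printers on a segment you thought held none? That's exactly the kind of "unusual device" worth fingerprinting.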

Some of our office printers have embedded OSes in them and I can tell you they've never been updated by the vendor. What outdated OS is hanging around on those boxes? We're tapping into the nets now and doing some OS fingerprinting to see what we're up against - I suggest you do the same!

Have fun one bit at a time...

Friday, July 24, 2009

One Key Sign of QoS Problems

There are some trace files that SCREAM at you! If you stand too closely you can feel spit hitting your face!

In the "Top 10 Reasons Your Network is Slow" online course (course abstract), we examine one of the causes of slow network performance. We look at a trace file of traffic that has passed through a router set up with QoS. You may not be aware how obvious QoS issues can be when analyzing traffic - feed a nice steady stream through that puppy and catch the traffic on the other side to see how it performed its duties.

Look for an EKG Pattern
In a datastream that is 'steady' - as in the video streaming example shown in the picture - we look for an "EKG pattern" in data coming through the router. This pattern is seen when data is held in the queue temporarily and then released (causing the sudden jump in the IO). As you can see in the image above, we can also spot packets that are dropped by the queue. (Make sure you take a trace on the other side of the router to compare the IO graphs - you want to be certain a steady stream of data is traveling towards the QoS device and any alteration in the IO pattern has not already occurred.)

Get the Trace File
Go ahead - try checking it out yourself. Open up mcaststream-queued2.pcap in Wireshark. Select Statistics > IO Graph.

What? It's not screaming at you? Aha! That is because the X axis is too large - you are looking at ants from space! Change the X axis value to 0.01 seconds.

Do you see it? Right around 1.10 seconds into the trace - the EKG pattern! If users are not complaining about performance then don't sweat it. Keep an eye on times when the line drops and doesn't jump up above the average point - those are dropped packets.

I'll be teaching the "Top 10 Reasons Your Network is Slow" on July 30th - it's a fun class to teach (although last time I was demonstrating the process of jamming a wireless network and nearly killed my own seminar hosting connection - duh). Register here.

Enjoy the trace! See you online!


Saturday, July 18, 2009

Brad and the Top-Secret Bl-Ear Project

Brad Pitt on the cover of Wired poo-pooing the bluetooth look? No way! They aren't going to pre-announce an invention that I already pre-announced at TechEd?! I quickly blew through the pages of Wired Magazine's August issue to find a picture of Brad texting at the urinals with a bourbon close by (page 89).

Whew! No mention of the Bl-Ear - the exciting beta-phase invention in bluetooth beauty and buffness. It's tough to stay ahead of the game (and game mags) in technology. Sometimes you have to be... well... inventive.

Let's face it - there are tons of products we'd love to see out there - the Bl-Ear fills a need to reduce the high Nerdlook-Factor (NF) of walking around with that bluetooth device hanging off your head - don't even start spewing the "jawbone is sexy" defense with me. No one (not even Brad) looks good with electronics hanging off their aural lobes.

Bluetooth devices are the new pocket-protectors, folks. And you need to admit it.

As you may have missed the TechEd presentation in May, I've put up a short video showing the Bl-ear over at the Chappell Seminars Projects page.

Before you go out the door today, look in the mirror. Laptop - check. iPhone - check. Starbucks card - check. Bluetooth adapter - check. Now remember - accessorize, then minimize - take off the ear-tech that screams "I hope someone wants to talk to me today".

Sign up for the Bl-Ear and watch your NF drop to near-normal levels. Oh... and just wait 'til you see their upcoming Ear-Bluds! I can hardly wait.


The Bl-ear and Blear Corporation are bunk. All rights reserved.

Sunday, July 12, 2009

Parents, 'Puters and Painkillers

"Hi hon! How are you? How are the kids? I can't print"...

Being a technologist these days is like being the family doctor in the olden days (ok, well, family doctors are still of value but mostly for prescription drugs for fun I think.)

You know what it's like - your second cousin once removed calls - you haven't seen her since that embarrassing Thanksgiving when they pulled you into singing "Muskrat Love" with them while your inebriated Aunt tried to play the piano ("I haven't played since I was a child" - no kidding?!). [That's another story.] "Hey... are you still into computers?"
Uh... no. I'm now working at a humane beef ranch as an ozone protection analyst. Sorry.

In this case, my father was calling for help with printing.
Guiding him to view the print queue won't work - the print queue icon seems invisible to him and the Start button is out of the question ("The start button... you mean the power button? Ok. I clicked it, but my computer screen is blank now."). First things first. Do you see a light on in the front of the printer ("Yes, honey. My desk lamp is always on.")? It would be a long, slow and painful process (looking for the real family doctor for those fun meds now) to guide my father to eventually unplug and replug in the printer USB cable on his laptop ("no, Dad... the printer cable doesn't plug into the wall socket...get out from under the table before you hurt yourself.").

The printer sprung to life and began printing the 32 copies of the 70-page document he'd sent to it before calling me. Rather than try to guide him through the process of clearing the print queue I just told him that there wasn't anything he could do about it. "Just get out the recycling bin, Dad." (Making notes to give Dad reams of paper next birthday and go out to plant something green while acknowledging the guilt of prioritizing my sanity over the environment).

You must have a certain level of compassion and empathy to work in the field of technical support. I really don't know how people take calls from someone like my father every day and still maintain a life of sobriety and love towards mankind. I think the key must be...
Hang on... gotta cut this blog short... my Dad's calling... ("Honey... I've just downloaded Wireshark and I have a couple questions...") Gulp.

Family... can't live with 'em... can't DoS 'em (legally)

Sunday, July 5, 2009

Did That Tech Just Tell Me to Go Ping Myself?

"Ping and let's look for packet loss."
"Let's reinstall the operating system."
"Oh my gawd - didn't you know ping is illegal?"
"Ping takes away the addresses of others on the network."
"Not all laptops support networking, so that might be the problem."
"Did you plug in the wireless cable yet?"

Oh yes... I keep track of the amazing comments I've heard from hotel network technicians and most recently Comcast. Many of you know the story of "Bob, the Comcast technician from hell" who ended up being a trainer for the other network technicians. I can only hope Bob is now flipping burgers somewhere.

When one of my network connections began feeling sluggish last week I pulled out my tools and began to work on identifying where the problem was. I grabbed my traffic with Wireshark and noted the high rate of packet loss. Since I know that packet loss most often occurs at an inter-network device, I began running the graphical ping in NetScanTools. I could see the rate of packet loss was around 40%. Next I began a series of traceroute operations to see where I was losing packets - and BOOM! There it was. One of the routers consistently dropped packets along the path. I even ran traces through that router to other hosts.

All I needed to do was let the Comcast technician know which router was the problem... right?

When the Comcast technician asked me to "ping", I tried not to gag. How could this 'technician' not know the basics of TCP/IP? She pronounced traceroute as "trace-ert" with absolutely no awareness that her ignorance was spilling out over the phone.

What I experienced here is the result of skipping basic training - it's really not her fault. I blame Comcast. And guess what...? Right now we are seeing companies restrict training budgets for the folks running their networks. We're going to pay a big price in the future with unskilled and out-of-date IT professionals.

Is your company restricting training? What are you doing to keep up? Does your management know the end result of de-valuing training? Where will we be in a few months... years?

I hope the free Wireshark training courses are helping out. We are focusing on getting sponsors to open up more free online training. Let your favorite vendors know they can do something great for the industry by sponsoring a free training course.

Now off to finish the Wireshark 101 handouts for class on Tuesday! Gerald (the creator of Wireshark) will be online again to answer your questions. I hope to see you there! Register at today.


Saturday, July 4, 2009

July 7th - Wireshark Jumpstart Free (Sponsored by NetOptics)

The July 7th Wireshark 101 Jumpstart is sponsored by NetOptics. I approached NetOptics because my lab is filled with NetOptics taps... the Teeny Tap, my 10/100 aggregating tap, my 10/100/1000 regenerating tap and more.

In this Wireshark 101 Jumpstart, I'll be demonstrating the following features of Wireshark:
  • Tapping into traffic
  • Choosing the interface
  • Capture filtering
  • Display filtering
  • Capturing to file sets
  • Capturing with a ring buffer
  • Altering the time column
  • Using the Expert Info Composite
  • Defining profiles
  • Reassembling streams
We already have over 1,000 registrations and only 1,000 people will be allowed to access the live online seminar. We'll open up the 'waiting room' online approximately 20 minutes before the session to allow you to get a place in the course.

See you on Tuesday, July 7th.


Saturday, June 27, 2009

Laughing at Twitter Traffic!

It's true... I was laughing out loud today... at packets!

This project came out of thin air almost... I was preparing for a podcast with the ChannelWeb group (you can listen to it on their site). I was on the phone line early with the moderator and interviewers and making small talk.

I mentioned that I'd tried to do some Tweeting that morning and there were problems. I explained how I used Wireshark to determine the problem had nothing to do with my system. There seemed to be a problem with the website.

When the interview started, Ed Moltzen (a very impressive Tweeter and interviewer) led the discussion back to my early morning problems with As I talked about the problem, it suddenly occurred to me that people might like to know what Tweet traffic looks like. I told Ed that I'd do an analysis of a Tweet after the podcast.

I did... I immediately got working on a clean trace showing just the Tweet. That was no easy feat since my host spewed all sorts of background traffic for unrelated processes. I began identifying and whittling away traffic that was unrelated. Finally - I sent my sample Tweet and created my analysis report. But I wasn't done...

TweetDeck was ripe for an analysis... and here's when life got really fun. It turns out that when you upload your Twitter picture it is placed on an Amazon Web Server (AWS) under the original file name. Each user has a unique user ID and the image is placed in that directory under a directory called profile_images.
The picture names were hysterical!
  • WhatSheWants
  • MeNoWife
  • Spoon_too_big
You can read the entire report online - I also released the MAC World Domination project details at that location.

Register for the newsletter over at to keep up with the latest projects in my lab.

Now - off I go... the packets are calling!


Sunday, June 21, 2009

iPhone: You're Sexy, but You Talk Too Much

Last week at Sharkfest I blabbered on a bit about the chatty nature of my iPhone (3G). I equated it to a yapping Chihuahua on the network. I'm still playing around a bit with numerous trace files and will have some to give away soon, but I wanted to explain how to capture your iPhone traffic and understand one of the packets that you'll see over and over and over and (you get it) again in your traffic.

I'm hanging out today on my Vista 64 system that I host the live seminars from. (No... I do not have a sexy MAC on my desk - but I do have two televisions within 10 feet of me to constantly feed me my much-needed background noise through the day.)

Before launching Wireshark or turning on my iPhone - here's what I did:

1. I hooked up a powered USB hub and populated it with three AirPcap adapters.
2. I opened the AirPcap control panel and configured each adapter to listen to a different channel - channels 1, 6 and 11.
3. I added my encryption keys in AirPcap.

Now I launched Wireshark and selected the AirPcap Multi-Channel Aggregator interface for my capture. Then I turned on my sweet, sexy-looking iPhone and...

OUCH! I watched my iPhone locate the WLAN APs, but it did not make an authentication/association until 60 seconds after I entered my passcode. Perhaps it wanted a bit more of a commitment from me? Or flowers? Or a new case?

During the startup sequence there were some unique DHCP and ARP happenings (we'll cover those in a later blog) and a slew of mDNS packets. So, you ask... what the heck is mDNS and do I want 'em on my WLAN? mDNS stands for multicast DNS and is used to discover local devices as part of the Zeroconf (zero-configuration networking) project (Apple calls it Bonjour - they are so cool!). You don't need a DNS server to discover mDNS-capable devices. mDNS runs over UDP port 5353. Just use a udp.port==5353 filter or the dns display filter in Wireshark to see all mDNS and DNS traffic, or build a filter for all ip.addr== traffic (the IPv4 mDNS multicast address) or ipv6.addr==FF02::FB in the case of IPv6.
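Under the hood, an mDNS query is just an ordinary DNS question sent to UDP port 5353 on the multicast address. Here's a sketch of building one by hand with nothing but the standard library - the service name is just an example (any Bonjour service type would do), and the send step is commented out:

```python
import struct

# Build a minimal mDNS query by hand: an ordinary DNS question that
# would be sent to on UDP port 5353. The service name
# "_services._dns-sd._udp.local" is the standard service-enumeration
# name; qtype 12 = PTR, the record type used for service discovery.
def mdns_query(name, qtype=12):
    header = struct.pack(">HHHHHH", 0, 0, 1, 0, 0, 0)  # ID 0, 1 question
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)     # class IN
    return header + question

pkt = mdns_query("_services._dns-sd._udp.local")
print(len(pkt), "bytes")
# To actually send it (and watch the replies in Wireshark):
#   import socket
#   s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   s.sendto(pkt, ("", 5353))
```

Fire that off with the udp.port==5353 filter running and every Bonjour-capable device nearby will chime in - which is exactly why the iPhone is so chatty.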

Want to try it out? On your iPhone, search in the AppStore for mDNS Watch. It's free so install it and watch it list all the mDNS-capable devices around you. In my lab it discovered my HP Officejet Pro L7700 printer and it showed me the three ports that were open on that printer - ports 513, 80 and 9100. Hmmm... this could be interesting, couldn't it?

For more information on mDNS, visit

Now... back to that hot, sexy and really verbose iPhone to work on the strange DHCP and ARP behavior (much of which is related to Bonjour).

Friday, June 12, 2009

Wireshark v1.2 Enhancements

In this week's newsletter I got carried away with details about the next version of Wireshark - it almost became a book. This blog details some of the enhancements in Wireshark v1.2.

One of the hot features that many will be thrilled about is auto-completion of display filters! HALLELUJAH! Bad typicsts rejoice (I meant to make that mistake...). Type in "i" and possible filters are shown in a drop-down list. Add a "p" and a period ("ip.") and all the possible variations of filters starting with "ip." show up. This is going to save us all a lot of time!

I already talked a bit about the GeoIP stuff in the Newsletter and I'll be blogging/teaching about this a bit in the coming weeks.

There are a few changes that might sneak up on you - for example, in the Expert Info Composite area, "Window is Zero" and "Window Full" have moved to Warnings, but "Retransmissions" was not moved over - "Fast Retransmissions" are already in the Warnings area. It would be nice to have both types of retransmissions in the same window. We do now have the individual item count as well as the summary count in the tabs, which is really nice.

There were some usability enhancements as well. For example, Wireshark v1.2 now remembers your column widths and opens up with the last configuration profile you used (watch out for this one if you're accustomed to always starting with the default profile and having to switch over).

As far as bug fixes go, the NetFlow dissector bug that could "run off with your dog, crash your truck, and write a country music song about the experience" has been fixed. No kidding - that is in the 1.2 rc1 release notes from Gerald.

Something that you may not take advantage of quite yet (but we'll cover it in future newsletters and online training) is the new support for pcap-ng, the next-generation capture file format. These trace files typically end in the extension .ntar, but the recommended extension is .pcapng. This new trace file format will enable us to add metadata to our trace files.

Again... the developers did a great job with this version - kudos to them all!

I'll be moving over to the new version of Wireshark for all the courses as soon as the "official" release is completed. Register for a course today!

Note: 25% Discount Code: bcbsab - use for the new Wireshark Command-Line Tools: From Editcap to Tshark - July 13, 2009 @ 10:00AM PDT/GMT-7

Survey: Chappell Seminars "Take the Reigns"

Twitter: LauraChappell

Facebook: Laura Chappell

Monday, June 1, 2009

You Can't Hide!

You may be familiar with the standard old traceroute that relies on ICMP echo request and echo reply packets to identify the path to a target and verify the target reachability. If so... how many times have you not reached the target because they filter ICMP echo replies?

An example of this would be when you try to traceroute to such a protected target. You'll see that right after you hit the domain's routers you are left in the dust. It really isn't that unusual to block ICMP echo requests at servers - no one should be pinging them anyway, right?

Using TCP Traceroute
Using NetScanTools Pro, I typically use TCP traceroutes. In the Traceroute tool, click the Setup button and choose TCP (WinPcap). You can define the starting hop, timeout in milliseconds, and retries at this point, but I go directly down to the TCP Trace Specific area.

Here's how the TCP Traceroute works - NetScanTools sends out a series of TCP SYN (handshake) packets to the target. It increments the Time-to-Live (TTL) value in the IP header (just as an ICMP traceroute does) to locate routers along the path who respond with ICMP Time to Live Exceeded in Transit messages. When the hop count is high enough to allow the TCP SYN to make it to the target, that target MUST respond - hey those are the rules of TCP. The target must respond with either a TCP SYN/ACK (indicating the target port is open) or a RST (reset, indicating the target port is closed). In this case, we don't really care if the target port is open or closed - we're just trying to get the roundtrip time using traceroute.
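The TTL-stepping logic described above can be mocked up without raw sockets or admin rights. This is a toy simulation - the hop addresses are invented and no packets are sent - just to show why the target's must-respond rule is what terminates the trace:

```python
# Toy simulation of TCP traceroute's TTL-stepping logic.
# The router path below is invented; a real tool crafts raw TCP SYN
# packets with increasing IP TTL values and listens for the replies.
def tcp_traceroute(path, target, max_ttl=30):
    """path: ordered router addresses between us and the target.
    Routers answer an expired TTL with ICMP Time Exceeded; once the
    TTL is large enough to reach the target, TCP obliges it to answer
    the SYN with either a SYN/ACK (port open) or a RST (port closed)."""
    hops = []
    for ttl in range(1, max_ttl + 1):
        if ttl <= len(path):
            hops.append((ttl, path[ttl - 1], "ICMP TTL exceeded"))
        else:
            hops.append((ttl, target, "SYN/ACK or RST"))
            break  # target reached - trace complete
    return hops

route = tcp_traceroute(["", ""], "")
for ttl, host, reply in route:
    print(ttl, host, reply)
```

Either reply from the target gives us the roundtrip time we're after - open or closed port, we don't care.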

Firewalled/Blocked Targets
Now we know the specs for TCP say the target must respond... but what if it doesn't? What could have happened? Well... either your TCP SYN packet never made it there or the TCP SYN/ACK or RST never made it back. Make sure you run your TCP traceroute a few times to ensure sporadic packet loss isn't to blame. Most likely, a firewall or some other blocking device is in your way. You couldn't find the roundtrip time, but you did find a protected host.

FYI - NetScanTools Pro 2-for-1 Price
As you may know, NetScanTools is on my 'must have' list of tools for IT professionals. The new version (updated today) is available now, and there is also a 2-for-1 sale online through June 15, 2009.

Learn More
In the upcoming "Trace Back to a Suspect Host" course (June 4) I'll demonstrate each form of traceroute along with numerous other invasive/non-invasive techniques for testing connectivity, paths, identities and relationships of targets. Register online at