Anatomy of a Bug

A couple of weeks ago, I realized that The Colonel Sanders Conspiracy, one of the blogs I subscribe to via RSS, hadn't had any new articles in several weeks. I decided to visit the site in my browser, and was surprised to find several weeks’ worth of articles I had never seen. I sent a quick note to Joshua Hoover, the author of the blog. Over the course of several emails, we worked out the problem, and Josh was able to correct it. He kindly agreed to let me document the process. I’ve outlined the steps we followed below, as a case study in debugging. (Warning: a bit long.)

When I realized that the website had new articles that weren’t in the feed, my first thought was to check the RSS feed itself. I retrieved the feed URL I’m currently subscribed to from my aggregator (Bloglines) and fetched it directly with my browser. Notice that I made sure to use the same feed URL that was producing the bad results; clicking a feed link on the site might have pulled a different feed, for example if his feed had moved. The document retrieved was blank, and viewing source revealed an empty HTML document, consisting of only html and body tags. Only later did I figure out that Firebird displays this when viewing source for any empty document.

Having confirmed that the RSS feed I was using was empty, I then checked the RSS link on the site itself. The two URLs were the same. At this point, I felt it was time to submit a bug report. Using the feedback link on the site, I sent Josh a quick note explaining what I’d seen, and included the URL of the feed I was using… just in case he had a new feed URL he had forgotten to publish.

A little while later, I got an email from Josh – and it was a great example of how to debug. He noted that he had checked the feed, which seemed OK, and provided links to two feed validators (one of them Userland’s)… in fact, he sent direct links to validate his feed. Had I not seen an empty document, I probably would have run the feed through a validator myself. I should have done it anyway. Josh also asked what reader I was using. Asking the right questions is half the battle when debugging.

At this point, my spider-senses were tingling. Why would the validators give it a pass? I took another look, and it was still empty. At first I thought an empty document might be a “valid” feed, but then I got curious and tried pulling it from the command line via curl. I got the feed, not an empty document. Is that plot-thickening I smell?

So what did I know at this point? Both my browser (Firebird) and my aggregator (Bloglines) were not getting the feed. Two web-based validators and a command-line retrieval tool were getting it. This seemed to indicate that the webserver was sending back different responses based on some criteria. My first thought was browser sniffing: checking the HTTP ‘User-Agent’ header to determine the browser type. A great tool for investigating HTTP headers is Daniel Savard’s Live HTTP Headers plugin for Mozilla and Firebird. With the plugin running, I requested the RSS feed again and reviewed the request. Curl provides an option to fake the User-Agent, so I copied Firebird’s User-Agent string and tried again. I still got the full RSS document. I also tried faking the HTTP ‘Accept’ header, which allows a user agent to tell the server what document formats it can accept, by copying it from Firebird. Again, curl succeeded where Firebird failed.

I was running out of ideas, so I tried switching tools. I fired up Safari and retrieved the feed URL again. Safari didn’t give me an empty page… it gave me an error (and a request to submit a bug report):

Could not open the page “”
The error was: “bad server response”

This was interesting… if Mozilla had reported an error, I might have saved the time I spent assuming the document was empty. This is also a good debugging tip: don’t assume your tools are perfect. In fact, Firebird was one of the products showing the original buggy behavior; all the more reason to cross-check with another tool. Grasping for one more straw, I fired up NetNewsWire Lite, the aggregator I used before I switched to Bloglines. NNWL retrieved the feed without errors.

Stumped, I decided to drop back 10 and punt. I emailed Josh with everything I had found, and mentioned that I still felt the problem had something to do with HTTP headers. I hoped that something I had learned would trigger an idea for him. And it did. Collaboration is another key tool in debugging. At work, when I ask another developer to help me figure out a bug, more than half the time I figure it out just by trying to describe the issue out loud.

My mention of HTTP headers led Josh to suggest his use of gzip compression on his site as a possible culprit (gzip compression allows a webserver to compress everything it sends to a browser, to save bandwidth). In his own words:
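As a quick local illustration of what gzip actually does (no web server involved here; this is just the compression step itself):

```shell
# Round-trip a small payload through gzip; `gunzip -` reads from stdin.
printf 'hello, feed readers' | gzip | gunzip -

# On repetitive data the savings are dramatic: compare byte counts.
head -c 100000 /dev/zero | wc -c          # 100000 bytes raw
head -c 100000 /dev/zero | gzip | wc -c   # far fewer bytes compressed
```

HTML and XML compress well too, since markup is highly repetitive, which is why serving it gzip’d saves real bandwidth.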

Could my feed be doing something weird because I use GZIP compression across my site? I know some RSS reader apps on Windows can’t handle GZIP, but I can’t see why a browser couldn’t.

Bingo. When I read this, I was pretty sure Josh had solved the problem. I suspected that the problem was not that some programs (like my browser) were unable to handle gzip, but that they were able to handle it. Not all user agents can handle gzip, so webservers aren’t allowed to assume; they only use gzip if the user agent states up front (in the request) that it can take gzip. If Josh’s gzip were broken, only user agents that asked for gzip (like my browser) would see the problem.
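That negotiation rule can be sketched as a tiny shell function (a hypothetical handler for illustration, not Nucleus’s actual code): the server compresses only when the request advertised gzip support in its Accept-Encoding header.

```shell
# Hypothetical server-side handler: compress the body only if the
# client's Accept-Encoding header mentioned gzip.
respond() {
  accept_encoding="$1"
  body='<rss version="2.0"></rss>'
  case "$accept_encoding" in
    *gzip*) printf '%s' "$body" | gzip ;;  # client can decode gzip
    *)      printf '%s' "$body" ;;         # client cannot; send plain text
  esac
}

respond "gzip, deflate" | gunzip -   # a gzip-capable client decodes the body
respond ""                           # everyone else gets plain text
```

A bug like the one suspected here amounts to the first branch sending plain text while the response header still claims “Content-Encoding: gzip”, so only gzip-capable clients see garbage.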

But theories don’t fix bugs. It was time for some testing. I used curl to test various scenarios; the tests I tried are outlined below.

Requests the RSS feed, no compression. I saw the feed.
curl -i -H "Accept-Encoding: gzip"
Requests the RSS feed, asking for gzip. -i makes curl print the HTTP response headers. In the headers I saw “Content-Encoding: gzip”, but the feed itself was plain text. Aha! Just to be sure, I next tried a site I know supports gzip:
curl -i -H "Accept-Encoding: gzip"
Requests that site’s homepage, with gzip (if available), and prints the response headers. In the output, I saw the headers (including “Content-Encoding: gzip”), but the body was binary. This is what I’d expect gzip’d content to look like.
curl -H "Accept-Encoding: gzip" | gunzip -
Requests the same page, still with gzip, but omits the headers from the output (I only wanted the gzip’d content at this point), and pipes it into gunzip, which decompresses gzip content. The output was the HTML of the page I requested. So now I knew how to successfully retrieve and decode gzip’d content.
curl -H "Accept-Encoding: gzip" | gunzip -
Requests the problem feed, with gzip, no headers, and pumps it through gunzip. This should work if the feed is gzip’d; I had already confirmed that the server claimed it was, in the second test above, when I looked at the headers. The output was “gunzip: stdin: not in gzip format”. Looks like we have a smoking gun: when gzip was requested by the user agent, the feed’s response header indicated the content was gzip’d, but it wasn’t… it was really plain text. Just to be thorough, I checked Josh’s homepage next.
curl -i -H "Accept-Encoding: gzip"
Just like the test against the known-good site, the header indicated “Content-Encoding: gzip” and the content was binary. Good. So the final test:
curl -H "Accept-Encoding: gzip" | gunzip -
Worked fine… Josh’s home page appeared to be correctly gzip’d. It was only the RSS feed that failed. I also did some checking with ‘deflate’ encoding (an alternative to gzip) and got the same results: the feed’s headers indicated deflate encoding, but the content was plain text (uncompressed).
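The smoking-gun signature above is easy to reproduce locally, with no server involved at all: gunzip happily decodes real gzip data, but rejects plain text that merely claims to be gzip’d.

```shell
# Genuine gzip data decodes cleanly:
printf '<rss version="2.0"></rss>' | gzip | gunzip -

# Plain text mislabeled as gzip fails just like the feed did
# (gunzip reports a "not in gzip format" error and exits nonzero):
printf '<rss version="2.0"></rss>' | gunzip - || echo "decode failed"
```

This is exactly the pattern the tests above exposed: a “Content-Encoding: gzip” header paired with an uncompressed body.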

At this point, I had found the bug, but it wasn’t fixed. I sent my findings to Josh (just once I wish one of my users (at work; not you, dear reader) would send me this kind of bug report!). Josh took the correct next step – research. He checked the website for the software he uses (Nucleus), and found that the gzip plugin had recently been updated, with the following changelog info:

NP_Gzip v2.0 (December 28, 2003)
* RSS feeds were broken when this plugin was installed. This was fixed by no longer compressing those.

He installed the updated plugin, and now everything works.

Working on this was a lot of fun for me… which isn’t always the case with debugging. Collaborating via email required me to document a process I normally wouldn’t; this got me thinking about the value of such an endeavor. Hopefully it will be valuable for others as well. As always, comments welcome. And thanks again to Joshua Hoover for allowing me to publish all of this.
