Yeah, here we go again: it’s time to talk about feed readers. I’ll summarize what’s been happening with the tests which have been active in the past 7 days. There are many more which started and then ended along the way, and those aren’t included here.
Side note: I’m about to roll the hostname like I did last year, so if you’re participating and want to carry on, go back to the URL from the welcome mail once the current hostname stops resolving in DNS. Then delete the current testing feed and add the new one.
I do this because a lot of these things seem to be forgotten over time, and they’re just piling up in my database for nobody’s benefit. Dropping the DNS leaves the jank entirely on *their* side of the Internet. I mean, if you run a test of a reader for hundreds of days and never improve anything, what’s the point?
One thing I wanted to note before I get to the list: there are a couple of readers which have apparently added support for the Cache-Control header, and specifically the “max-age=nnnnn” part of it. The test feed sends that out, and I change the values sometimes to see which readers speed up and slow down accordingly. To the authors of those projects: I see you, and I appreciate your work! I’d put a gold star on your laptop if I could. (Not all of them are in this report since they have finished testing and are no longer reporting in.)
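If you’re wondering what “honoring max-age” looks like in practice, here’s a rough sketch (Python with requests; names like fetch_feed and DEFAULT_INTERVAL are made up for illustration, not lifted from any of these projects):

    import re
    import time
    import requests

    DEFAULT_INTERVAL = 3600  # fall back to hourly polling if the server says nothing
    MIN_INTERVAL = 60        # never hammer the server, even if max-age is tiny

    def fetch_feed(url):
        resp = requests.get(url, timeout=30)
        interval = DEFAULT_INTERVAL
        # Cache-Control: max-age=nnnnn says how long the response stays fresh;
        # a polite reader uses that to decide when to come back.
        m = re.search(r"max-age=(\d+)", resp.headers.get("Cache-Control", ""))
        if m:
            interval = max(int(m.group(1)), MIN_INTERVAL)
        next_poll = time.time() + interval
        return resp, next_poll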
So then, let’s talk turkey here. I’m grouping the results by client just for simplicity. Remember that means it includes vastly different config options used by different people, different versions, different upgrade cadences, and (sigh), yes, different numbers of people clicking reload.
Audrey2: 402 days, no complaints. Pretty sure this one honors Cache-Control, so thanks for that!
Miniflux: about 10 instances. My reports for them mostly go something like “400 days, no complaints”, “402 days, three too-short polls”, “185 days, one too-short poll”, “one <1s double-poll after 400 days of otherwise perfect behavior”.
It’s just that chill in Miniflux world.
Otocyon: 365 days on the nose, and not so much as a hiccup from the single instance that’s reporting in. This was one of the “unpublished, please don’t shame me yet” agents that I only mentioned anonymously in prior reports.
NetNewsWire: about 5 of these instances, and they’re all inspiring. They all show a marked change in behavior once they upgrade past a certain point. NNW itself added some cheeky code to send version numbers to a couple of sites and I’m one of them (yes, I see what you did there). Anyway, I can now see that people are in fact upgrading, and the vastly improved behavior speaks to the work that was clearly done behind the scenes.
The NNW per-instance log tables all show the same thing: a bunch of red cells for this problem or that problem, then that instance upgrades and *poof*, gone, and everything’s clear and happy. It’s like one of those commercials for eye drops from the 80s.
Vienna: a couple of instances, both active 350+ days. One had no complaints and the other has a handful of short polls. By that, I mean “less than an hour between requests”. I assume this is from someone clicking refresh. One “refreshing” note is that they’re still making conditional requests when this happens, so they aren’t wasting much bandwidth.
FreshRSS: a couple of instances for this one, too. One has some short polls and unconditional requests over 206 days. The other one got past some kind of weird buggy spot that was in 1.25.0, and has been smooth sailing ever since (nearly a year now).
newsgoat: just one of these, and it was wobbly and too-quick at the beginning but then settled down. I would like to see how it does now. This is another reason I roll the DNS entries and reset the data: I want to see how things are working with the latest code, and leave the older stuff behind.
CommaFeed: just one of these, and it had some kind of caching issue that disappeared shortly before it flipped to version 5.7.0 in April 2025, and now it’s fine. This is another one that would probably benefit from the upcoming fresh start.
Feedbin: did a double-tap startup, where it polls (unconditionally) twice in quick succession (< 1 second apart) when the feed was added. Two instances did this, but this was back in January 2025. This is another spot where I’d like to see how it behaves now.
Rapids: 354 days, no complaints.
NextCloud-News: this one had some weird If-Modified-Since values and double-tapping at startup, but that was January 2025. (Yes, this is the reader that once sent the infamous “1800” IMS value). It’s come a long way since then and I’d like to see a fresh start from it to appreciate the work that’s been done to it.
Bloggulus: 374 days, one slightly too quick poll. Hard to complain about that.
MANOS: 391 days, and nothing to complain about. I do hear Torgo’s haunting theme music every time I write about it, though.
unnamed-feed-reader: technically, that’s a name. 402 days, no complaints.
Something that doesn’t even send a user-agent header: yeah, that’s not cool. Send SOMETHING. Come on. 399 days of just doing that but otherwise getting everything else correct, somehow. One nit: it sends “” as the If-None-Match at startup instead of just not sending the header at all. (Sounds like the usual “null is not zero is not the empty string is not the lack of a header is not …” type thing.)
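For the curious, avoiding that particular nit is just a matter of not emitting the header when there’s nothing to put in it. Another rough sketch (Python/requests again; the names are illustrative):

    import requests

    def build_headers(saved_etag, saved_last_modified):
        headers = {"User-Agent": "example-reader/1.0 (+https://example.com/)"}
        # An empty string is not "no header": if there's no saved validator,
        # leave If-None-Match / If-Modified-Since out entirely.
        if saved_etag:
            headers["If-None-Match"] = saved_etag
        if saved_last_modified:
            headers["If-Modified-Since"] = saved_last_modified
        return headers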
Some unidentified Firefox extension: it looks like it’s still doing the 2000-01-01 If-Modified-Since once in a great great while (like, once), and it did a super quick poll (two in under a second) once. Otherwise it’s been pretty quiet over the past 401 days.
There’s also a Thunderbird instance which does the same 2000-01-01 startup but otherwise just quietly does its thing.
NewsBlur: double-tapped at startup (March 2025), and then sent lots of nutty out of sequence If-Modified-Since values. By nutty, I mean “went to sleep for 303 days, then came back, and started sending the *previous IMS value*”. How do you do that when the server is hitting you in the face with a new value every time you poll?
Zufeed: unconditional double-tap at startup. Might be fixed. Need to see a fresh startup to be sure.
Roy and Rianne’s … etc: handful of unconditional requests over the past couple of months.
walrss: same deal: a couple of unconditionals in ~400 days.
Yarr: multiple too-fast unconditionals at startup (January 2025), and then a bunch more after that.
SpaceCowboys … etc: one instance of this program, and it did something bizarre where an ancient ETag value popped back up something like three months after it stopped being served to clients. Also sends some unconditional requests and a few too-fast polls.
feed2exec: poll frequencies are all over the place, and it has the usual 59 minute vs. 60 minute fenceposting thing. The version number is static throughout so it’s not clear if it improved during these past 340 days.
Russet: a bunch of unconditional requests and the 59/60 minute thing, like others.
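Since the 59-vs-60 minute thing keeps coming up: my guess is it usually comes from measuring the gap from the wrong end of the previous request, or sleeping exactly the nominal interval and losing a few seconds to truncation, so the server sees polls landing just under the hour. Here’s a rough sketch of one way to stay on the right side of the fencepost (Python; INTERVAL, MARGIN and fetch are illustrative):

    import time

    INTERVAL = 3600   # intended seconds between polls
    MARGIN = 30       # small cushion so jitter/truncation can't undercut it

    def poll_loop(fetch):
        while True:
            started = time.monotonic()
            fetch()
            # Schedule relative to the *start* of this request, plus a margin,
            # so two consecutive requests are never less than an hour apart.
            elapsed = time.monotonic() - started
            time.sleep(max(INTERVAL + MARGIN - elapsed, 0))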
There are a few others which are active but which have had multiple user-agents prodding them and so polluted the data. The interval checking and IMS/INM comparisons mean nothing when multiple programs are involved, so I have to ignore those as corrupted.
And finally…
Free Reader: 100% unconditionals. Why?
QuiteRSS (I think - it lies in its User-Agent, which itself is evil): 100% unconditionals. Also, why?
Inoreader: one instance sends unconditional requests basically every other poll. Awful. The other instance sends a bunch of unconditionals, *and* it polls too quickly, including sub-second repeat polls at times. WTF?
inforss: something like 6% unconditionals out of > 2000 requests in ~400 days. I don’t get it. The 59m vs. 60m poll-timing fencepost thing it also does is minor by comparison.
feedparser: weird timing, also has problems trying to hit the 60 minute mark and instead comes a bit too early, like some others. Also frequently calls back far too quickly.
Newsboat: ETag caching is still very broken, and it will get into this pathological case where it keeps sending old values even though the server is hitting it over the head with a fresh one every single time. This means it effectively latches into 100% unconditionals, and that’s terrible. This keeps happening despite the version number changing, and it’s affecting both instances which are reporting in.
BazQux: hundreds of out-of-sequence IMS and INM values stemming from something very very wrong with their caching implementation, resulting in 100% unconditional request generation once it latches into that state. A lot like Newsboat in that respect.
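For contrast, here’s roughly what sane validator bookkeeping looks like: whatever ETag / Last-Modified the latest response carried replaces the stored values, and those stored values go into the next request. (Python/requests sketch; the state dict and names are illustrative, not how any particular reader actually stores things.)

    import requests

    def poll(url, state):
        headers = {"User-Agent": "example-reader/1.0"}
        if state.get("etag"):
            headers["If-None-Match"] = state["etag"]
        if state.get("last_modified"):
            headers["If-Modified-Since"] = state["last_modified"]

        resp = requests.get(url, headers=headers, timeout=30)

        if resp.status_code in (200, 304):
            # Always adopt the freshest validators the server handed back,
            # even on a 304; never keep replaying an ancient ETag forever.
            if resp.headers.get("ETag"):
                state["etag"] = resp.headers["ETag"]
            if resp.headers.get("Last-Modified"):
                state["last_modified"] = resp.headers["Last-Modified"]
        return resp, state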