Opened 6 years ago

Closed 6 years ago

Last modified 20 months ago

#10722 closed defect (fixed)

wiki cache invalidation headaches

Reported by: jvonau Owned by: cjb
Priority: normal Milestone:
Component: website Version:
Keywords: Cc:
Blocked By: Blocking:
Deployments affected: Action Needed: never set
Verified: no


I can see the updated webpage in Firefox, but os-builder returns a stale listing of activities when parsing the group URL. I'm seeing the same issue using software-update on an XO, going to the same group URL. Where would be the correct place to force a proxy reload with urllib2?


Attachments (3)

webdump.txt.tar.gz (12.9 KB) - added by jvonau 6 years ago.
webdump2.txt.tar.gz (12.8 KB) - added by jvonau 6 years ago.
webdump2.txt (204 bytes) - added by dsd 6 years ago.
manual cache-purging program


Change History (15)

comment:1 Changed 6 years ago by cjb

  • Status changed from new to assigned

This doesn't have anything to do with bitfrost, but I've seen it before too. I'm pretty much out of ideas on fixing it; it'd be great if you could try to debug further. Perhaps our wiki setup is performing inappropriate caching.

comment:2 Changed 6 years ago by jvonau

parse_url(grpurl) pulls in bitfrost; that's where things go wrong. The next block:

print >>sys.stderr, "Found activity group:", name
for name, info in results.items():
    (version, url) = only_best_update(info)
    print >>sys.stderr, "Examining", name, "v%s" % version
    fd = urllib2.urlopen(url)

I can see "version" is already different from what firefox displays for the same webpage. Is there a way to force an update to the web-cache if you make an edit to the wiki page? If this is server side, think we would need to see what urllib2 is passing to the webserver verses what firefox does for the same webpage.

comment:3 Changed 6 years ago by jvonau

I made a quick dump with:

import urllib2
f = urllib2.urlopen('')

The returned info doesn't contain the latest wiki edit: what is returned is an older oldid=253349, which is two edits older than the webpage.

Changed 6 years ago by jvonau


comment:4 Changed 6 years ago by jvonau

Using this code I can retrieve the latest revisions:

import urllib
import urllib2

url = ''
user_agent = 'Mozilla/5.0 (X11; U; Linux i686) Gecko/20071127 Firefox/'
values = {'name': 'osbuilder',
          'location': 'anywhere',
          'language': 'Python'}
data = urllib.urlencode(values)
opener = urllib2.build_opener()
opener.addheaders = [('User-Agent', user_agent)]
response = opener.open(url, data)

Changed 6 years ago by jvonau


comment:5 Changed 6 years ago by jvonau

Oddly enough, the first round of code works now too, along with os-builder. Did you give anything a poke out that way?

comment:6 Changed 6 years ago by cjb

Nope, didn't touch it. :/

comment:7 Changed 6 years ago by jvonau

I think it was me... The truth is, on the first run I had

opener.addheaders = [('Cache-Control', 'no-cache')]

in the mix too, so I'm not sure what flushed the pipe. :/ I think the no-cache header forced my ISP to refresh its cache, and then on the second run it just worked. If this occurs again (history suggests it will, with the next edit on the wiki), I'll have a chance to repeat this test. I'll just let os-builder work for now.
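For reference, the request header that asks intermediate caches to revalidate is Cache-Control: no-cache (with Pragma: no-cache as a fallback for older HTTP/1.0 proxies). A minimal sketch of setting it per-request in Python 3's urllib.request (the successor of urllib2), using a placeholder URL:

```python
import urllib.request

# Placeholder URL; substitute the real wiki group page.
req = urllib.request.Request("http://wiki.example.org/go/Activities/G1G1")
req.add_header("Cache-Control", "no-cache")
req.add_header("Pragma", "no-cache")  # HTTP/1.0 proxies ignore Cache-Control

# urllib normalizes header names to capitalized form internally.
print(req.get_header("Cache-control"))
# Opening the request would then ask shared caches to revalidate:
# response = urllib.request.urlopen(req)
```

Setting the header on a single Request, rather than globally via opener.addheaders, keeps cache bypass limited to the fetches that actually need fresh data.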

comment:8 Changed 6 years ago by dsd

  • Resolution set to worksforme
  • Status changed from assigned to closed

So you think this was an ISP issue? Please reopen this if there's something we can do on the osbuilder side, or if something can convince us that it's a wiki problem.

comment:9 Changed 6 years ago by dsd

  • Component changed from build-system to website
  • Milestone changed from Not Triaged to 11.2.0-M4
  • Resolution worksforme deleted
  • Status changed from closed to reopened
  • Summary changed from os-builder via bitfrost returns stale data to wiki cache invalidation headaches
  • Version 1.5/1.0 Software Build os860 aka 10.1.3 deleted

Michael, Chris, Scott and I chatted about this on IRC yesterday because I got hit by the same problem (yep, it's annoying!).

Key points:

  • The wiki runs behind a squid proxy
  • When a page gets edited, MediaWiki asks squid to invalidate its caches of that page
  • squid effectively maintains per-user or per-session caches (separating sessions through differences in HTTP headers). This is necessary because everyone sees a slightly different version of the page (e.g. mine says I'm logged in as DanielDrake; nobody else should be served that cached version)
    • This explains the case above where firefox shows a different result from olpc-os-builder
  • MediaWiki's cache invalidation is limited. If the Activities/11.2 page includes the ClockActivity page, then when you edit ClockActivity only the ClockActivity cache is purged; the 11.2 caches remain in use (and are now stale)
    • This is a well-known limitation, unlikely to get fixed easily
    • This explains the original problem seen
  • It's not limited to urllib/osbuilder/sugar-update-control; I saw it a few times while using Firefox to create the 11.2 activities page last week.


Workarounds considered:

  • Wait patiently a number of hours for caches to expire
  • After updating ClockActivity, make a trivial edit to pages such as 11.2 that include it so that caches get invalidated
    • This worked for me once but failed the second time: my browser showed the new version, but olpc-os-builder continued receiving the old one
  • Michael says that using action=purge (e.g. visiting ) should cause squid to drop caches
    • I tried this; it didn't work, the software updater still got old content
  • Include "no-cache" headers in the HTTP request
    • This worked in Firefox (Ctrl+F5) when I saw the problem
    • It also solves the problem for olpc-os-builder if you run the attached python utility first
    • I don't want to make this change in bitfrost itself, because in other cases (particularly in deployment contexts), use of caches would be desirable
  • Modify the squid configuration to skip the cache for urllib clients, e.g. "cache deny browser <regex>"
    • This is the approach I'm going to pursue next.
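A sketch of what that last squid directive might look like. The "browser" ACL type and the "cache deny" directive are real squid syntax for matching on the User-Agent header; the regex here is an assumption based on urllib's default Python-urllib User-Agent:

```
# match clients whose User-Agent starts with Python-urllib
acl urllib_clients browser -i ^Python-urllib
# never serve these clients from (or store their responses in) the cache
cache deny urllib_clients
```

This keeps browser traffic cached as before while exempting only the automated urllib clients that need fresh data.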

Changed 6 years ago by dsd

manual cache-purging program

comment:10 Changed 6 years ago by sridhar

Using action=purge worked for me.

comment:11 Changed 6 years ago by dsd

  • Resolution set to fixed
  • Status changed from reopened to closed

Yesterday we implemented the squid tweak that makes it skip the cache for urllib clients, and it seems to be working. Let's leave it like this and keep an eye on it.

(If you tested around the time you wrote that comment, the config would already have been in place, explaining the discrepancy between your experience and mine: you didn't need to do anything, the cache was already disabled.)

comment:12 Changed 20 months ago by Quozl

  • Milestone 11.2.0-M4 deleted


Note: See TracTickets for help on using tickets.