Ticket #5955 (new defect)
Please test olpc-update from ship.1 to update.1 on many different XO/net configurations.
Reported by: gnu            Owned by: jg
Priority: blocker           Milestone: 8.2.0 (was Update.2)
Keywords: release?          Cc: cjb, cscott, mstone
Deployments affected:       Blocked By:
We have various users who will be doing updates over dialup ISPs rather than fast connections. (rt# 2962 is one.)
olpc-update has had various problems in the past with resuming interrupted transfers and the like. Let's make sure that our least connected users have the best possible experience when updating, by testing in that environment. Then, once that works well, let's try again, hanging up the phone a few times in the middle, and make sure that olpc-update recovers without having to start all over again.
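Resuming after a hang-up mostly comes down to asking the server for only the missing bytes. Below is a hedged sketch of that bookkeeping, assuming an HTTP transport that honors Range requests; olpc-update's actual transport may work differently, and the function names are illustrative, not anything in the shipped script.

```python
import os

def resume_offset(partial_path):
    """Byte offset to resume from: the size of the partial download on
    disk, or 0 if nothing has been fetched yet."""
    try:
        return os.path.getsize(partial_path)
    except OSError:
        return 0

def range_header(offset):
    """HTTP Range header requesting everything from `offset` onward.
    Empty dict means a fresh full download."""
    return {"Range": "bytes=%d-" % offset} if offset else {}
```

A download loop would compute the offset each time the connection is re-established, send the Range header, and append the response body to the partial file, so a dropped line costs only the bytes in flight.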
On machines where the datastore bugs #5719 and #5744 have left 100 MB+ of useless files sitting around in flash, does olpc-update identify and remove those before attempting to add a large amount of data to the file system? Does update.1's datastore remove such useless files the first time it runs? What cleans up after this bug?
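One way to answer the cleanup question is a pre-flight scan for the oversized leftovers those bugs create. A sketch, assuming the datastore lives under a directory we can walk; the function name and the 100 MB threshold are illustrative, not olpc-update's actual behavior.

```python
import os

def find_large_files(root, threshold=100 * 1024 * 1024):
    """Yield (path, size) for every regular file under `root` that is at
    least `threshold` bytes, as candidates for cleanup before an update."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                size = os.path.getsize(path)
            except OSError:
                continue  # file vanished or unreadable; skip it
            if size >= threshold:
                yield path, size
```

Whether olpc-update should delete matches automatically or just report them is a policy question; the scan itself is cheap enough to run before checking free space.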
Does the olpc-update script update the copy of olpc-update first, then run the new one to upgrade the rest of the system? This would let us patch up any problems in the copy of olpc-update that's in the old release. (Of course, this would also require that the new olpc-update be able to run successfully in *any* old release, which may be an even worse constraint. An alternative would be for olpc-update to look for a script that's specific to the original and desired releases, downloading and running that if it exists.)
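The "update olpc-update first" idea is a two-phase hand-off: the old tool fetches the new copy, marks the environment, and re-execs the new copy with the original arguments; the new copy sees the marker and performs the real system update. A sketch of the gating logic, with the environment variable and path purely hypothetical:

```python
import sys

# Hypothetical marker set before re-exec, so the new copy of the tool
# knows the hand-off has already happened and does the real update.
BOOTSTRAP_VAR = "OLPC_UPDATE_BOOTSTRAPPED"

def needs_bootstrap(environ):
    """True when this process is still the old tool and should fetch the
    new olpc-update and hand control to it before touching the system."""
    return environ.get(BOOTSTRAP_VAR) != "1"

def bootstrap_argv(new_tool_path, argv):
    """Argument vector for re-executing the freshly downloaded tool with
    the user's original command-line arguments preserved."""
    return [sys.executable, new_tool_path] + list(argv)
```

Phase 1 would then call `os.execve(sys.executable, bootstrap_argv(...), env_with_marker)`; because exec replaces the process, the old code never runs again after the hand-off.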
Another way around this, if e.g. we need to ship a revised olpc-update that's twice as fast over dialup, would be to make a tiny update from ship.2 that changes only olpc-update. People could upgrade to that, THEN to the big upgrade.
Are there large pieces of the Library, translation files, or other parts of the system included in the update that haven't actually changed? Eliminating them could significantly decrease the bandwidth required for the update. Now that update.1 is stabilizing, we can look at its size and which pieces contribute to it.
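Finding what dominates the image size can start with a per-top-level-directory byte count over the build tree; `du -s */` from a shell gives roughly the same answer. A sketch (illustrative only):

```python
import os

def size_by_top_dir(root):
    """Total file bytes under each top-level directory of `root`, so the
    biggest contributors to the image (Library, locale data, ...) stand
    out. Files directly in `root` are totaled under the key ".".
    """
    totals = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        rel = os.path.relpath(dirpath, root)
        top = "." if rel == "." else rel.split(os.sep, 1)[0]
        for name in filenames:
            try:
                size = os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                continue
            totals[top] = totals.get(top, 0) + size
    return totals
```

Sorting the result by size, run once against ship.1 and once against update.1, would show which components are both large and unchanged, and therefore worth excluding from the transfer.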
Now that there are tens of thousands of G1G1 machines in the field, does OLPC have the bandwidth to serve update.1 to all of them on the same day? Even if we try to dribble out the notice that the update is available, press sites will report it and everyone will eventually pile on. If not, do our servers degrade gracefully (ignoring incoming TCP connections when overloaded), or does everybody end up getting 1-bit-per-second service (or does the server crash)?

(The Internet Archive discovered that, when their fiber was full, limiting their outgoing bandwidth by delaying the TCP open handshake gave much better service than letting the handshake complete and then dropping packets. The unanswered TCP open would usually complete on a retransmission a few seconds later, after a few competing users had finished their transfers; the download would then proceed at full speed. Dropped packets, on the other hand, would cause the server's TCP to halve its offered download bandwidth, or worse, resulting in very slow transfers for everyone. I'm sure you could borrow their load-management software if you wanted.)
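The Archive's trick amounts to admission control at accept() time: when the server is at capacity it simply stops calling accept(), so new SYNs sit in the kernel's listen backlog and the client's retransmitted handshake succeeds once load drops, instead of an accepted connection starving mid-transfer. A sketch of the bookkeeping such a server loop would consult before accepting (the class name and cap are illustrative, not anyone's actual load-management code):

```python
class AdmissionGate:
    """Track active transfers. While `accepting` is False, the serving
    loop skips accept(); pending connections wait in the kernel's listen
    backlog, and clients' TCP retransmits complete the handshake later,
    after load drops, rather than getting trickle-speed service now."""

    def __init__(self, max_active):
        self.max_active = max_active
        self.active = 0

    @property
    def accepting(self):
        return self.active < self.max_active

    def on_open(self):
        """Call when a connection is accepted."""
        self.active += 1

    def on_close(self):
        """Call when a transfer finishes; frees a slot for the backlog."""
        self.active -= 1
```

In an event loop this would typically be implemented by deregistering the listening socket from the poller at capacity and re-registering it when a transfer completes.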