On Page Speed Service
I don’t usually bother with meta-posts (because if I did, most of this blog’s content would be about how I’d made some trivial change to something pointless). However, this is slightly more interesting than the usual: I’ve switched this site to Google’s Page Speed Service, currently being offered on a free trial basis (note: “pricing will be made available later”, so it won’t be free forever; signup here).
In a nutshell, this means that www.farside.org.uk now resolves to a proxy inside Google’s CDN rather than my (somewhat less reliable) Apache server hanging off a slow pipe to the internets. This proxy acts as a regular caching HTTP/1.1 proxy, fetching and forwarding content held on my server, and also transcodes the output using something similar to mod_pagespeed, reducing the latency of requests even further (see the list of rewriters for more details).
The obvious question is: why bother? (after all, it’s not like this blog gets a ton of hits — especially with the two-year gap between this post and the last one). Well, mostly because it makes things more reliable (and also a little faster). Most of my output (such as it is) is on my Google+ profile now, but this site is still useful (for me, and hopefully for others) as a way to record things so that I can find them later, or just to write down original research. Moving into Google’s CDN means that not only do people geographically distant from my server get faster service, but also that nobody needs to be bothered about what happens if my server (or connection) goes down for a short while. And that’s good for everyone, me included.
How about the caveats (other than the obvious one: that it’s not in open signup mode)? Sure, there are quite a few limitations documented in the Page Speed Service FAQ. The most notable of those is that because it relies on a CNAME DNS record, you can’t use it for bare domains (explanation here). Also, you have to be able to create a TXT record on the domain for verification (in my case, in addition to the A record I’d already added for Google Apps domain verification, and the HTML file I’d already added for… some other Google property).
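Once the records are in place, you can sanity-check them with dig. (The CNAME target below is a placeholder — Google assigns the real one at signup — and the TXT value is elided.)

$ dig +short www.farside.org.uk CNAME
<target-assigned-by-google>.
$ dig +short farside.org.uk TXT
"google-site-verification=..."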
Otherwise, first off, it’s a caching HTTP proxy. That may be notable if you’re not used to having a proxy in the request path: it means that you can’t force your way past the declared cache lifetime. Roughly speaking, if the caching headers at the source say that a certain resource is cacheable without revalidation for N minutes, the proxy is allowed to keep serving that cached version during that period, whatever type of request you make. This can be a pain when you try to make changes to a stylesheet (for example) and discover that some fool (that would be me) has set the stylesheets to be cacheable for a week, because, no matter what you do, the Google proxies will send you the old version.
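You can check what lifetime you’ve promised by looking at the headers your server sends; for instance (the stylesheet name here is just an example, and 604800 seconds is the week I’d so cleverly configured):

$ curl -sI http://www.farside.org.uk/style.css | grep -i '^cache-control'
Cache-Control: max-age=604800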
However, this is also nothing special: HTTP’s caching headers describe exactly what behaviour User-Agents and proxies may follow; the difference is that — with the exception of mobile carrier proxies — it’s just not that common for there to be a proxy between me and my website.
Secondly, there are some visible changes to the default caching behaviour to be aware of. For example, it looks like some resources — scripts, for example — are cached by default for five minutes, if the reference server doesn’t provide any caching headers.
Finally, the proxy will probably behave differently at the HTTP level to your current HTTP server. For example, I noticed that HEAD requests no longer reliably indicate the Content-Length or Transfer-Encoding that will be sent on a GET — and also that the proxy uses a different set of rules to my local Apache in deciding whether a resource can be sent compressed (in particular, while Apache appears happy to deliver compressed content to anyone, Google’s proxy will only do so for a subset of User-Agents).
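Both behaviours are easy to poke at with curl — compare what HEAD claims against what a GET actually delivers, then repeat the GET with a different User-Agent and watch whether Content-Encoding appears (the responses you get back are, of course, the point of the experiment):

$ curl -s -I -H 'Accept-Encoding: gzip' http://www.farside.org.uk/
$ curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip' http://www.farside.org.uk/
$ curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip' -A 'Mozilla/5.0' http://www.farside.org.uk/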
None of this seems like a big deal. The best part, though? If something goes
wrong (or if it turns out to be too expensive later), I haven’t lost anything:
I just switch the CNAME back so that
www.farside.org.uk points directly to my
server again, and I’ll be back where I started.
Posted at 21:18:15 BST on 31st July 2011.
Making Eclipse show Android’s source
The Eclipse support for Android development is pretty good, but one slight annoyance is that the source for the Android SDK classes isn’t available by default.
Fortunately, there’s a fairly straightforward process to fix this.
First, get hold of the source JAR for the SDK. What’s that? We don’t appear to make a source JAR available? That’s annoying.
Well, hopefully that will be fixed at some point. In the meantime, you can either download one that someone’s prepared (that’s apparently for something approximating 1.5r2 — but note that I haven’t checked how correct it is!), or you can produce your own from the git repository, from a tag of your choice.
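If you go the build-it-yourself route, the recipe looks roughly like this — a sketch only: the repository URL and tag name are from memory, so check them before relying on them, and note that classes outside frameworks/base (java.*, for instance) live in other repositories:

$ git clone git://android.git.kernel.org/platform/frameworks/base.git
$ cd base
$ git checkout android-1.5r2   # or whichever tag you want
$ cd core/java
$ zip -r ~/sources.jar android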
Once you have a source JAR, you’d normally expect to just attach it directly to the library in Eclipse, but that doesn’t work with the Android SDK: Eclipse says “The JAR of this class file belongs to container ‘Android 1.5’ which does not allow modifications to source attachments on its entries.”
Which is a roundabout way of saying that the source path is fixed. If you open up the Eclipse project properties dialog, change to the Java Build Path page and Libraries tab, then expand the ‘Android 1.5’ library container and then the android.jar file (phew!), you’ll see the ‘Source attachment’ option, which shows where the source is expected to be.
[Screenshot: the Java Build Path dialog]
For the Android 1.5 SDK, this is <SDK location>/platforms/android-1.5/sources (and presumably similarly for the Android 1.1 target), where <SDK location> is the path set in the workspace preferences’ Android page. Note that the 1.0 SDK (which only supported the 1.0 target, of course) just appends the string sources to the <SDK location>, on the assumption that the SDK location ended with a trailing slash. (This should only be something to look out for if you need to target 1.0; I’m pretty sure it was fixed in the plugin provided with the 1.1 SDK.)
There are two ways to put the source JAR into the right place. The conventional way is to create a directory called sources inside platforms/android-1.5/ (or 1.1), and then unzip the JAR (which is just a ZIP file) into that directory. However, although sources doesn’t have an extension, Eclipse doesn’t actually require it to be a directory, so a simpler way is just to rename your source JAR so that it’s called sources, and move it into the relevant directory.
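In shell terms, with $SDK standing in for your <SDK location>, the two options look like this:

# option 1: the conventional way — unzip into a sources directory
$ mkdir -p "$SDK/platforms/android-1.5/sources"
$ unzip sources.jar -d "$SDK/platforms/android-1.5/sources"

# option 2: the lazy way — the 'sources' entry can just be the JAR itself
$ cp sources.jar "$SDK/platforms/android-1.5/sources"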
Once you’ve moved the file or files into the right places, you just need to get Eclipse to take note of it. I found that just restarting Eclipse was enough, but if that doesn’t work you could always try refreshing the project, or closing and re-opening the project via the context menus.
Posted at 18:25:05 BST on 19th June 2009.
Troubleshooting Android’s ‘adb devices’
I’ve been playing around a bit with Android development lately, and — for the
second time — spent a while trying to work out why
adb devices wasn’t
showing me anything:
$ adb devices
List of devices attached
$
Thanks, adb. I’m pretty sure I’ve got a Dream plugged in there, you know. I
did get it working in the end, so I thought I’d write up some of the
troubleshooting steps so that I’ll have something to refer to the next time I
run into this problem.
First off, it’s worth mentioning that if you’re on Windows, you apparently need to install some USB drivers first (and it appears that the 32-bit and 64-bit versions aren’t compatible, so you need to pick the right version too). I’m not using Windows, though, so I don’t know a whole lot about this step.
However, life’s not all rosy on Linux: the adb command scans /proc/bus/usb/ (as provided by usbdevfs) or /dev/bus/usb/ (ditto, sysfs). This is absolutely the right thing to do (lsusb works the same way), but it means that there’s another step (udev) between the device detection and being able to use the device.
Evidently some of the default udev rules (possibly only on some
distributions; in particular, on Ubuntu) create device nodes that aren’t
world-readable, meaning that the device node is created, but adb can’t read it.
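You can check the permissions directly: lsusb gives you the bus and device numbers, which map onto a path under /dev/bus/usb/. (The numbers, timestamps, and permissions below are illustrative, not a captured session.)

$ lsusb | grep 0bb4
Bus 001 Device 007: ID 0bb4:0c02 High Tech Computer Corp.
$ ls -l /dev/bus/usb/001/007
crw-rw---- 1 root root 189, 6 2009-06-17 13:50 /dev/bus/usb/001/007

If the node looks like that — no read permission for ordinary users — then adb running as you won’t be able to open it.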
The easiest way to tell whether you’re having this problem is to kill the adb daemon and restart it as root:
$ adb kill-server
$ sudo adb start-server
* daemon not running. starting it now *
* daemon started successfully *
$ adb devices
(The adb daemon appears in
ps as “
adb fork-server server”, by the way.)
I’ve also seen suggestions that you should be able to run
sudo adb devices to
start the server as root, but when I tried that I ended up with a daemon
running as myself again.
If this is your problem, the fix is mentioned on the
setup page I mentioned previously: you create a file called
/etc/udev/rules.d/51-android.rules that contains rules telling
udev to make the device node world-writable when a matching device is found.
The example rule provided in the Developer Guide matches any HTC devices, which
might be a bit wide-ranging: you could presumably restrict the match to just
the device’s id. (The HTC Dream and Magic share the same device id,
0bb4:0c02, or, strangely,
0bb4:0c01 when booting into HBOOT/fastboot mode.)
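For reference, a rule along these lines should do it — a sketch only, since the matching syntax varies with udev version (older releases spell ATTR{} as SYSFS{}), narrowed here to the Dream/Magic device id mentioned above:

# /etc/udev/rules.d/51-android.rules (sketch)
# match the HTC Dream/Magic and make the device node world-read/writable
SUBSYSTEM=="usb", ATTR{idVendor}=="0bb4", ATTR{idProduct}=="0c02", MODE="0666"

After adding the rule, unplug and re-plug the device so that the node is recreated, then restart the adb server.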
Finally, there’s one very important thing that I’d completely forgotten about:
if the device appears in
lsusb output but
adb devices still shows nothing,
check that the phone is set up to allow debugging via USB
(Settings⇒Applications⇒Development⇒USB debugging). If
this is off, you’ll see nothing… and that was the step I’d forgotten about.
Things went much better after that: I think I might have had to restart the phone once when it was being insistent that there wasn’t a USB connection, but other than that, it’s all happy:
$ adb devices
List of devices attached
HT851N003417    device
$
One more thing: while looking around, I found an issue reported
against the Android project that states that
adb is broken against Linux
kernel versions 2.6.27 and later, with identical symptoms. I’m currently using
2.6.24, so I can’t test it, but it’s worth being aware of.
Posted at 14:26:38 BST on 17th June 2009.
Understanding IPoEoATM and RFC 1483 bridging
When I switched ISP to Be Unlimited earlier this year, I wanted to use my existing ADSL router rather than the ‘Be Box’ they supplied, so I followed the instructions Be provide for using a 3rd party modem. While mostly straightforward, they do include two rather obscure requirements:
- Connection type is IPoEoATM.
- ADSL bridging should be enabled (as per RFC 1483).
Googling for ‘IPoEoATM’ returned no useful results. In contrast, the Be forums were full of useful suggestions about ‘bridging’ and ‘bridge mode’, but it didn’t seem like anyone was really sure what Be were asking for. I managed to get my router to work anyway, but I don’t like mysteries, so I did some research. In summary:
- ‘IPoEoATM’, while not a common term, is one of the standard protocols supported by every DSL modem. It’s not the same as PPPoE or PPPoA (as there’s no username/password), and it’s not the same as IPoA (that’s something else). On my router (a Netgear DG834N), it’s selected by first telling the router that it doesn’t need to login, and then choosing ‘use static IP’ instead of ‘use IPoA’.
- In this context, ‘bridging’ can refer to two separate-but-related concepts. One is about turning your modem/router into a modem/bridge; the other is about the form of packets on the (DSL) wire, and Be is definitely asking for the latter. It turns out (per the DSL spec) that this is just another way of saying that they want you to use IPoEoATM rather than IPoA.
If you’re not interested in network geekery, you can stop reading now (if you haven’t already :-)), because I’m going to go into a little bit more detail than is probably healthy.
ADSL bridging should be enabled (as per RFC 1483)
Bridging first. For DSL routers, ‘bridge mode’ commonly refers to switching off the router part of your combined DSL-modem-and-router, leaving it as a DSL-modem-and-bridge. That is, converting it from a layer 3 router to a layer 2 switch, albeit one that’s got a single ‘DSL’ port in addition to the others.
For this to work, you need to assign externally routable addresses to your internal hosts and set each of them to use a specified (remote) IP address as their gateway, as your modem/switch will no longer be doing any kind of NAT. You probably also want to make sure you have some kind of stateful firewall between them and the outside world, since generally you’ll have been getting that ‘for free’ with NAT.
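On a Linux host behind such a bridge, the manual version of that setup would look something like this (the addresses, prefix length, and interface name are all made up for illustration):

$ sudo ip addr add 203.0.113.10/29 dev eth0
$ sudo ip route add default via 203.0.113.9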
In my case, since I have more internal IP addresses than I have external IPs
allocated to me, I really didn’t want to do this if I didn’t have to. Fun
fact: you can switch a DG834N into bridge mode by going to the hidden URL
/mode.htm; bridge mode is called “Modem (modem only)”.
However, that ‘bridge mode’ isn’t the same thing as RFC 1483 bridging at all.
RFC 1483 (and RFC 2684, which obsoletes it) describes two multiplexing methods for packets traveling over ATM (ATM being the typical network infrastructure employed on the other side of the DSLAM at the end of your phone line). One method, LLC encapsulation, can carry multiple protocols (IPv4, IPv6, ARP, IPX, Ethernet, etc) per ATM virtual circuit, while the other, VC multiplexing, uses a separate ATM VC for each protocol.
LLC encapsulation is more flexible, while VC multiplexing is more efficient. (In particular, since VC multiplexing relies on out-of-band setup to provide some of the information about the packets, the overhead in some — common — cases can be zero.) RFC 2684 discusses some of the trade-offs:
The decision as to whether to use LLC encapsulation or VC-multiplexing depends on implementation and system requirements. In general, LLC encapsulation tends to require fewer VCs in a multiprotocol environment. VC multiplexing tends to reduce fragmentation overhead (e.g., an IPV4 datagram containing a TCP control packet with neither IP nor TCP options exactly fits into a single cell).
My previous ISP used DSLAMs owned by BT, and BT (as far as I know) exclusively uses VC multiplexing. Be owns their own DSLAMs, and has chosen to use LLC encapsulation, which seems like an odd choice given that I assume they’re only routing one protocol (IPv4) at present. Perhaps they’re giving themselves space for more, who knows?
Anyway, each of LLC encapsulation and VC multiplexing can be used to send packets containing either routed or bridged protocols, and each of these four combinations uses a slightly different packet format.
Be is asking for LLC-bridged, which means they want you to send Ethernet frames rather than IPv4 packets. (Obviously, the Ethernet frames will contain IPv4 packets; I’m just talking about the outermost protocol here.)
How does this relate to whether you put your DSL router into ‘bridge mode’? A little. The two concepts are almost orthogonal: the choice of whether the DSL frame payloads sent by your modem include a host’s source address or the address of your router’s external interface is, in one sense, entirely separate from the choice of whether those payloads are Ethernet frames or IP packets.
There’s one obvious way they’re related in practice, though: if your modem is a layer 2 switch, it’s only looking at the Ethernet frame headers, not the IP packets, so — with the exception of the (rather unlikely) case in which you were able to configure a point-to-point connection between a specified internal host and the remote gateway — you would need to use Ethernet frames rather than IP packets so that your modem can switch incoming DSL payloads to the correct host.
Connection type is IPoEoATM
Onto the somewhat nebulous “IPoEoATM” term. The only official definition I found for this term comes from a technical report by the DSL Forum called “TR-101: Migration to Ethernet-Based DSL Aggregation”. It seems to primarily be about changing parts of an ISP’s internal infrastructure from ATM to Ethernet, but includes a diagram of the seven (!) standardised network stacks in use on the end-user side of an ADSL DSLAM (what was that quote again? “The good thing about standards…”).
Anyway, three of these seven stacks describe various encapsulations of Ethernet frames directly within DSL frames, which I’ve not seen supported anywhere yet, while the other four deal with more typical ATM-in-DSL frames. These four were originally specified in an earlier document, TR-043: Protocols at the U Interface, where they are named as: “IP/Eth (commonly called ‘1483’)”, PPPoE, PPPoA, and IPoA. However, TR-101 gives an alternate name for the first: IPoEoATM.
Since I’ve come this far, here’s how the four DSL/ATM protocol stacks mentioned in TR-101 stack up (hah):
- IPoEoATM: (DSL (ATM (AAL5 (RFC 2684 bridged (Ethernet (IP))))))
- PPPoE: (DSL (ATM (AAL5 (RFC 2684 bridged (Ethernet (PPPoE (PPP (IP))))))))
- IPoA: (DSL (ATM (AAL5 (RFC 2684 routed (IP)))))
- PPPoA: (DSL (ATM (AAL5 (RFC 2684 routed (PPP (IP))))))
(You can see how Be’s request for “ADSL [sic] bridging per RFC 1483” is already covered by their earlier request for IPoEoATM.)
Notes: I didn’t mention AAL5 above; it’s an encapsulation allowing variable-length payloads to be carried by one or more (fixed-size) ATM cells (i.e. it’s the ATM version of IP fragmentation). Also, PPPoA actually doesn’t use RFC 2684: it uses RFC 2364, but as far as I can see, the net effect is (bitwise and deliberately) identical to as if it were defined in terms of RFC 2684, with PPP treated as a routed protocol.
Finally, with all this encapsulation going on, you might be assuming (as I was) that a noticeable proportion of your DSL traffic is taken up with encapsulation overhead. It turns out that it’s not actually too bad (depending on IP payload size; see the appendices at the end of TR-043 for the gory details), but there’s a lot more complexity in this DSL thing than I previously realised!
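As a back-of-envelope illustration (my own arithmetic, not TR-043’s): a full-size 1500-byte IP packet sent LLC-bridged without FCS picks up a 14-byte Ethernet header, a 10-byte RFC 2684 LLC/SNAP header (including padding), and an 8-byte AAL5 trailer, and the whole lot is padded out to a whole number of 48-byte ATM cell payloads:

$ payload=$((1500 + 14 + 10 + 8))   # IP + Ethernet header + LLC/SNAP + AAL5 trailer
$ cells=$(( (payload + 47) / 48 ))  # round up to whole 48-byte cell payloads
$ wire=$(( cells * 53 ))            # each ATM cell is 53 bytes on the wire
$ echo "$cells cells, $wire bytes on the wire for 1500 bytes of IP"
32 cells, 1696 bytes on the wire for 1500 bytes of IP

That’s about 13% overhead for a full-size packet; tiny packets fare proportionally worse, since even one byte of IP payload costs a whole cell.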
Posted at 00:24:53 GMT on 14th March 2009.