Site redesign and a first glimpse of the project archive

If you were here before, you probably noticed that I redesigned this blog. This is not only because I became aware of the previous design's ugliness, but also preparation for the next big change: launching the "project archive".

There is a new link at the top, called "projects (beta)". This post is not intended to attract more visitors to that page, not yet. It's here to keep some of you from visiting, because it is not yet finished. I will ask a limited number of people to test the new features as they are implemented.

If you really wish, you may proceed into the project archive and play with the current version without an explicit invitation, but since you cannot know which features should already work and which are broken by design, you won't be able to do any useful testing without prior instruction.

But if you find any (new) errors in the blog (like missing or mis-aligned pictures, dead links, CSS glitches), please leave a comment!


The project archive is now functionally rather complete. Still, scaling and animation don't work very well together. Also, it is currently not usable with JavaScript turned off. It would be easy to add limited functionality without JS, but making all of it work would take several hours.

The biggest "drawback" is probably the lack of content. But since I can now add and modify projects from within the WordPress admin interface, I will add new content every now and then. Whenever there are bigger additions, I will blog about them.

XNX? (that's short for "XHTML's not XML?")

XHTML documents are (at least I always believed so) XML documents, and as such should be parsed with an XML parser. I don't know of any parser or data model which would even be able to distinguish

<div></div>

from

<div/>
So I would assume that both are valid markup, or maybe both would be invalid. But as far as I know, neither DTD nor XSD can allow mixed content on an element while forbidding it to be empty, so both must be well-formed and valid (validated) XML. Maybe there are rules for XHTML which go beyond pure XML validation, but to check those, there are special validators. And since application-level validation must take place on the data model, not on the serialized syntax, any such application-level rule would be defined on a layer of abstraction that cannot possibly see which syntax was used at the parsing level.
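This is easy to verify with any XML library; here's a quick sketch using Python's xml.etree.ElementTree (my own choice of tool for the demonstration, unrelated to the validators discussed below):

```python
import xml.etree.ElementTree as ET

# The two serializations that an XML parser should not be able
# to tell apart.
a = ET.fromstring("<div></div>")
b = ET.fromstring("<div/>")

# After parsing, both yield the same data model: same tag,
# no attributes, no children, no text content.
print(a.tag == b.tag)        # True
print(a.attrib == b.attrib)  # True
print(len(a) == len(b))      # True
print(a.text == b.text)      # True (both None)

# Re-serializing either one produces identical bytes; the original
# syntax is simply gone after parsing.
print(ET.tostring(a) == ET.tostring(b))  # True
```

Whatever distinction the tools below make, it cannot survive a round trip through an XML parser.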

So I am completely baffled to see that:

  • Both ways are accepted without any warnings by the W3C Validator.
  • The online version of HTML tidy also accepts both as valid input, no warnings.
  • But while "tidying" the markup, it changes one of them into invalid XML that is not even well-formed (!!!), by changing <div/> into <div> without a closing tag.
  • Firefox 3.6 completely refuses to display invalid XHTML, like the one generated by tidy.
  • But it accepts both <div></div> and <div/> without complaining.
  • It displays <div></div> correctly, but (that's the second shocking thing) completely messes up my page when I use <div/>.

So there are subtle differences between two identical things, which are silently misinterpreted by three software tools (validator, tidy, Firefox) that should either do the right thing or tell me that I'm doing something wrong.

There are folks on the internet who say that in HTML, <element/> is not valid syntax at all. Others say that in HTML an empty div, even <div></div>, is not allowed. See here and here.

But whatever they say, I'm writing XHTML, and it is completely against any logic that there's any difference between <div></div> and <div/>.

I don't know whether to blame the W3C, the Firefox implementors, or the tidy developers for this, but if XML and XHTML were good for anything, then it would be to eliminate those syntactical subtleties and save me from wasting hours chasing errors in my markup which, by definition, cannot be errors at all.
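In case anyone runs into the same Firefox behavior: one way to sidestep it when generating XHTML with an XML serializer is to tell the serializer not to use the self-closing form for empty elements. Again a purely illustrative sketch with Python's ElementTree (the short_empty_elements flag exists in Python 3.4+; whatever serializer you use will have its own equivalent, if any):

```python
import xml.etree.ElementTree as ET

el = ET.fromstring("<div/>")

# By default, an empty element is serialized in the self-closing form.
print(ET.tostring(el))  # b'<div />'

# short_empty_elements=False forces an explicit closing tag, avoiding
# the <div/> form that trips up HTML-minded consumers of the markup.
print(ET.tostring(el, short_empty_elements=False))  # b'<div></div>'
```

Of course, needing such a switch at all only underlines the point: the two outputs are the same XML.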

(PS: If you don't get the meaning of the title: it's an allusion to "GNU", which is an acronym for "GNU's Not Unix". That name is kind of misleading, because GNU is very Unix-like; actually, it's an attempt to be another Unix. And since the whole XML/XHTML thing is kind of misleading as well…)

Helping relief units in Haiti and Chile – from my desk

OSM helped Haiti

When an earthquake struck Haiti in January, many organizations and companies donated geospatial data, and the volunteers involved with OpenStreetMap did a great job putting it together, producing maps of Port-au-Prince. Not only were those maps accurate and up-to-date, but they also featured information about collapsed buildings, mobile hospitals and emergency accommodations. This was only possible because they could use satellite data that was both up-to-date and extremely detailed, and there was an unbelievable number of such sources.

It's notable that only days before the disaster, the OSM map of that city was a white spot with a few lines.

Port au Prince - before and after

I heard of it several weeks later, was very sorry to have missed that opportunity to help, and promised myself to lend a hand the next time something similar happened.

Now Chile needs help – and sources

Last weekend, there was a severe earthquake in Chile, and this time I noticed the incident a few hours after it had happened. Of course, there was already something set up in the OSM wiki, but map sources were sparse. Even now, we are still waiting for up-to-date mid- and high-resolution imagery, and are basically stuck with ten-year-old images of Santiago and very poor material for all the other regions.

Images from RapidEye

On Monday, the German company RapidEye AG published a 5000×5000-pixel JPEG of the city of Concepción and its surrounding area. They announced that they would soon publish further images, as well as a reference image from before the earthquake.

The resolution seems to be 6.5 m (high enough to see larger roads, but
too low to actually see any damage). And, sadly, the image suffers
from heavy JPEG compression artifacts. This seems to be tile 1822122,
which is (along with other tiles) available from the geodata kiosk in various formats
(GeoTIFF, etc.), but at the same resolution, priced at about 186 Euro
(approx. 250 US dollars. Maybe there is a way to get it for free, but
following the usual checkout process, that amount would be billed.)

Since I did not plan to pay that amount, I gave the JPEG a shot. After installing some plugins into JOSM and realigning the picture, my first step was to trace all the areas which looked like residential buildings. This is a nice trick to indicate where mapping still has to be done: if someone notices those typical gray areas on the map, they can expect residential roads as well. If there are no roads, that is a sign of missing data.

Look at this map (it’s interactive) and see for yourself:
[osm_map lat="-36.806" long="-73.072" zoom="12" width="460" height="380"]

After some hours, when I had covered all of those gray areas, my eyes (and possibly my brain and the simple neural nets behind my retina) got used to the blurry, low-contrast satellite image, and I was bold enough to actually trace some roads and upload them. But it's still very dissatisfying to be stuck with that kind of material, unable to help a great deal.

Areas in which I did more than just gray surfaces

Fast, faster, OSM

Maybe it's worth adding that it normally takes about a week until new map features are rendered into the map and can be seen online. I hoped that under the current circumstances it would be a bit faster, and was very surprised and pleased to see that my data was actually rendered and ready for consumption less than two minutes after I uploaded it. Wow.