Friday, May 15, 2009

A trip to Persia - GIS conference for municipalities in Mashhad




To start, a few pictures from an excursion to the tomb of Ferdosi Toosi near Mashhad:

Thursday, May 14, 2009

Ontology with Processes for GIS gives a Geographic Simulation Model

The introduction of process descriptions into GIS is a long-standing desire. A partial answer is given by models of continuous geographic processes based on partial differential equations. Other spatial processes can be modeled with cellular automata and multi-agent simulations.

If a GIS maintains data in relation to observation time (e.g., as snapshots) and includes the description of the processes in its ontology, then the datasets in the GIS become linked by the processes. The GIS can simulate the development and compare it with the observed changes in the real world, deriving values for (local) constants of the process models, etc.

Integrating time-related observations with process models in a GIS gives a substantially different information system than current GIS, which are well-organized, useful repositories for data. The new GIS is a spatial simulation system.

It may be interesting to speculate on the timeframe in which such a transformation could occur: consider that from a gleam in researchers' eyes to the current reality of GIS workhorses took some 20 years; consider further that the integration of snapshot time into GIS is past the research stage but not yet fully integrated in practice. One may conclude that the "GIS as a geographic simulation system" may take another 25 years.

Friday, May 8, 2009

Synchronizing multiple computers

For 20 years I had only one computer, a laptop and later a tablet. I like the slate tablet from Fujitsu Siemens because it lets me arrange screen and keyboard as I like. Unfortunately, it is heavy and the batteries run out quickly, thus I added an Asus eeePC 901 “for the road” - ending up with three machines: a tablet at home, a tablet at work and the eeePC anywhere else.

Synchronizing the three computers became crucial. I regularly missed a file I needed because it was stored on another machine, and I had to repeat every change I made to the interface on the other machines.

Synchronizing the files I work on was easy: Unison does it. I set up a directory on a machine on the web (computer number 4!) and sync each of the three machines I work on against this fourth one (see the profile sketch below). Unison works reliably and points out files which have a conflict and need manual (human intelligence) resolution. But still, every change to a setup in an application had to be redone on the other machines... maddening!
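
Concretely, such a hub profile could look like this (a sketch only; the profile name, the host name "webhost" and the paths are placeholders, not my actual setup):

# ~/.unison/hub.prf - sync this machine's home against computer number 4
root = /home/user_name
root = ssh://user_name@webhost//home/user_name
# the shared ignore rules live in a separate file (see the list at the end)
include hub.common

Running unison hub on each of the three machines then merges against the same hub directory.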

When I upgraded all three computers to Ubuntu 9.04 Jaunty, I set them up with exactly the same software and then decided to synchronize all my files – including the hidden files describing my configuration and user preferences – using Unison. The unix-style operating system separates the setup of the computer (which is evidently different for each piece of hardware) from all user-specific aspects, which are stored under /home/user_name. This works!

As an added benefit, the files I have on computer number 4 are automatically backed up every night (with rdiff-backup, keeping all old copies). This is sufficient for retrieving previous versions of files; a backup of the installation is not needed, because it is faster to re-install everything, which takes less than 2 hours, including installing the additional software and the one or two edits necessary.
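
For example, on computer number 4 (a sketch; the backup destination and the file name are placeholders):

# crontab entry: back up the home directory every night at 02:30;
# rdiff-backup keeps all old copies as reverse diffs
30 2 * * * rdiff-backup /home/user_name /var/backups/home
# retrieving a file as it was three days ago:
rdiff-backup -r 3D /var/backups/home/notes.txt restored_notes.txt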

To reduce traffic on the web, I excluded all “cache” type files and tried to find the files which are machine-specific and must remain different on the three machines. The file with the list of paths and filenames to ignore when synchronizing is included at the end, in the hope that I may get some hints about what I should add, or rather what I must keep synchronizing to avoid disasters.

I guess others may have similar needs - having all your files and settings available all the time is what Google and Microsoft try to sell. To make it a reality under my full control, software designers should think of:
1. keeping all cached data in files or directories called 'cache', and not relying blindly on the cache persisting,
2. keeping configuration in small files, easily shared,
3. allowing local configurations (and other data needed per machine) in files or directories called 'local' (or with names including the hostname).
It should be possible to have a live system on a stick, which you can plug into any reasonable computer, start up, sync and have most of your files locally to work with.

For things which must be different on different machines, I use scripts with a case statement, as shown here:

#!/bin/sh
# run the Unison profile appropriate for the host this script executes on

# Atlanta and Bern sync against the hub with the same merge profile
p904Atlanta () {
    unison -sshargs "-o ProtocolKeepAlives=30" gi41_mergeAtlantaBern
    exit 0
}

p904Bern () {
    unison -sshargs "-o ProtocolKeepAlives=30" gi41_mergeAtlantaBern
    exit 0
}

# Hamburg uses its own profile
p904Hamburg () {
    unison -sshargs "-o ProtocolKeepAlives=30" gi41_mergeHamburg
    exit 0
}

THISHOST=$(hostname)

# pick the function matching this machine's hostname
case "$THISHOST" in
    bernJ) p904Bern;;
    atlantaJ) p904Atlanta;;
    hamburgI) p904Hamburg;;
    *) echo "unknown host $THISHOST";;
esac

The list of files excluded for Unison:
ignore = Path .unison/*
ignorenot = Path .unison/*.prf
ignorenot = Path .unison/*.common

ignorenot = Path .nx
ignore = Path .nx/*
ignorenot = Path .nx/config

ignorenot = Path .svngzPrefs.txt

ignore = Path .beagle
ignore = Path .cache
ignore = Path .cabal
ignore = Path .eclipse
ignore = Path .evolution
ignore = Path .mozilla-thunderbird
ignore = Path .popcache
ignore = Path .wine
ignore = Path .nautilus
ignore = Path .thumbnails

ignore = Path .xsession-errors
ignore = Path .pulse
ignore = Path .ICEauthority
ignore = Path .Xauthority

ignore = Path .dbus
ignore = Path .config/transmission
ignore = Path .opensync*
ignore = Path .ssh/id_rsa
ignore = Path .gnome2/gnome-power-manager/profile*
ignore = Path .gconfd/saved*
ignore = Path .recently-used.xbel

ignore = Path {unison.log}
ignore = Path {local_*}
ignore = Path Photos
ignore = Path {workspace}
ignore = Path {experiments}

#avoid temp files
ignore = Name temp.*
ignore = Name .*~
ignore = Name {*.tmp}
ignore = Name theResult
ignore = Name *cache*
ignore = Name *Cache*
ignore = Name cache*
ignore = Name insertedRepo_*

ignore = Name trash
ignore = Name .trash*
ignore = Name Trash
ignore = Name *Trash*
ignore = Name *trash*

ignore = Name urlclassifier*

perms = 0o1777

Laws of nature and laws of human nature

The current crisis is a welcome opportunity to rethink politics, the economy and “all that”. In this and related short blog posts to come, I will write down my current understanding of “how the world works”. I start with what I think is fixed and cannot be changed by humans within the time frame of human history:
  1. The laws of nature: physics, chemistry, etc.

    Lawrence Lessig (Code, 2000) has pointed out that the laws of nature apply to everybody and nobody can escape them. Water flows downhill for everybody!

  2. The fundamental aspects of human nature.
    In principle, human nature is changeable, but within the timespan of history it must be accepted as constant. The human desire for love, fear of death and pain, hope and all, is constant. Ignoring or maintaining illusions about human nature is as damaging as ignoring the laws of nature.

Contrasting with the constant nature of these laws, our knowledge and understanding of the laws of nature and the laws of human nature changes greatly over time. Economically relevant is the current knowledge of these laws and how they can be used in production processes.

Changes in the knowledge of the laws of nature and the laws of human nature change technology in a wide sense and thus the economy; the values of things owned change, some becoming more valuable, others less. Examples: the oil buried under the sand of Arabia became valuable with the advent of individual motorization; the quarries of limestone suitable for lithography lost value when other, less demanding printing methods were found.

Sunday, May 3, 2009

Install Cabal with Ubuntu 9.04 Jaunty and GHC 6.8.2 or 6.10.1

Cabal is a marvelous help for installing the Haskell packages found on Hackage. There is a very long and rapidly growing list of Haskell packages collected on Hackage to solve nearly any programming task: reading XML files, connecting to databases, building graphical user interfaces (wx) or running a web server – all this and much more is available. The only problem is to find the right versions that work together, such that the ghc package manager is satisfied and the result runs.

The regular method to install packages was (a concrete example follows this list):
  1. find the package on Hackage http://hackage.haskell.org/packages/archive/pkg-list.html#cat:user-interface

  2. download the package as a .tar.gz file (link at the bottom of the page)

  3. unpack the file in a directory of your choice

  4. change into this directory

  5. runghc or runhaskell Setup.hs configure (or Setup.lhs – whatever is in the package)

  6. runghc or runhaskell Setup.hs build

  7. sudo runghc or sudo runhaskell Setup.hs install
    With this command, the compiled content is moved to /usr/local/lib and registered with the package manager of ghc (in /usr/lib/ghc-6.8.2)
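
As a concrete example, for a hypothetical package foo-1.0 (the name is a placeholder) the whole sequence is:

tar -xzf foo-1.0.tar.gz            # step 3: unpack
cd foo-1.0                         # step 4: change into the directory
runhaskell Setup.hs configure      # step 5
runhaskell Setup.hs build          # step 6
sudo runhaskell Setup.hs install   # step 7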

The procedure is simple, but satisfying the dependencies between packages is time consuming. A package may need other packages that must be installed beforehand, which is discovered in step 5 (configure). The required package must then be installed first; it is usually easy to find on the Hackage page of the dependent package, but may require yet another package...

Cabal automates this. The only problem was that I could not find a ready-made cabal-install program and had to build it myself. I had a new and clean installation of GHC 6.8.2 on Ubuntu 9.04 Jaunty (and the same should apply to Ubuntu 8.04 Hardy and 8.10 Intrepid). I loaded a bunch of packages already available in the Ubuntu repository, of which libghc6-network, libghc6-parsec and libghc6-zlib (with the dependent zlib...) are likely the only relevant ones here.
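
On Jaunty that amounts to something like the following (a sketch; the exact package names in the repository may differ, e.g. the libraries may come as separate -dev packages):

sudo apt-get install libghc6-network-dev libghc6-parsec-dev libghc6-zlib-dev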

The blog http://www.kuliniewicz.org/blog/archives/2009/03/24/installing-ghc-610-on-ubuntu-intrepid/comment-page-1/ gave me a start, but I ran into the problems with cabal-install-0.6.2 described there, probably because I had ghc 6.8.2, and I had difficulties building network, which I could not download. I gave up on ghc 6.10, which was not yet available for Ubuntu.

I first tried the traditional method to install cabal-install, but later found a simpler one:

  1. get cabal-install-0.6.0 from Hackage
  2. run sudo sh bootstrap.sh in the unpacked directory

Required are the packages mtl, parsec and network, which I had. The result is an executable ~/.cabal/bin/cabal, which I copied to /usr/bin (sudo cp ~/.cabal/bin/cabal /usr/bin). Next, it may be necessary to run cabal update to download the list of packages from Hackage.
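
Put together, the bootstrap looks like this (a sketch; the tarball is the cabal-install-0.6.0 download from Hackage mentioned above):

# unpack the cabal-install tarball and bootstrap it
tar -xzf cabal-install-0.6.0.tar.gz
cd cabal-install-0.6.0
sudo sh bootstrap.sh
# make the resulting binary generally available and fetch the package list
sudo cp ~/.cabal/bin/cabal /usr/bin
cabal update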

Now life should be wonderful! (I just installed WxGeneric with a simple sudo cabal install WxGeneric...)

Alternatively, and on a different computer, the traditional approach with manual download and configure/build/install worked for the combination:

  1. HTTP 3001.0.4 (produces some warnings)
  2. Cabal 1.2.3
  3. cabal-install 0.4.0

I could not get higher versions to work – but the result is satisfying.

Later note for installation with GHC 6.10.1:

I installed manually only parsec-2.1.0.1 and network-2.1.1.2 (with runhaskell Setup.hs configure/build/install), and then the ./bootstrap script in cabal-install-0.6.2 ran.