Monday, August 31, 2009

Progress in Social Sciences?

I was interested in politics and the economy when I was 20; I read “Limits to Growth” and economic texts on development by Binswanger and Myrdal. Recently I asked myself what has changed in our understanding of politics and economics since then. The question is not what is different in politics and the economy today compared to the 1970s and 80s, but to identify the methodological changes, the changes in our thinking between then and now. Have the tools for analysis advanced? Is there an advance in the way we understand society, or do we only have more data and different situations?
The question is about the models underlying our analysis. Economics is primarily based on dynamic models and is typically interested in equilibrium states; markets tend to an equilibrium between supply and demand – at least in the theory as presented by Samuelson. These models typically had two actors (or two groups of actors): the suppliers and the consumers, the industrialized and the less industrialized world, etc.
In recent years I have increasingly seen models with four (and sometimes more) actors. The Club of Rome and SimCity made Forrester's idea of simulation models popular. Models with four actors – i.e. multi-agent models – were used to analyze international monetary issues; I have used them to understand why dollarization in South America in the 1990s did not work (despite the standard static theory and the IMF predicting success), and I found a wonderful three-agent model for corruption.
These multi-agent models seem to me very helpful for explaining the situations I could not understand in 1990 – e.g. the politics and economy of Africa. The simple “Europe exploited its colonies” model is not correct, as has been demonstrated since decolonization. A model with four actors – the elite in the industrialized world, the elite in the third-world country, and the populations in the industrialized and the third-world country – shows the common interest of the economic elites and explains the investments and debts, together with the political pressures observed today. Similar models can be used to analyze the situation in Afghanistan, Pakistan, or Iraq. John Perkins wrote “Confessions of an Economic Hit Man”, a very depressing read about U.S. politics in the 1980s, confirming the model.


Friday, August 21, 2009

Fujitsu Lifebook P1630 Network

On this blog page I will collect hints for others with a P1630:

The network connection on the P1630 was sort of flaky. It improved after installing linux-backports-modules-jaunty.

sudo apt-get install linux-backports-modules-jaunty
restart!

Now I have a decent signal level and it connects quickly.

Thursday, August 20, 2009

Fujitsu Lifebook P1630 Touchscreen

I have a new ultraportable (1 kg) Lifebook P1630 with a touchscreen. I hope to use it for presentations in the same manner I currently use the Fujitsu Stylistic 5010 tablet.
The P1630 comes with Vista installed – but this is unusably slow even on a Core 2 Duo processor, and I do not tolerate the waiting that MS software updates force on me. I want to run Linux, meaning Ubuntu 9.04 Jaunty!
Installation was a breeze (much faster than even an update of MS software) and everything worked out of the box, except for the touchscreen.

With help from the web, I got it working. This is to report what I did, to help others with a P1630 (and probably a P1620). The P1630 (and 1620) connect the touchscreen via USB, not a serial port – thus the advice for the P1510 etc. does not work (e.g. this script). The method proposed by Sam Engstrom propagated here does not work either, because it connects the touchscreen using setserial and /dev/ttyS0. It seems that this could be solved by connecting the touchscreen to an eventN device (with a hal policy?) and then using wacdump /dev/input/eventN (N a number 1..9). I did not explore this further.
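For anyone who wants to explore that route, here is a minimal sketch of how one might locate the event device (untested here; the device number is only an assumption):

# list the input devices the kernel knows about; the touchscreen should appear by name
cat /proc/bus/input/devices
ls /dev/input/event*
# then watch raw events from the candidate device (event5 is only an example)
sudo wacdump /dev/input/event5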

I got the touchscreen of the P1630 working with these tools. This worked immediately, with the BIOS set to "tablet" (not "touch screen"), except for some confusion with the calibration. The calibration routine acted strangely on my system: after calibration, any touch on the screen would open the trash folder (does anybody have an explanation? I had observed this even before I installed any touchscreen software). The calibration routine also left the max_x and max_y values at their initial value of -1. I fixed this in the code by adding two additional tests, so that it reads:

if (calibrate) {
    calib_minx = ..
    calib_miny = ..
    calib_maxx = (calib_maxx == -1) ? x : ((x > calib_maxx) ? x : calib_maxx);
    calib_maxy = (calib_maxy == -1) ? y : ((y > calib_maxy) ? y : calib_maxy);
}

make! sudo make install! and it works.

The sampling of the stylus is unfortunately not very rapid and handwriting looks wiggly. Does anybody have a hint on how this could be improved?

Addition: I learned that there are newer (corrected) versions available; the change has the same effect, but I have not tried them yet.

Friday, May 15, 2009

A trip to Persia - GIS conference for municipalities in Mashhad

To start, a few pictures from an excursion to the tomb of Ferdosi Toosi near Mashhad:

Thursday, May 14, 2009

Ontology with Processes for GIS gives a Geographic Simulation Model

The introduction of process descriptions into GIS is a long-standing desire. A partial answer is given by models of continuous geographic processes with partial differential equations. Other spatial processes can be modeled with cellular automata and multi-agent simulations.

If a GIS maintains data with relation to observation time (e.g., snapshots) and includes in its ontology the descriptions of the processes, then the datasets in the GIS become linked by the processes. The GIS can simulate the development and compare it with the observed changes in the real world, deriving values for (local) constants of the process models, etc.

Integrating time-related observations with process models in a GIS gives a substantially different information system from current GIS, which are well-organized, useful repositories for data. The new GIS is a spatial simulation system.

It may be interesting to speculate on the timeframe in which such a transformation could occur; consider that going from a gleam in researchers' eyes to the current reality of GIS workhorses took some 20 years; consider further that the integration of snapshot time into GIS is past the research stage but not yet fully integrated into practice. One may conclude that "GIS as a geographic simulation system" may take another 25 years.

Friday, May 8, 2009

Synchronizing multiple computers

For 20 years I had only one computer, a laptop and later a tablet. I like the slate tablet from Fujitsu Siemens, because it lets me arrange screen and keyboard as I like. Unfortunately, it is heavy and the batteries run out quickly, so I added an Asus eeePC 901 “for the road” – ending up with three machines: a tablet at home, a tablet at work and the eeePC anywhere else.

Synchronizing the three computers became crucial. I regularly missed a file I needed because it was stored on another machine, and I had to repeat on each machine any change I made to the interface.

Synchronizing the files I work on was easy: Unison does it. I set up a directory on a machine on the web (computer number 4!) and sync each of the three machines I work on against that fourth one. Unison works reliably and points out files which have a conflict and need manual (human intelligence) resolution. But still, every change to a setup in an application had to be redone on the other machines... maddening!
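For illustration, a minimal sketch of a Unison profile for such a setup – the host name hub, the user name and the paths are assumptions, not my actual configuration – stored as ~/.unison/work.prf:

# sync the local work directory against the copy on the hub machine (computer number 4)
root = /home/myuser/work
root = ssh://myuser@hub//home/myuser/work
# accept Unison's default actions for non-conflicting changes, keep modification times
auto = true
times = true

Running unison work on each of the three machines then merges its files against the copy on the hub.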

When I upgraded all three computers to Ubuntu 9.04 Jaunty, I set them up with exactly the same software and then decided to synchronize all my files – including the hidden files describing my configuration and user preferences – with Unison. The unix-style operating system separates the setup of the computer (which evidently differs for each piece of hardware) from all user-specific aspects, which are stored under /home/user_name. This works!

As an added benefit, the files I have on computer number 4 are automatically backed up every night (with rdiff-backup, keeping all old copies). This is sufficient for retrieving previous copies of files; a backup of the installation is not needed, because it is faster to re-install everything, which takes less than two hours, including installing additional software and making the one or two edits necessary.
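As a sketch of what such a nightly backup can look like (the paths and the time are assumptions, not my exact crontab):

# crontab entry on computer number 4: every night at 02:30, back up the home
# directory into an increment store that keeps all old versions
30 2 * * * rdiff-backup /home/myuser /backup/myuser

# a previous version of a file can later be retrieved, e.g. as it was 10 days ago:
rdiff-backup -r 10D /backup/myuser/some/file /tmp/file-as-of-10-days-ago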

To reduce traffic on the web, I excluded all “cache”-type files and tried to identify the files which are machine-specific and must remain different on the three machines. The file with the list of paths and filenames to ignore when synchronizing is included at the end, in the hope that I may get some hints on what I should add – or rather, what I must keep synchronized to avoid disasters.

I guess others may have similar needs – having all your files and settings available all the time is what Google and Microsoft try to sell. To make it a reality under my full control, software designers should think of:
1. putting all cached data in files or directories called 'cache' and not relying blindly on it being present,
2. keeping configuration in small files that are easily shared,
3. allowing local configurations (and other data needed per machine) in files or directories called 'local' (or with names including the hostname).
It should be possible to have a live system on a stick, which you can plug into any reasonable computer, start up, sync and have most of your files locally to work with.

For things which must be different on different machines, I use scripts with a case statement, as shown here:

#!/bin/sh
# to execute depending on the host

p904Atlanta () {
unison -sshargs "-o ProtocolKeepAlives=30" gi41_mergeAtlantaBern
exit 0
}

p904Bern () {
unison -sshargs "-o ProtocolKeepAlives=30" gi41_mergeAtlantaBern
exit 0
}

p904Hamburg () {
unison -sshargs "-o ProtocolKeepAlives=30" gi41_mergeHamburg
exit 0
}

THISHOST=$(hostname)

case $THISHOST in
bernJ) p904Atlanta;;
atlantaJ) p904Bern;;
hamburgI) p904Hamburg;;
*) echo unknown host $THISHOST;;
esac

The list of files excluded for Unison:
ignore = Path .unison/*
ignorenot = Path .unison/*.prf
ignorenot = Path .unison/*.common

ignorenot = Path .nx
ignore = Path .nx/*
ignorenot = Path .nx/config

ignorenot = Path .svngzPrefs.txt

ignore = Path .beagle
ignore = Path .cache
ignore = Path .cabal
ignore = Path .eclipse
ignore = Path .evolution
ignore = Path .mozilla-thunderbird
ignore = Path .popcache
ignore = Path .wine
ignore = Path .nautilus
ignore = Path .thumbnails

ignore = Path .xsession-errors
ignore = Path .pulse
ignore = Path .ICEauthority
ignore = Path .Xauthority

ignore = Path .dbus
ignore = Path .config/transmission
ignore = Path .opensync*
ignore = Path .ssh/id_rsa
ignore = Path .gnome2/gnome-power-manager/profile*
ignore = Path .gconfd/saved*
ignore = Path .recently-used.xbel

ignore = Path {unison.log}
ignore = Path {local_*}
ignore = Path Photos
ignore = Path {workspace}
ignore = Path {experiments}

#avoid temp files
ignore = Name temp.*
ignore = Name .*~
ignore = Name {*.tmp}
ignore = Name theResult
ignore = Name *cache*
ignore = Name *Cache*
ignore = Name cache*
ignore = Name insertedRepo_*

ignore = Name trash
ignore = Name .trash*
ignore = Name Trash
ignore = Name *Trash*
ignore = Name *trash*

ignore = Name urlclassifier*

perms=0o1777

Laws of nature and laws of human nature

The current crisis is a welcome opportunity to rethink politics, economy and “all that”. In this and related short blogs to come, I will write down my current understanding of “how the world works”. I start with what I think is fixed and cannot be changed by humans within the time frame of human history:
  1. The laws of nature; physics, chemistry etc.

    Lawrence Lessig (Code, 2000) has pointed out that the laws of nature apply to everybody and nobody can escape them. Water flows downhill for everybody!

  2. The fundamental aspects of human nature.
    In principle, human nature is changeable, but within the timespan of history it must be accepted as constant. The human desire for love, fear of death and pain, hope and all the rest, is a constant. Ignoring or maintaining illusions about human nature is as damaging as ignoring the laws of nature.

In contrast to the constant nature of these laws, our knowledge and understanding of the laws of nature and the laws of human nature changes greatly over time. What is economically relevant is the current knowledge of these laws and how they can be used in production processes.

Changes in the knowledge of the laws of nature and the laws of human nature change technology in a wide sense and thus the economy; the values of things owned change, some becoming more valuable, some less. Examples: the oil buried under the sand of Arabia became valuable with the advent of individual motorization; the quarries of limestone suitable for lithography lost value when other, less demanding printing methods were found.

Sunday, May 3, 2009

Install Cabal with Ubuntu 9.04 Jaunty and GHC 6.8.2 or 6.10.1

Cabal is a marvelous help for installing the Haskell packages found on Hackage. There is a very long and rapidly growing list of Haskell packages collected on Hackage to solve nearly any programming task: reading XML files, connecting to databases, building graphical user interfaces (wx) or running a web server – all this and much more is available. The only problem is to find the right versions that work together, such that the GHC package manager is satisfied and the result runs.

The regular method to install a package was (spelled out as commands after the list):
  1. find the package on Hackage http://hackage.haskell.org/packages/archive/pkg-list.html#cat:user-interface

  2. download the package as a .tar.gz file (link at the bottom of the page)

  3. unpack the file in a directory of your choice

  4. change into this directory

  5. runghc or runhaskell Setup.hs configure (or Setup.lhs – whatever is in the package)

  6. runghc or runhaskell Setup.hs build

  7. sudo runghc or sudo runhaskell Setup.hs install
    With this command, the compiled content is moved to /usr/local/lib and registered with the package manager of ghc (in /usr/lib/ghc-6.8.2)
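Spelled out as commands, the sequence looks roughly like this (foo-1.0 is only a placeholder for the real package name):

# after downloading foo-1.0.tar.gz from its Hackage page:
tar xzf foo-1.0.tar.gz
cd foo-1.0
runghc Setup.hs configure
runghc Setup.hs build
sudo runghc Setup.hs install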

The procedure is simple, but satisfying the dependencies between packages is time consuming. A package may need other packages that must be installed beforehand, which is discovered in step 5 (configure). The required package must then be installed first; it is usually easy to find on the Hackage page of the dependent package, but it may require yet another package...

Cabal automates this. The only problem was that I could not find a ready-made cabal-install program and had to construct it. I had a new and clean installation of GHC 6.8.2 on Ubuntu 9.04 Jaunty (and the same should apply to Ubuntu 8.04 Hardy and 8.10 Intrepid). I had loaded a bunch of packages already available in the Ubuntu repository, of which libghc6-network, libghc6-parsec and libghc6-zlib (with the dependent zlib...) are likely the only relevant ones here.

The blog http://www.kuliniewicz.org/blog/archives/2009/03/24/installing-ghc-610-on-ubuntu-intrepid/comment-page-1/ gave me a start, but I ran into the problems with cabal-install-0.6.2 described there, probably because I had GHC 6.8.2, and I had difficulties building network, which I could not download. I gave up on GHC 6.10, which is not yet available for Ubuntu.

I first tried the traditional method to install cabal-install, but later found a simpler method:

  1. get cabal-install-0.6.0 from Hackage and unpack it,
  2. run sudo sh bootstrap.sh

Required are the packages mtl, parsec and network, which I had. The result is an executable ~/.cabal/bin/cabal, which I copied to /usr/bin (sudo cp ~/.cabal/bin/cabal /usr/bin). Next, it may be necessary to run cabal update to download the list of packages from Hackage.
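Put together, the bootstrap route looks roughly like this (the directory name depends on the version downloaded):

# unpack cabal-install-0.6.0 from Hackage, then:
cd cabal-install-0.6.0
sudo sh bootstrap.sh
# make the binary generally available and fetch the current package list
sudo cp ~/.cabal/bin/cabal /usr/bin
cabal update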

Now life should be wonderful! (I just installed WxGeneric with a simple sudo cabal install WxGeneric...)

Alternatively, on a different computer, the traditional approach with manual download and configure/build/install worked for the combination

  1. HTTP 3001.0.4 (produces some warnings)
  2. Cabal 1.2.3
  3. cabal-install 0.4.0

I could not get higher versions working – but the result is satisfactory.

Later note for installation with 6.10.1:

I manually installed only parsec-2.1.0.1 and network-2.1.1.2 (with runhaskell Setup.hs configure/build/install), and then the bootstrap script in cabal-install-0.6.2 ran.
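As a command sketch of that route (directory names as in the unpacked archives):

cd parsec-2.1.0.1
runhaskell Setup.hs configure && runhaskell Setup.hs build && sudo runhaskell Setup.hs install
cd ../network-2.1.1.2
runhaskell Setup.hs configure && runhaskell Setup.hs build && sudo runhaskell Setup.hs install
cd ../cabal-install-0.6.2
sudo sh bootstrap.sh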

Sunday, April 12, 2009

Where is the money gone?

We have an economic crisis. Nobody has money; many starve or have lost their homes. But where did the money go? On the blog http://scienceblogs.com/goodmath/2009/03/bad_bailouts.php#more the question was raised and some partial answers were given. I will attempt an explanation here:

There is some confusion about terminology: by money I mean the abstract rights which are expressed as amounts of a currency (dollars, euros). By value I mean real things which can be used and produce benefits (e.g. my home, my car, a company with its tangible and intangible assets).


The real confusion starts when newspapers report 'today x billions were lost on the stock exchange' (or, even worse, 'x billions were destroyed'). Who destroyed the money? Where did it go when it was lost? The answer is simply: the illusion of value went away. Here is why:

In the morning I hold 1000 shares of, say, Ford Motor Company; this means I own a (small) piece of the company with all its assets. They were traded at $16 the evening before, and my banker thinks my net value is $16,000. In the evening I still have the same 1000 shares of the same company, but they now trade at only $12. What have I lost? Obviously no value was lost, but in the view of my banker, my net value is reduced by $4000. When the stock price came down, the stock did not represent less real value in terms of the assets behind it. What came down was the market value of the stock.

Before I look at why people really lost money, let me discuss what happens when stock prices are rising:

I recently bought 1000 shares of a company owning large buildings in Vienna for 0.70 euros each. These shares now trade at 1.30. Have I now earned 600 euros? Not yet. If I sell the stock, then I have earned 600, which I can use to invite my friends to a sumptuous dinner. Likewise, I would realize my loss on the Ford stock only if I sold it. A gain or a loss in money terms is associated with an action: converting money to another asset and then back. The same is true for buying a home – there is no gain or loss as long as I hold on to it. The transactions produce the gain or loss, or better, the increase or decrease in my net value as seen by my banker.

So – where is the money gone?

If I buy stock or a home but do not pay for all of it with my own money and instead ask the bank for a loan, guaranteed by the stock or a mortgage on the home, I can buy more than I could pay cash for. If the stock later trades higher, I sell it, pay back the loan and pocket the difference. The percentage I can earn this way is higher than if I buy with cash. Consider the above example: instead of using my 700 euros to pay for 1000 shares, I buy 4000 and ask the bank to loan me 2100 euros, which together with my 700 pay for the 2800 total (I disregard fees etc.). If I sell at 1.30 I get 5200, pay back the 2100 loan and keep 3100; my net value has increased by 2400 in a few weeks, instead of by 600 without the loan. My gain, expressed as a percentage of my own capital, is several hundred percent. This is aptly called 'leverage'.

Where is the catch?

If the stock now trades lower, say at 60 cents, then the bank fears for its loan and asks me to reduce the loan by sending cash – and if I do not send it promptly, they will sell the stock at whatever price they can get, say 55 cents. This gives 2200, of which the bank takes 2100 to pay back the loan, and 100 remains for me – that is what is left of the 700 euros I had initially! Now I have really lost 600 euros (compared to not having bought and sold the stock at all).

If many people buy stock or homes paid for partly by loans, and market prices slide down so that banks see their loans no longer covered and force the owners to sell, then prices go down further, more owners are forced by their banks to sell, and market prices are pushed down further still, as happened last year.

Reading Galbraith's [The Great Crash of 1929] and Krugman's [The Return of Depression Economics and the Crisis of 2008] accounts of economic crises indicates that the economic downturn – an event that happens regularly every 7 to 15 years – started in every case before the crisis broke out (e.g. in fall 2007), but the crisis itself is the product of leveraged buying, which multiplies the effects of normal economic ups and downs. We had many years of ups and some people benefited from leveraged buying; now we have had the downturn and many got caught.

Still, where is the money gone?

When the market values of homes, stocks, companies etc. all increased regularly, leveraged buying of these valuable things was a good deal. People were able to sell and realize the gains in money (or simply saw their net value in money increase). They used the money appearing in their bank accounts to buy things of real value – cars, dinners, massages, ships, companies – keeping the real economy going.

The non-intuitive aspect of this account is that the gains were realized first and the losses occurred later. The bubble economy gave credit to those willing to run risks. When everybody else jumped on the bandwagon, it started sliding backwards. The clever ones had already realized their gains and moved them into other, more stable forms of assets; the latecomers lost their investment.

Now: who benefited from the bubble? Certainly those who cashed the big bonuses and realized the big gains, but also all those who bought and sold homes and used the money for this and that – meaning, to a small degree, everybody (including state employees, benefiting from increased taxes used to pay salaries... me included).

The figures reported in the press describe Ponzi schemes built on top of the simple examples used above: one can leverage the leveraged investment in stocks, do leveraged buy-outs and translate them into stocks; one can produce securities from risky mortgages – and then create leveraged investments in these securities. For example, the loss in Madoff's investment vehicle – a classic Ponzi scheme – is reported at around 50 billion; the actual investment of real (?) money from outside is closer to 15–20 billion – the rest are gains that appeared in the books but were never realized.

The billions necessary to keep the 'system relevant' banks afloat – a classic scheme of privatizing gains and socializing losses, as Stiglitz has pointed out – are inflated by the leverage schemes used to create the investment vehicles now appearing on their books as toxic assets. I do not see enough analysis of the causes of the crisis to believe that the bailout uses the best (least public cost) medication to overcome the current illness of the financial system.

Friday, April 3, 2009

What can go wrong when using a web page for a service?


More and more transactions are moved from personal phone contacts or paper forms to web pages. Making me use a web page is clearly an advantage for the provider of the service, because the data need not be entered manually into their system by their personnel, and consistency checks are forced on me when I enter the data.
The disadvantage is on my side: the services are not easy to use and foist the provider's view of the world on me. Here is a reasoned list of issues:

Finding the web page
Remembering the name of a web page is only feasible for the few services I use very often; for the others I have either a bookmark in my browser or use Google search. Relying on browser bookmarks does not work when I am not on the computer where I created the bookmark.
Requirement: every web service must be easy to find using Google search. Links to the service should be placed on the appropriate home pages.

User name and password
Every service provider happily sends me a new user name and password, which I am supposed to learn by heart and not write on a post-it note pasted to the screen. I admit that remembering more than my name and perhaps two or three passwords and codes is difficult for me! I rely on my browser to remember user names and passwords – which does not work reliably if one uses more than one computer.
Requirement: I can select the same user name everywhere and set the password to one I remember; I am willing to select one with six letters and non-letters – but no other requirements, please! The various requirements for length and for including numbers or other characters force new passwords on me that I promptly forget.

Delegation
We work in teams. I do not do everything a web service expects of me myself; often I can delegate some of the tasks to others. It must be possible for others to work on my behalf without my giving them my password. At least full delegation to another user with another password is necessary; for web pages which contain many actions, a divided delegation would be nice – for example, allowing others to enter data but not to commit them.
Requirement: delegation to another user/password must be possible.

Screen size and browser type
I have computers with browsers – any request from a web service to change my computer or browser is inappropriate. It is the service's responsibility to adapt to my screen size, which is usually the most common size, sometimes a netbook, sometimes a portrait screen. I am not willing to install a new browser just to book an airline seat!
Requirement: construct web services to work on different screen sizes and with all regular browsers that support web standards.

Terminology
Every web page communicates in its own terminology, and often this is quite strange jargon, unintelligible or even misleading. Organizations and parts of organizations quickly create their own languages, completely obscure to outsiders. In human communication the speaker and the listener can adapt and correct misunderstandings – on a web page this is not possible.
Recommendation: check the clarity of the language with several outsiders; they must understand it without any help! Add help texts to every page explaining how its words are used in this context.

Input
Web pages differ in how they expect inputs; problems are caused by numbers (decimal point or comma?), times and dates, but phone numbers can also cause trouble: are blanks allowed to group digits?
Requirement: any input a human can interpret should be acceptable to the web page. If a specific format is expected, then the web page must show an example and not let the user guess, repeatedly, with increasing frustration.

Error tolerance
Errare humanum est – people make errors. A web application which assumes perfect inputs is not usable.
Requirement: The application must be tolerant of errors and give multiple ways to correct them. It must be possible to go back and change a single item without entering everything again.

Interruptions
Rare is the moment when I can work for a few minutes without interruption: the phone rings... Some web applications are set up to terminate a transaction if it is not completed quickly, and after the interruption one has to start from scratch. If I come back after an interruption, or after searching for some detail to enter, it should be clear what is done and what still needs to be done to progress.
Requirement: Save current state and allow interruptions.

Adherence to web interaction standards
Over the past years a set of conventions has been established which nearly all applications follow: I can go back one page; pages open in a new browser window or in a tab, depending on my preferences; requests for name, email etc. are marked so that the browser can fill them in; and so on. Many web services I have to use do not follow these standard ways of working and expect me to learn new tricks.
Requirement: adhere to the standards!

Post scriptum
I understand that programming a web page to follow these requirements may be more difficult; but why should (small) savings on the provider side justify large losses of time on my side?

Thursday, April 2, 2009

Why do I feel exhausted? - The change in the professional environment caused by technological change


I feel exhausted at the workplace, and it is not the economic crisis that causes it. Universities in Europe are, at least for now, not greatly affected. Others I know feel the same – the more so the more they use computers in their work environment.

We all use the web for a large part of our daily transactions: banking, reserving airline tickets, dealing with purchase orders, billing and accounting, travel expenses etc. It is more convenient, as we can now do ourselves what we previously had to ask staff to assist with. A few clicks and the airline seat is reserved, the bill is paid – all done!

Is it really that easy? In theory: yes, but in practice many obstacles may be encountered on the way to perfect paperless web administration:

  • I do not remember the web address of the service I have to use, or I have forgotten my user name or password,

  • I do not understand the terminology used on the web form and there is nobody to help me with it,

  • the conventions for entering data are not the ones I am used to (e.g. 1.10 vs. 1,10)

  • I made an error and can correct it only by starting all over or, worse, not at all,

  • I get interrupted and when I come back I do not see what is already done and what not; often I have to start from the beginning again.

The real problems start if a case does not fit properly into the foreseen structure, or if an error is committed and must be corrected. Then I call a hotline, send email, explain the case, and much time is lost before a solution is found – if ever.

I feel exhausted because the many small and easy tasks I plan to do are not completed at the end of the day. I leave some for tomorrow, and I expect not to finish tomorrow what tomorrow brings in new tasks. The tasks should all be simple and quick; in practice the obstacles mentioned above drag many of them out to consume much more time than planned.

What was different before? I had the luxury of working with several wonderful personal assistants, who took care of a large part of the tasks of my job as a manager in a research center and later as head of a university institute. Having an assistant allows dividing the tasks into those which require technical or scientific knowledge and those which require administrative knowledge. I could delegate the administrative tasks to a human being who understood my intentions and our environment and dealt with the obstacles intelligently. Now I have to cope with an artificially intelligent computer.

Even without the assistance, paper forms were easier: filling in a paper form allowed more flexibility, was resilient to interruptions and allowed easy corrections. It is true that sometimes phone calls were necessary to understand the forms – either by myself or by the assistant – but based on my observations, more phone calls are (or would be) necessary with web forms.

The analysis indicates that the feeling of exhaustion is caused by the conflict between the expectation that everything is “easy and quick” and the experience that I cannot do it as easily, as quickly and as effortlessly as I am told all others can. I feel dumb and inadequate. But when I ask others, I see them suffering from mostly the same feelings – web forms do not work in most cases for most people who do not use the same form often. The management consultants who advocate the change from paper to web forms must realize that they offload work from central administration onto the users to a degree which is detrimental to people's motivation and detracts them from their productive tasks to learning how to cope with ever-changing web forms.

Monday, March 9, 2009

Last 20 years of Transition Countries

A paragraph from "Die Zeit":

The head of St. Petersburg TV mentioned that TV news during the Brezhnev era was horrible: only official bulletins were read, but in the evening one could go for a stroll in the park without fear. In the 1990s anything could be said on TV, but in the evening you did not go for a walk for fear of being kidnapped (5 March 2009, p. 47 – my rough translation).

What does progress and democracy mean? For whom?

Thursday, March 5, 2009

Today's Demonstration of Politicians' Lack of Leadership Quality

The current flurry of activities around bank secrecy helps to assess the quality, or the lack thereof, of politicians. What is the criterion?

Reading about Helmut Schmidt, the former German chancellor, I learned that in 1961 he wrote a book with the title 'Defense or Retaliation', analyzing the situation of the time and drawing conclusions for action based on a realistic assessment. I take this as just one example of what a politician should be: a leader, showing the way forward and opposing unrealistic positions.

Take the current economic “crisis”, which seems to produce the momentum to overcome the abuse of bank secrecy. The principle of protecting the privacy of Swiss or Austrian citizens is fine, but its importance lies mostly in helping people from other countries to shift taxes from a high-tax country to a lower-tax one, which is hardly fair to the citizens of the country where taxes are avoided.

Forward-looking politicians in Switzerland, Austria and Luxembourg should have recognized the unfair advantage they gain from the protection of tax evasion and guided their banking industry toward an economically more productive business. But it was more profitable, in the short run, to benefit from it.

Observing how the ministers in Switzerland and Austria rally behind the defense of the indefensible 'banking secrecy' clearly demonstrates their lack of leadership.

Monday, March 2, 2009

How the Tail Wags the Dog - An analysis of “too big to fail” in the case of the Swiss bank UBS

First the facts, collected from the reputable Swiss newspaper Neue Zürcher Zeitung:
The Swiss bank UBS operates in the USA as a bank advising clients on wealth management, which is essentially advice on reducing taxation. In this function, the bank has admittedly violated US laws. The bank is threatened with an indictment, with the probable consequence of losing its US banking license. It is given the alternative of cooperating with the IRS and producing data about the clients involved in the illegal operations.

From the US perspective, this is a regular, legal procedure to enforce US taxation laws. The interesting case presents itself in Switzerland: UBS is considered “system relevant” by the Swiss government, meaning it is too big to fail. Because the government judges the danger to UBS to be real and immediate, it advises UBS to cooperate with the IRS, even though this likely violates Swiss law (the famous 'banking secrecy') and certainly violates Swiss legal procedures of due process.

A company considered “system relevant” is not only protected from going bankrupt but also exempt from other laws of the land! The moral hazard of 'private gains, public losses', which we see in the discussion of US or German banking or car-industry management salaries, is multiplied when companies from small countries are involved. They may violate the law unpunished – the ultimate moral hazard!

A banker suggested in the (liberal) Neue Zürcher Zeitung holding upper management (the CEO and the board of directors) jointly and personally liable for losses, effectively reducing the moral hazard by making their position similar to that of a private owner of a company, who feels losses personally. I think this addresses part of the problem, but not its core:

The core of the problem is size (specifically, relative size): UBS is too big for Switzerland to let fail. Our social and political system is based on a division between the state and the private sphere; state operations are controlled by (democratic) politics, private operations are controlled by law. If a company becomes so large that it is “system relevant”, it is above the law (usually this is thought of only in terms of bankruptcy law, but as the above case shows, other laws are affected too, e.g. the rules protecting the privacy of its clients). Admitting that a company is 'too big to fail' means that it must be controlled politically, i.e. nationalized. How can we avoid nationalizing all the big players?

Identifying size as the problem, a simple taxation scheme punishing size would reduce the advantage of bigger companies, make mergers and acquisitions unattractive and lead to breaking up the current large companies into units which can individually be allowed to fail, thus reducing the moral hazard. The size of companies today is not the result of natural growth – the economic counterpart of Darwinian evolution, where the fittest survive – but of mergers and acquisitions (e.g. today's UBS is the result of the 1998 merger of UBS and SBC, necessary for UBS to maintain its balance sheet). The current institutions and technology award a premium to large companies, creating the moral hazard discussed above.

A very progressive tax on size could simply be based on the number of employees (and perhaps include net turnover) to capture the element which makes a company 'too big to fail'. As a simple idea, a company would pay for each employee an amount corresponding to its total number of employees; the tax liability would thus be the square of the number of employees. Small companies would pay next to nothing, but a company like UBS with 79'000 employees would be taxed about $6.2 billion, cutting substantially into a net income of US$ 16.2 billion. Such a tax does not tax economic production but compensates for the risk created by a large corporation; it could be due at every location for the size of the company controlled from there, deliberately not avoiding double taxation and effectively increasing the tax by a factor of about two.

Some Postscripts:

Ironically, UBS was the company that did not extend a credit line to Swissair and caused its immediate grounding on the morning of October 2, 2001.

Fortunately, neither Madoff's nor Stanford's pyramid scheme was too big to fail. Imagine a merger between UBS and Madoff, with the Swiss government legalizing and paying for the Ponzi scheme!

The tail (UBS as a small player in the USA) wags the dog (Switzerland). This asymmetry makes it difficult for the Swiss government to negotiate with the USA and creates a lot of David-and-Goliath rhetoric in Swiss newspapers.

Saturday, February 28, 2009

Methods of administration affect content of science

The administration of universities and other centers of scientific research is becoming more rational. In the good old days, brilliant thinkers convinced ministers to help them to (often poorly funded) chairs at universities. Some were geniuses and advanced science; most taught their field and many produced nothing.

Rational administration makes decisions following previously fixed procedures and relies as much as possible on objective measures to achieve objective decisions which can be audited. The new rational administration promises to eliminate at least the worst cases of incapable and unproductive university employees.

Unfortunately, the rational administrative paradigm has effects on how science progresses. The candidates for academic positions and promotions – i.e. the whole academic research community – organize their work around these countable measures of achievement; I was astonished that, within a decade, whole fields or national university systems refocused from networks based on friendship and personal loyalty to counting publications.

The merit of a researcher depends on his contribution to the advancement of science. The difficulty lies in observing this contribution. The measurement has progressed in three steps: publications, reviewed publications and citations.

  • Firstly, counting publications is a poor measure, but it had the effect of moving university departments which hardly ever published anything in the 1980s to producing many reports.
  • Secondly, filtering publications by outlet and counting only articles in reviewed journals increased the number of journals with a review process (again in an amazingly quick response to the incentive structure).
  • Thirdly, counting not publications but the perception of the contribution by other researchers, in the form of citations, is the current state.

These measures can be refined by judging journals by their impact factor and by calculating elaborate indices. A very useful tool is found at http://www.harzing.com/pop.htm; but the same author analyzed the possibilities of manipulating indices and warned against blindly believing them (Adler, N.; Harzing, A.W.K. (2009) When Knowledge Wins: Transcending the Sense and Nonsense of Academic Rankings, The Academy of Management Learning & Education, vol. 8, no. 1. Available online...). Wolfgang Wagner has pointed out in an email that blindly believing in academic indices is similar to blindly believing in rating agencies – the effects are slower in coming but perhaps similar.

This is mostly known, and nevertheless it is amazing (1) how quickly the whole academic social system has been transformed and (2) how blindly administrations believe in such indices to manage the university – or rather, not to manage, but to administrate the university (see http://werner-kuhn.blogspot.com/).

The system revolves around writing, publishing, reading and citing publications. How?

I have learned from examples and critical reviews how to write papers; I use the same rules to review other papers and to decide on the publication of papers in reviewed journals and conferences. I see among researchers, reviewers, editors and conference chairs a strong agreement on what constitutes a good paper. These 'good papers' constitute our advancement of science; it is not that 'advancement of science' constitutes a publishable paper.

Evidence can be collected from typical instructions to reviewers:

  • Papers must link to previous work on the same subject; references are crucial (especially references to the work of the reviewers!). Lack of 'the' pertinent citation is often sufficient to disqualify a manuscript. As a result, papers are long on reviews of previous work (less so in mathematics, more in geography). A boring waste of time for the writer, reviewer, editor and, perhaps, the eventual reader.
  • Papers should make a novel contribution to science, but surprisingly, reviewers for many journals read this as 'unpublished', not as asking what the new idea is that helps others take the next step in research. A not-yet-published application of known method X to application Y appears novel. Good journals are stricter, and authors send in manuscripts which improve known method X by a small amount to achieve X + epsilon. Such manuscripts fare well with reviewers: the subject is known and easy for the reviewer to understand (typically having published X or another improvement to X before), and the advancement (however small) is identifiable.
  • Papers must be short – page limits are often low and the attention span of readers is short. Established paradigms cannot be attacked in cases where multiple reinforcing beliefs must be questioned at once.

The optimal paper today picks up a current, well-delimited topic with sufficient previous work and improves it minimally. This is the paper that goes quickly through journal review and gets, on average, sufficient points to make it into the programs of even strictly reviewed conferences. I am bored by reading this infinite succession of papers repeating what is already known and improving it minimally (if at all).

Such papers count, are cited and cite previous papers (and often not the original first publication); they make our index-based, rational university administration happy.

What papers are not published?

  1. Novel ideas – because there is not enough previous work and reviewers are not familiar with the topic.

  2. Critical papers – because some reviewers will not like the critique (of their work) and react negatively; editors typically go with the negative vote.

  3. Papers introducing new points of view – because reviewers will claim that this has been known before, and editors will not force them to substantiate the claim.

What papers would I like to read?

Papers with controversial ideas that can be discussed – when did you last see a discussion in a scientific journal? Substantive papers with reviews varying widely between very good and very poor could advance science more than just another tame epsilon improvement.