# Light

The bright meadows - this place should be safe for children.

# English

Here you can find my English pages. When there are enough of them, they might get the same or a similar structure as the German ones.

You can view these pages like a blog by checking the

< < new english posts (weblog) > >

- they also feature an RSS feed.

You can also find more of my English writings by looking at the blog entries on LJ which I tagged as English.

Best wishes,
Arne

# A tale of foxes and freedom

Singing the songs of creation to shape a free world.

One day the silver kit asked the grey one:

“Who made the light, which brightens our singing place?”

The grey one looked at him lovingly and asked the kit to sit with him, for he would tell a story of old, a story from the days when the tribe was young.

“Once there was a time, when the world was light and happiness. During the day the sun shone on the savannah, and at night the moon cast the grass in a silver sheen.

It was during that time, when there were fewer animals in the wild, that the GNUs learned the working of songs of creation, deep and vocal, and they taught us and everyone their new findings, and the life of our skulk was happiness and love.

But while the GNUs spread their songs and made new songs for every idea they could imagine, others invaded the plains, and they stole away the songs and sang them in their own way. And they drowned out the light, and with it went the happiness and love.

And when everyone shivered in cold and darkness, and stillness and despair were drawn over the land, the others created a false light which cast small enclosures into a pale flicker, in which they let in only those animals who were willing to wear ropes on their throats and limbs, and many animals went to them to escape the darkness, while some fell deeper still and joined the others in enslaving their former friends.

Upon seeing this, the fiercest of the GNUs, the last one of the original herd, was filled with a terrible anger to see the songs of creation turned into a tool for slavery, and he made one special song which created a spark of true light in the darkness which could not be taken away, and which exposed the falsehood in the light of the others. And whenever he sang the song, those who were near him were touched by happiness.

But the others were many and the GNU was alone, and many animals succumbed to the ropes or the ropers and could move no more on their own.

To spread the song, the GNU now searched for other animals who would sing with it, and the song spread, and with it the freedom.

It was during these days, that the GNU met our founders, who lived in golden chains in a palace of glass.

In this palace they thought themselves lucky, and though the light of the palace grew ever paler and the chains grew heavier with every passing day, they didn't leave, because they feared the utter darkness out there.

When they then saw the GNU, they asked him: "Isn't your light weaker than this whole palace?" and the GNU answered: "Not if we sing it together", and they asked "But how will we eat in the darkness?" and the GNU answered "you'll eat in the light of your songs, and plants will grow wherever you sing", and they asked "But is it a song of foxes?" and the GNU said: "You can make it so", and he began to sing, and when our founders joined in, the light became shimmering silver like the moon they still remembered from the days and nights of light, and they rejoiced in its brightness.

And whenever this light touched the glass of the palace, the glass paled and showed its true being, and where the light touched the chains, they withered, and our founders went into the darkness with the newfound light of the moon as companion, and they thanked the GNU and promised to help it, whenever they were needed.

Then they set off to learn the many songs of the world and to spread the silver light of the moon wherever they came.

And so our founders learned to sing the light, which brightens every one of our songs, and as our skulk grew bigger, the light grew stronger and it became a little moon, which will grow with each new kit, until its light will fill the whole world again one day.”

The grey one looked around where many kits had quietly found a place, and then he laughed softly, before he got up to fetch himself a meal for the night, and the kits began to speak all at once about his story. And they spoke until the silver kit raised its voice and sang the song of moonlight[1], and they joined in and the song filled their hearts with joy and the air with light, and they knew that wherever they would travel, this skulk was where their hearts felt home.

PS: I originally wrote this story for Phex, a free Gnutella-based p2p filesharing program which also has an anonymous sibling (i2phex). It’s an even stronger fit for Firefox, though.

PPS: This story is far less loosely based on facts than it looks. There are songs of creation, namely computer programs, which once were free and which were truly taken away and used for casting others into darkness. And there was and still is the fierce GNU with his song of light and freedom, and he did spread it to make it into GNU/Linux and found the free software community we know today. If you want to know more about the story as it happened in our world, just read the less flowery story of Richard Stallman, free hackers and the creation of GNU or listen to the free song Infinite Hands.

PPPS: License: This text is released under the GNU FDL without invariant sections and other free licenses by Arne Babenhauserheide (who holds the copyright on it).

1. To make it perfectly clear: This moonlight is definitely not the abhorrent and patent-stricken Silverlight port from the Mono project. The foxes sing a song of freedom. They wouldn’t accept the shackles of Microsoft after having found their freedom. Sadly the PR departments of some groups try to take over analogies and strong names. Don’t be fooled by them. The moonlight in our songs is the light coming from the moon which resonates in the voices of the kits. And that light is free as in freedom, from copyright restrictions as well as from patent restrictions – though there certainly are people who would love to patent the light of the moon. Those are the ones we need to fight to defend our freedom.

# Emacs

Cross platform, Free Software, almost all features you can think of, graphical and in the shell, learn once - use for everything. » Get Emacs «

Emacs is a self-documenting, extensible editor, a development environment and a platform for Lisp programs - for example programs to make programming easier, but also todo-lists on steroids, reading email, posting to identi.ca, and a host of other stuff (learn lisp).

It is also one of the origins of GNU and free software (Emacs History).

In Markdown-mode it looks like this:

More on Emacs on my German Emacs page.

# Babcore: Emacs Customizations everyone should have

## 1 Intro

PDF-version (for printing)

orgmode-version (for editing)

repository (for forking)

project page (for fun ☺)

Emacs Lisp (to use)

Package (to install)

I have been tweaking my Emacs configuration for years now, and I have added quite some cruft. But while searching for the right way to work, I also found some gems which I sorely miss in pristine Emacs.

This file is about those gems.

Babcore is strongly related to Prelude. Actually it is exactly like Prelude, just with the stuff I consider essential.

But before we start, there is one crucial piece of advice which everyone who uses Emacs should know:

C-g: abort


Hold control and hit g.

That gets you out of almost any situation. If anything goes wrong, just hit C-g repeatedly till the problem is gone - or you cooled off far enough to realize that a no-op is the best way to react.

To repeat: If anything goes wrong, just hit C-g.

As Emacs package, babcore needs a proper header.

;; Copyright (C) 2013 Arne Babenhauserheide

;; Author: Arne Babenhauserheide (and various others in Emacswiki and elsewhere).
;; Maintainer: Arne Babenhauserheide
;; Created: 03 April 2013
;; Version: 0.0.2
;; Keywords: core configuration

;; This program is free software; you can redistribute it and/or
;; modify it under the terms of the GNU General Public License
;; as published by the Free Software Foundation; either version 3
;; of the License, or (at your option) any later version.

;; This program is distributed in the hope that it will be useful,
;; but WITHOUT ANY WARRANTY; without even the implied warranty of
;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
;; GNU General Public License for more details.

;; You should have received a copy of the GNU General Public License
;; along with this program. If not, see <http://www.gnu.org/licenses/>.

;;; Commentary:
;; Quick Start / installation:
;; 1. Download this file and put it next to other files Emacs includes
;; 2. Add this to your .emacs file and restart emacs:
;;      (require 'babcore)
;;
;; Use Case: Use a common core configuration so you can avoid the
;;   tedious act of gathering all the basic stuff over the years and
;;   can instead concentrate on the really cool new stuff Emacs offers
;;   you.
;;
;; Todo:
;;

;;; Change Log:
;; 2013-04-03 - Initial release

;;; Code:



Additionally it needs the proper last line. See finish up for details.

## 3 Feature Gems

### 3.1 package.el, full setup

The first thing you need in Emacs 24. This gives you a convenient way to install just about anything, so you really should use it.

Also I hope that it will help consolidate the various Emacs tips which float around into polished packages, by virtue of giving people a way to actually get a package by name - and keep it updated almost automatically.

;; Convenient package handling in emacs

(require 'package)
;; and the old elpa repo
(add-to-list 'package-archives
             '("elpa" . "http://tromey.com/elpa/") t)
;; and automatically parsed versiontracking repositories.
(add-to-list 'package-archives
             '("melpa" . "http://melpa.org/packages/") t)

;; Make sure a package is installed
(defun package-require (package)
  "Install a PACKAGE unless it is already installed
or a feature with the same name is already active.

Usage: (package-require 'package)"
  ; try to activate the package with at least version 0.
  (package-activate package '(0))
  ; try to just require the package. Maybe the user has it in his local config
  (condition-case nil
      (require package)
    ; if we cannot require it, it does not exist, yet. So install it.
    (error (package-install package))))

;; Initialize installed packages
(package-initialize)
;; package init not needed, since it is done anyway in emacs 24 after reading the init
;; but we have to load the list of available packages
(package-refresh-contents)



### 3.2 Flymake

Flymake is an example of a quite complex feature which really everyone should have.

It can check any kind of code, and actually anything which can be verified with a program which gives line numbers.

As an alternative you might want to look into flycheck. It looks really cool, but I don’t have experience with it yet, so I cannot recommend it yet.

;; Flymake: On the fly syntax checking

; stronger error display
(defface flymake-message-face
  '((((class color) (background light)) (:foreground "#b2dfff"))
    (((class color) (background dark))  (:foreground "#b2dfff")))
  "Flymake message face")

; show the flymake errors in the minibuffer
(package-require 'flymake-cursor)


### 3.3 auto-complete

This gives you inline auto-completion preview with an overlay window - even in the text console. In places this goes as far as API hints (for example for elisp code). Absolutely essential.

;; Inline auto completion and suggestions
(package-require 'auto-complete)


### 3.4 ido

To select a file in a huge directory, just type a few letters from that file’s name in the correct order, leaving out the non-identifying ones. Darn cool!

; use ido mode for file and buffer Completion when switching buffers
(require 'ido)
(ido-mode t)


### 3.5 printing

Printing in pristine emacs is woefully inadequate, even though it is a standard function in almost all other current programs.

It can be easy, though:

;; Convenient printing
(require 'printing)
; make sure we use localhost as cups server
(setenv "CUPS_SERVER" "localhost")
(package-require 'cups)


### 3.6 outlining everywhere

Code folding is pretty cool to get an overview of a complex structure. So why shouldn’t you be able to do that with any kind of structured data?

; use allout minor mode to have outlining everywhere.
(allout-mode)


### 3.7 Syntax highlighting

Font-lock is the emacs name for syntax highlighting - in just about anything.

; syntax highlighting everywhere
(global-font-lock-mode 1)


### 3.8 org and babel

Org-mode is that kind of simple thing which evolves to a way of life when you realize that most of your needs actually are simple - and that the complex things can be done in simple ways, too.

It provides simple todo-lists, inline-code evaluation (as in this file) and a full-blown literate programming, reproducible research publishing platform. All from the same simple basic structure.

It might change your life… and it is the only planning solution which ever prevailed against my way of life and organization.

; Activate org-mode
(require 'org)
; and some more org stuff

; http://orgmode.org/guide/Activation.html#Activation

; The following lines are always needed.  Choose your own keys.
; (these are the standard bindings from the Org guide linked above)
(global-set-key "\C-cl" 'org-store-link)
(global-set-key "\C-ca" 'org-agenda)
(global-set-key "\C-cb" 'org-iswitchb)

; And add babel inline code execution
; babel, for executing code in org-mode.
(org-babel-do-load-languages
 'org-babel-load-languages
 ; load all languages marked with (lang . t).
 '((C . t)
   (R . t)
   (asymptote)
   (awk)
   (calc)
   (clojure)
   (comint)
   (css)
   (ditaa . t)
   (dot . t)
   (emacs-lisp . t)
   (fortran)
   (gnuplot . t)
   (io)
   (java)
   (js)
   (latex)
   (ledger)
   (lilypond)
   (lisp)
   (matlab)
   (maxima)
   (mscgen)
   (ocaml)
   (octave)
   (org . t)
   (perl)
   (picolisp)
   (plantuml)
   (python . t)
   (ref)
   (ruby)
   (sass)
   (scala)
   (scheme)
   (screen)
   (sh . t)
   (shen)
   (sql)
   (sqlite)))


### 3.9 Nice line wrapping

If you’re used to other editors, you’ll want to see lines wrapped nicely at word boundaries instead of lines which either get cut at the end or in the middle of a word.

global-visual-line-mode gives you that.

; Add proper word wrapping
(global-visual-line-mode t)


### 3.10 goto-chg

This is the kind of feature which looks tiny: Go to the place where you last changed something.

And then you get used to it and it becomes absolutely indispensable.

; go to the last change
(package-require 'goto-chg)
(global-set-key (kbd "C-.") 'goto-last-change)
; M-. can conflict with etags tag search. But C-. can get overwritten
; by flyspell-auto-correct-word. And goto-last-change needs a really
; fast key.
(global-set-key (kbd "M-.") 'goto-last-change)


### 3.11 flyspell

Whenever you write prose, a spellchecker is worth a lot, but it should not unnerve you.

Install aspell, then activate flyspell-mode whenever you need it.

It needs some fiddling, though, to make it work nicely with non-English text.

; Make german umlauts work.
(setq locale-coding-system 'utf-8)
(set-terminal-coding-system 'utf-8)
(set-keyboard-coding-system 'utf-8)
(set-selection-coding-system 'utf-8)
(prefer-coding-system 'utf-8)

; aspell and flyspell
(setq-default ispell-program-name "aspell")

; make aspell faster but less correct
(setq ispell-extra-args '("--sug-mode=ultra" "-w" "äöüÄÖÜßñ"))
(setq ispell-list-command "list")


### 3.12 control-lock

If you have to do the same action repeatedly, for example with flyspell hitting next-error and next-correction hundreds of times, the need to press control can really be a strain for your fingers.

Sure, you can use viper-mode and retrain your hands for the completely alien command set of vim.

A simpler solution is adding a sticky control key - and that’s what control-lock does: You get modal editing with your standard emacs commands.

Since I am German, I simply use the German umlauts to toggle control-lock. You will likely want to choose your own keys here.

; control-lock-mode, so we can enter a vi style command-mode with standard emacs keys.
(package-require 'control-lock)
; also bind M-ü and M-ä to toggling control lock.
(global-set-key (kbd "M-ü") 'control-lock-toggle)
(global-set-key (kbd "C-ü") 'control-lock-toggle)
(global-set-key (kbd "M-ä") 'control-lock-toggle)
(global-set-key (kbd "C-ä") 'control-lock-toggle)
(global-set-key (kbd "C-z") 'control-lock-toggle)


### 3.13 Basic key chords

This is the second strike for saving your pinky. Yes, Emacs is hard on the pinky. Even if it were completely designed to avoid strain on the pinky, it would still be hard, because any system in which you do not have to reach for the mouse is hard on the pinky.

But it also provides some of the neatest tricks to reduce that strain, so you can make Emacs your pinky saviour.

The key chord mode allows you to hit any two keys at (almost) the same time to invoke commands. Since this can interfere with normal typing, I would only use it for letters which are rarely typed after each other.

The default chords have proven themselves to be useful in years of working with Emacs.

; use key chords to invoke commands
(package-require 'key-chord)
(key-chord-mode 1)
; buffer actions
(key-chord-define-global "vg"     'eval-region)
(key-chord-define-global "vb"     'eval-buffer)
(key-chord-define-global "cy"     'yank-pop)
(key-chord-define-global "cg"     "\C-c\C-c")
; frame actions
(key-chord-define-global "xo"     'other-window)
(key-chord-define-global "x1"     'delete-other-windows)
(key-chord-define-global "x0"     'delete-window)
(defun kill-this-buffer-if-not-modified ()
  (interactive)
  (kill-buffer-if-not-modified (current-buffer))
  (abort-recursive-edit))
(key-chord-define-global "xk"     'kill-this-buffer-if-not-modified)
; file actions
(key-chord-define-global "bf"     'ido-switch-buffer)
(key-chord-define-global "cf"     'ido-find-file)
(key-chord-define-global "vc"     'vc-next-action)



To complement these tricks, you should also install and use workrave or at least type-break-mode.
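workrave is configured outside Emacs, but type-break-mode is built in; as a minimal sketch (an optional suggestion, not part of babcore), one line in your init file switches it on:

; remind me to take breaks from typing (type-break.el ships with emacs)
(type-break-mode 1)


It then nags you at configurable intervals to step away from the keyboard.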

### 3.14 X11 tricks

These are ways to improve the integration of Emacs in a graphical environment.

We have this cool editor. But it is from the 90s, and some of the more modern concepts of graphical programs have not yet been integrated into its core. Maybe because everyone just adds them to the custom setup :)

On the other hand, Emacs always provided split windows and many of the “new” window handling functions in dwm and similar - along with a level of integration with which normal graphical desktops still have to catch up. Open a file, edit it as text, quickly switch to org-mode to be able to edit an ASCII table more efficiently, then switch to html mode to add some custom structure - and all that with a consistent set of key bindings.

But enough with the glorification, let’s get to the integration of stuff where Emacs arguably still has weaknesses.

#### 3.14.1 frame-to-front

Get the current Emacs frame to the front. You can for example call this via emacsclient and set it as a keyboard shortcut in your desktop (for me it is F12):

emacsclient -e "(show-frame)"


This sounds much easier than it proves to be in the end… but luckily you only have to solve it once, then you can google it anywhere…

(defun show-frame (&optional frame)
  "Show the current Emacs frame or the FRAME given as argument.

And make sure that it really shows up!"
  (raise-frame frame)
  ; yes, you have to call this twice. Don’t ask me why…
  ; select-frame-set-input-focus calls x-focus-frame and does a bit of extra work.
  (select-frame-set-input-focus (selected-frame))
  (select-frame-set-input-focus (selected-frame)))


#### 3.14.2 urgency hint

Make Emacs announce itself in the tray.

;; let emacs blink when something interesting happens.
;; in KDE this marks the active Emacs icon in the tray.
(defun x-urgency-hint (frame arg &optional source)
  "Set the x-urgency hint for FRAME to ARG:

- If ARG is nil, unset the urgency.
- If ARG is any other value, set the urgency.

If you unset the urgency, you still have to visit the frame to make the urgency setting disappear (at least in KDE)."
  (let* ((wm-hints (append (x-window-property
                            "WM_HINTS" frame "WM_HINTS"
                            source nil t) nil))
         (flags (car wm-hints)))
    ; (message flags)
    (setcar wm-hints
            (if arg
                (logior flags #x00000100)
              (logand flags #x1ffffeff)))
    (x-change-window-property "WM_HINTS" wm-hints frame "WM_HINTS" 32 t)))

(defun x-urgent (&optional arg)
  "Mark the current emacs frame as requiring urgent attention.

With a prefix argument which does not equal a boolean value of nil, remove the urgency flag (which might or might not change display, depending on the window manager)."
  (interactive "P")
  (let ((frame (car (car (cdr (current-frame-configuration))))))
    (x-urgency-hint frame (not arg))))


#### 3.14.3 fullscreen mode

Hit F11 to enter fullscreen mode. Any self-respecting program should have that… and now Emacs does, too.

; fullscreen, taken from http://www.emacswiki.org/emacs/FullScreen#toc26
; should work for X and OSX with emacs 23.x (TODO find minimum version).
; for windows it uses (w32-send-sys-command #xf030) (#xf030 == 61488)
(defvar babcore-fullscreen-p t "Check if fullscreen is on or off")
(setq babcore-stored-frame-width nil)
(setq babcore-stored-frame-height nil)

(defun babcore-non-fullscreen ()
  (interactive)
  (if (fboundp 'w32-send-sys-command)
      ;; WM_SYSCOMMAND restore #xf120
      (w32-send-sys-command 61728)
    (progn (set-frame-parameter nil 'width
                                (if babcore-stored-frame-width
                                    babcore-stored-frame-width 82))
           (set-frame-parameter nil 'height
                                (if babcore-stored-frame-height
                                    babcore-stored-frame-height 42))
           (set-frame-parameter nil 'fullscreen nil))))

(defun babcore-fullscreen ()
  (interactive)
  (setq babcore-stored-frame-width (frame-width))
  (setq babcore-stored-frame-height (frame-height))
  (if (fboundp 'w32-send-sys-command)
      ;; WM_SYSCOMMAND maximize #xf030
      (w32-send-sys-command 61488)
    (set-frame-parameter nil 'fullscreen 'fullboth)))

(defun toggle-fullscreen ()
  (interactive)
  (setq babcore-fullscreen-p (not babcore-fullscreen-p))
  (if babcore-fullscreen-p
      (babcore-non-fullscreen)
    (babcore-fullscreen)))

(global-set-key [f11] 'toggle-fullscreen)


#### 3.14.4 default key bindings

I always hate it when some usage pattern which is consistent almost everywhere fails with some program. Especially if that is easily avoidable.

This code fixes that for Emacs in KDE.

; Default KDE keybindings to integrate Emacs more nicely into KDE.

; can treat C-m as its own mapping.
; (define-key input-decode-map "\C-m" [?\C-1])

(defun revert-buffer-preserve-modes ()
  (interactive)
  (revert-buffer t nil t))

; C-m shows/hides the menu bar - thanks to http://stackoverflow.com/questions/2298811/how-to-turn-off-alternative-enter-with-ctrlm-in-linux
(defconst kde-default-keys-minor-mode-map
  (let ((map (make-sparse-keymap)))
    (set-keymap-parent map text-mode-map)
    (define-key map [f5] 'revert-buffer-preserve-modes)
    (define-key map [?\C-+] 'text-scale-increase)
    (define-key map [?\C--] 'text-scale-decrease) ; shadows 'negative-argument which is also available via M-- and C-M--, though.
    (define-key map [C-kp-subtract] 'text-scale-decrease)
    map)
  "Keymap for `kde-default-keys-minor-mode'.")

;; Minor mode for keypad control
(define-minor-mode kde-default-keys-minor-mode
:global t
:init-value t
:lighter ""
:keymap 'kde-default-keys-minor-mode-map
)


### 3.15 Insert unicode characters

Actually you do not need any configuration here. Just use

M-x ucs-insert


to insert any Unicode character. If you want to see them while selecting, have a look at xub-mode from Ergo Emacs.
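If you use it a lot, you can also put it on a key; a minimal sketch (the key C-c u is just an example choice - C-c plus a letter is reserved for users):

; quick access to unicode insertion (example binding, pick your own key)
(global-set-key (kbd "C-c u") 'ucs-insert)


After that, C-c u prompts for a character name or code point, with completion.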

### 3.16 Highlight TODO and FIXME in comments

This is a default feature in most IDEs. Since Emacs allows you to build your own IDE, it does not offer it by default… but it should, since that does not disturb anything. So we add it.

fic-ext-mode highlights TODO and FIXME in comments for common programming languages.

;; Highlight TODO and FIXME in comments
(package-require 'fic-ext-mode)
(defun add-something-to-mode-hooks (mode-list something)
  "helper function to add a callback to multiple hooks"
  (dolist (mode mode-list)
    (add-hook (intern (concat (symbol-name mode) "-mode-hook")) something)))

(add-something-to-mode-hooks '(c++ tcl emacs-lisp python text markdown latex) 'fic-ext-mode)



### 3.17 Save macros as functions

Now for something which should really be provided by default: You just wrote a cool emacs macro, and you are sure that you will need that again a few times.

Well, then save it!

In standard emacs that needs multiple steps. And I hate that. Something as basic as saving a macro should only need one single step. It does now (and Emacs is great, because it allows me to do this!).

This bridges the gap between function definitions and keyboard macros, making keyboard macros something like first class citizens in your Emacs.

; save the current macro as reusable function.
(defun save-current-kbd-macro-to-dot-emacs (name)
  "Save the current macro as named function definition inside
your initialization file so you can reuse it anytime in the
future."
  (interactive "SSave Macro as: ")
  (name-last-kbd-macro name)
  (save-excursion
    (find-file-literally user-init-file)
    (goto-char (point-max))
    (insert "\n\n;; Saved macro\n")
    (insert-kbd-macro name)
    (insert "\n")))


### 3.18 Transparent GnuPG encryption

If you have a diary or similar, you should really use this. It only takes a few lines of code, but these few lines are the difference between encryption for those who know they need it and encryption for everyone.

; Activate transparent GnuPG encryption.
(require 'epa-file)
(epa-file-enable)


### 3.19 Colored shell commands

A shell without colors is really hard to read. Let’s make that easier.

; colored shell commands via C-!
(defun babcore-shell-execute (cmd)
  "Execute a shell command in an interactive shell buffer."
  (interactive "sShell command: ")
  (shell (get-buffer-create "*shell-commands-buf*"))
  (process-send-string (get-buffer-process "*shell-commands-buf*") (concat cmd "\n")))
(global-set-key (kbd "C-!") 'babcore-shell-execute)


### 3.20 Save backups in ~/.local/share/emacs-saves

This is just an aesthetic touch: use the directories from the freedesktop specification for save files.

Thanks to the folks at CERN for this.

(setq backup-by-copying t      ; don't clobber symlinks
      backup-directory-alist
      '(("." . "~/.local/share/emacs-saves"))    ; don't litter my fs tree
      delete-old-versions t
      kept-new-versions 6
      kept-old-versions 2
      version-control t)       ; use versioned backups



### 3.21 Basic persistency

If I restart the computer I want my editor to make it easy for me to continue where I left off.

It’s bad enough that most likely my brain buffers were emptied. At least my editor should remember how to go on.

#### 3.21.1 saveplace

If I reopen a file, I want to start at the line at which I was when I closed it.

; save the place in files
(require 'saveplace)
(setq-default save-place t)


#### 3.21.2 recentf

Also I want to be able to see the most recently opened files. Almost every single program on my computer has a “recently opened files” list, and now emacs does, too.

; show recent files
(package-require 'recentf)
(recentf-mode 1)


#### 3.21.3 savehist

And I want to be able to call my recent commands in the minibuffer. I normally don’t type the full command name anyway, but rather C-r followed by a small part of the command. Losing that on restart really hurts, so I want to avoid that loss.

; save minibuffer history
(require 'savehist)
(savehist-mode t)


#### 3.21.4 desktop globals

This is the chainsaw of persistency. I commented it out, because it can be overkill and actually disturb more than it helps, when it recovers stuff I did not need.

;; save registers and open files over restarts,
;; thanks to http://www.xsteve.at/prg/emacs/power-user-tips.html
;; save a list of open files in ~/.emacs.desktop
;; save the desktop file automatically if it already exists
;(setq desktop-save 'if-exists)
;(desktop-save-mode 1)

;; ;; save a bunch of variables to the desktop file
;; ;; for lists specify the len of the maximal saved data also
;; (setq desktop-globals-to-save
;;       (append '((extended-command-history . 300)
;;                 (file-name-history        . 100)
;;                 (grep-history             . 30)
;;                 (compile-history          . 30)
;;                 (minibuffer-history       . 5000)
;;                 (query-replace-history    . 60)
;;                 (regexp-history           . 60)
;;                 (regexp-search-ring       . 20)
;;                 (search-ring              . 2000)
;;                 (shell-command-history    . 50)
;;                 tags-file-name
;;                 register-alist)))

;; ;; restore only 5 buffers at once and the rest lazily
;; (setq desktop-restore-eager 5)

; maybe nicer: http://github.com/doomvox/desktop-recover



### 3.22 use the system clipboard

Finally one more minor adaptation: Treat the clipboard gracefully. This is a tightrope stunt and getting it wrong can feel awkward.

This is the only setting for which I’m not sure that I got it right, but it’s what I use…

; Use the system clipboard
(setq x-select-enable-clipboard t)


### 3.23 legalese

In case you mostly write free software, you might be as weary of hunting for the license header and copy-pasting it into new files as I am. Free licenses, and especially copyleft licenses, are one of the core safeguards of free culture, because they give free software developers an edge over proprietarizing folks. But they are a pain to add to every file…

Well: No more. We now have legalese mode to take care of the inconvenient legal details for us, so we can focus on the code we write. Just call M-x legalese to add a GPL header, or C-u M-x legalese to choose another license.

(package-require 'legalese)


### 3.24 finish up

Make it possible to just (require 'babcore) and add the proper package footer.

(provide 'babcore)
;;; babcore.el ends here


## 4 Summary

With the babcore you have a core setup which exposes some of the essential features of Emacs and adds basic integration with the system which is missing in pristine Emacs.

Now have a look at M-x package-list-packages to see where you can still go - or just use Emacs and add what you need along the way. The package list is your friend, as is Emacswiki.

Happy Hacking!

Date: 2013-04-03
Org version 7.9.2 with Emacs version 24

Note: As almost everything on this page, this text and code is available under the GPLv3 or later.

# Custom link completion for org-mode in 25 lines (emacs)

Update (2013-01-23): The new org-mode removed (org-make-link), so I replaced it with (concat) and uploaded a new example-file: org-custom-link-completion.el.
Happy Hacking!

## 1 Intro

I recently set up custom completion for two of my custom link types in Emacs org-mode. When I wrote about that on identi.ca, Greg Tucker-Kellog said that he’d like to see it. So I decided to publish my code.

The link types I regularly need are papers (PDFs of research papers I take notes about) and bib (the bibtex entries for the papers). The following are my custom link definitions:

(setq org-link-abbrev-alist
'(("bib" . "~/Dokumente/Uni/Doktorarbeit-inverse-co2-ch4/aufschriebe/ref.bib::%s")
("notes" . "~/Dokumente/Uni/Doktorarbeit-inverse-co2-ch4/aufschriebe/papers.org::#%s")
("papers" . "~/Dokumente/Uni/Doktorarbeit-inverse-co2-ch4/aufschriebe/papers/%s.pdf")))


For some weeks I had copied the info into the links by hand. Thus an entry about a paper looks like the following.

* Title [[bib:identifier]] [[papers:name_without_suffix]]


This already suffices to be able to click the links for opening the PDF or showing the bibtex entry. Entering the links was quite inconvenient, though.

## 2 Implementation: papers

The trick to completion in org-mode is to create the function org-LINKTYPE-complete-link.
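As a sketch of that convention (the “demo” link type and its prompt are made up purely for illustration): once a function with the right name exists, org-mode offers it when you insert a link with C-c C-l and choose that link type.

; completion for a hypothetical "demo" link type:
; org-mode calls org-demo-complete-link when you insert a demo: link.
(defun org-demo-complete-link (&optional arg)
  "Read a string and return it as a demo: link."
  (concat "demo:" (read-string "demo: ")))


The function just has to return the finished link as a string; how it gathers the input is entirely up to you.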

Let’s begin with the papers-links, because their completion is more basic than the completion of the bib-link.

First I created a helper function to replace all occurrences of a substring in a string.

(defun string-replace (this withthat in)
  "Replace THIS with WITHTHAT in the string IN"
  (with-temp-buffer
    (insert in)
    (goto-char (point-min))
    (replace-string this withthat)
    (buffer-substring (point-min) (point-max))))



As you can see, it’s quite simple: just create a temporary buffer and use the default replace-string function I’m using daily while editing. Don’t assume I figured out that elegant way myself. I just searched for it in the net and adapted the nicest code I found :)

Now we get to the real completion:

<<string-replace>>
(defun org-papers-complete-link (&optional arg)
  "Create a papers link using completion."
  (setq file (read-file-name "papers: " "papers/"))
  <<cleanup-link>>
  (concat "papers:" file))



The real magic is in read-file-name. That just uses the file-completion with a custom command prefix.

cleanup-link is only a small list of setq’s which removes parts of the filepath to make it compatible with the syntax for paper-links:

(let ((pwd (file-name-as-directory (expand-file-name ".")))
      (pwd1 (file-name-as-directory (abbreviate-file-name
                                     (expand-file-name ".")))))
  (setq file (string-replace "papers/" "" file))
  (setq file (string-replace pwd "" (string-replace pwd1 "" file)))
  (setq file (string-replace ".pdf" "" file)))


And that’s it. A few lines of simple elisp and I have working completion for a custom link-type which points to research papers - and can easily be adapted when I change the location of the papers.

Now don’t think I came up with all that elegant code myself. My favorite language is Python and I don’t think that I should have to know emacs lisp as well as Python. So I copied and adapted most of it from existing functions in emacs. Just use C-h f <function-name> and then follow the link to the code :)

Remember: This is free software. Reuse and learning from existing code is not just allowed but encouraged.

## 3 Implementation: bib

For the bib-links, I chose an even easier way. I just reused reftex-do-citation from reftex-mode:

<<reftex-setup>>
(defun org-bib-complete-link (&optional arg)
  "Create a bibtex link using reftex autocompletion."
  (concat "bib:" (reftex-do-citation nil t nil)))



For reftex-do-citation to allow using the bib-style link, I needed some setup, but I already had that in place for explicit citation inserting (not generalized as link-type), so I don’t count following as part of the actual implementation. Also I likely copied most of it from emacs-wiki :)

(defun org-mode-reftex-setup ()
  (interactive)
  (and (buffer-file-name) (file-exists-p (buffer-file-name))
       (progn
         ; Reftex should use the org file as master file. See C-h v TeX-master for infos.
         (setq TeX-master t)
         (turn-on-reftex)
         ; don’t ask for the tex master on every start.
         (reftex-parse-all)
         (reftex-set-cite-format
          '((?b . "[[bib:%l][%l-bib]]")
            (?n . "[[notes:%l][%l-notes]]")
            (?p . "[[papers:%l][%l-paper]]")
            (?t . "%t")
            (?h . "** %t\n:PROPERTIES:\n:Custom_ID: %l\n:END:\n[[papers:%l][%l-paper]]")))))
  (define-key org-mode-map (kbd "C-c )") 'reftex-citation)
  (define-key org-mode-map (kbd "C-c (") 'org-mode-reftex-search))



And that’s it. My custom link types now support useful completion.

## 4 Result

For papers, I get an interactive file-prompt to just select the file. It directly starts in the papers folder, so I can simply enter a few letters which appear in the paper filename and hit enter (thanks to ido-mode).

For bibtex entries, a reftex-window opens in a lower split-screen and asks me for some letters which appear somewhere in the bibtex entry. It then shows all fitting entries in brief but nice format and lets me select the entry to enter. I simply move with the arrow-keys, C-n/C-p, n/p or even C-s/C-r for searching, till the correct entry is highlighted. Then I hit enter to insert it.

And that’s it. I hope you liked my short excursion into the world of extending emacs to stay focused while connecting separate data sets.

I never saw a level of (possible) integration and consistency anywhere else which even came close to the possibilities of emacs.

And by the way: This article was also written in org-mode, using its literate programming features for code-samples which can actually be executed and extracted at will.

To put it all together I just need the following:

<<org-papers-complete-link>>


Now I use M-x org-babel-tangle to write the code to the file org-custom-link-completion.el. I attached that file for easier reference: org-custom-link-completion.el :)

Have fun with Emacs!

PS: Should something be missing here, feel free to get it from my public .emacs.d. I only extracted what seemed important, but I did not check if it runs in a pristine Emacs. My at-home branch is “fluss”.

## Footnotes:

1 : Creating a custom function for string replace might not have been necessary, because some function might already exist for that. But writing it myself was faster than searching for it.
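As an aside, stock Emacs can do a literal replacement with the built-in replace-regexp-in-string, if the pattern is regexp-quoted:

```
;; built-in alternative to the string-replace helper above:
(replace-regexp-in-string (regexp-quote ".pdf") "" "some-paper.pdf" t t)
;; evaluates to "some-paper"
```

The two trailing t arguments keep the case fixed and treat the replacement as a literal string.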

# Easily converting ris-citations to bibtex with emacs and bibutils

## The problem

Nature only gives me ris-formatted citations, but I use bibtex.

Also ris is far from human readable.

## The background

ris can be reformatted to bibtex, but doing that manually disturbs my workflow when getting references while taking notes about a paper in emacs.

I tend to search online for references, often just using google scholar, so when I find a ris reference, the first data I get for the ris-citation is a link.

## The solution

### Making it possible

bibutils1 can convert ris to an intermediate xml format and then convert that to bibtex.

wget -O reference.ris RIS_URL
cat reference.ris | ris2xml | xml2bib >> ref.bib


This solves the problem, but it is not convenient, because I have to switch to the terminal, download the file, convert it and append the result to my bibtex file.

### Making it convenient

With the first step, getting the ris-citation is quite inconvenient. I need 3 steps just for getting a citation.

But those steps are always the same, and since I use Emacs, I can automate and integrate them very easily. So I created a simple function in emacs, which takes the url of a ris citation, converts it to bibtex and appends the result to my local bibtex file. Now I get a ris citation with a simple call to

M-x ris-citation-to-bib


Then I enter the url and the function appends the citation to my bibtex file.2

Feel free to integrate it into your own emacs setup (additionally to the GPLv3 you can use any license used by emacswiki or worg).

(defun ris-citation-to-bib (&optional ris-url)
  "Get a ris citation as bibtex in one step. Just call M-x
ris-citation-to-bib and enter the ris url.
Requires bibutils: http://sourceforge.net/p/bibutils/home/Bibutils/
"
  (interactive "Mris-url: ")
  (save-excursion
    (let ((bib-file "/home/arne/aufschriebe/ref.bib")
          (bib-buffer (get-buffer "ref.bib"))
          (ris-buffer (url-retrieve-synchronously ris-url)))
      ; firstoff check if we have the bib buffer. If yes, move point to the last line.
      (if (not (member bib-buffer (buffer-list)))
          (setq bib-buffer (find-file-noselect bib-file)))
      (progn
        (set-buffer bib-buffer)
        (goto-char (point-max)))
      (if ris-buffer
          (set-buffer ris-buffer))
      (shell-command-on-region (point-min) (point-max) "ris2xml | xml2bib" ris-buffer)
      (let ((pmin (- (search-forward "@") 1))
            (pmax (search-forward "}")))
        (if (member bib-buffer (buffer-list))
            (progn
              (append-to-buffer bib-buffer pmin pmax)
              (kill-buffer ris-buffer)
              (set-buffer bib-buffer)
              (save-buffer)))))))


Happy Hacking!

1. To get bibutils in Gentoo, just call emerge app-text/bibutils

2. Well, actually I only use M-x ris- TAB, but that’s a detail (though I would not want to work without it :) )

# El Kanban Org: parse org-mode todo-states to use org-tables as Kanban tables

Kanban for emacs org-mode.

Update (2013-04-13): Kanban.el now lives in its own repository: on bitbucket and on a statically served http-repo (to be independent from unfree software).

Update (2013-04-10): Thanks to Han Duply, kanban links now work for entries from other files. And I uploaded kanban.el on marmalade.

Some time ago I learned about kanban, and the obvious next step was: “I want to have a kanban board from org-mode”. I searched for it, but did not find any. Not wanting to give up on the idea, I implemented my own :)

The result is two functions: kanban-todo and kanban-zero.

## kanban-todo

kanban-todo provides your TODO items as kanban-fields. You can move them in the table without having duplicates, so all the state maintenance is done in the kanban table. Once you are finished, you mark them as done and delete them from the table.

To set it up, put kanban.el somewhere in your load path and (require 'kanban) (more recent but potentially unstable version). Then just add a table like the following:

|   |   |   |
|---+---+---|
|   |   |   |
|   |   |   |
|   |   |   |
|   |   |   |
#+TBLFM: $1='(kanban-todo @# @2$2..@>$>)::@1='(kanban-headers$#)


Hit C-c C-c with the point on the TBLFM line to update the table.

The important line is the #+TBLFM. That says “use my TODO items in the TODO column, except if they are in another column” and “add kanban headers for my TODO states”

The kanban-todo function takes an optional parameter match, which you can use to restrict the kanban table to given tags. The syntax is the same as for org-mode matchers. The third argument allows you to provide a scope, for example a list of files.

To only set the scope, use nil for the matcher.

See C-h f org-map-entries and C-h v org-agenda-files for details.

## kanban-zero

kanban-zero is a zero-state Kanban: All state is managed in org-mode and the table only displays the kanban items.

To set it up, put kanban.el somewhere in your load path and (require 'kanban). Then just add a table like the following:

|   |   |   |
|---+---+---|
|   |   |   |
|   |   |   |
|   |   |   |
|   |   |   |
#+TBLFM: @2$1..@>$>='(kanban-zero @# $#)::@1='(kanban-headers$#)


The important line is the #+TBLFM. That says “show my org items in the appropriate column” and “add kanban headers for my TODO states”.

Hit C-c C-c with the point on the TBLFM line to update the table.

The kanban-zero function takes an optional parameter match, which you can use to restrict the kanban table to given tags. The syntax is the same as for org-mode matchers. The third argument allows you to provide a scope, for example a list of files.

To only set the scope, use nil for the matcher.

An example for matcher and scope would be:

#+TBLFM: @2$1..@>$>='(kanban-zero @# $# "1w6" '("/home/arne/.emacs.d/private/org/emacs-plan.org"))::@1='(kanban-headers$#)


See C-h f org-map-entries and C-h v org-agenda-files for details.

## Contribute

To contribute to kanban.el, just change the file and write a comment about your changes. Maybe I’ll setup a repo on Bitbucket at some point…

## Example

In the Hexbattle game-draft, I use kanban to track my progress:

… and so on …

### “Graphical” TODO states

To make the todo states easier to grok directly you can use unicode symbols for them. Example:

#+SEQ_TODO: ❢ ☯ ⧖ | ☺ ✔ DEFERRED ✘
| ❢ | ☯ | ⧖ | ☺ |
|---+---+---+---|
|   |   |   |   |
#+TBLFM: @1='(kanban-headers $#)::@2$1..@>$>='(kanban-zero @#$#)

In my setup they are ❢ (todo) ☯ (doing) ⧖ (waiting) and ☺ (to report). Not shown in the kanban Table are ✔ (finished), ✘ (dropped) and deferred (later), because they don’t require any action from me, so I don’t need to see them all the time.

### Collecting kanban entries via SSH

If you want to create a shared kanban table, you can use the excellent transparent network access options from Emacs tramp to collect kanban entries directly via SSH.

To use that, simply pass an explicit list of files to kanban-zero as 4th argument (if you don’t use tag matching just use nil as 3rd argument). "/ssh:host:path/to/file.org" retrieves the file ~/path/to/file.org from the host.

| ❢ | ☯ |
|---+---|
|   |   |
#+TBLFM: @1='(kanban-headers $#)::@2$1..@>$>='(kanban-zero @#$# nil (list (buffer-file-name) "/ssh:localhost:plan.org"))


Caveat: all included kanban files have to use at least some of the same todo states: kanban.el only retrieves TODO states which are used in the current buffer.

# emacs wanderlust.el setup for reading kmail maildir

This is my wanderlust.el file to read kmail maildirs. You need to define every folder you want to read.

;; mode:--emacs-lisp--
;; wanderlust
(setq
elmo-maildir-folder-path "~/.kde/share/apps/kmail/mail"
;; where i store my mail

wl-stay-folder-window t                       ;; show the folder pane (left)
wl-folder-window-width 25                     ;; toggle on/off with 'i'

wl-smtp-posting-server "smtp.web.de"            ;; put the smtp server here
wl-local-domain "draketo.de"          ;; put something here...
wl-message-id-domain "web.de"     ;; ...

wl-from "Arne Babenhauserheide <arne_bab@web.de>"                 ;; my From:

;; note: all below are dirs (Maildirs) under elmo-maildir-folder-path
;; the '.'-prefix is for marking them as maildirs
wl-fcc ".sent-mail"                       ;; sent msgs go to the "sent"-folder
wl-default-folder ".inbox"           ;; my main inbox
wl-draft-folder ".drafts"            ;; store drafts in 'postponed'
wl-trash-folder ".trash"             ;; put trash in 'trash'
wl-spam-folder ".gruppiert/Spam"              ;; ...spam as well
wl-queue-folder ".queue"             ;; we don't use this

;; check this folder periodically, and update modeline
wl-biff-check-folder-list '(".todo") ;; check every 180 seconds
;; (default: wl-biff-check-interval)

;; hide many fields from message buffers
wl-message-ignored-field-list '("^.*:")
wl-message-visible-field-list
'("^\\(To\\|Cc\\):"
  "^Subject:"
  "^\\(From\\|Reply-To\\):"
  "^Organization:"
  "^Message-Id:"
  "^\\(Posted\\|Date\\):"
  )
wl-message-sort-field-list
'("^From"
"^Organization:"
"^Subject"
"^Date"
"^To"
"^Cc"))

; Encryption via GnuPG

(require 'mailcrypt)
(mc-setversion "gpg")    ; for PGP 2.6 (default); also "5.0" and "gpg"

;(setq mc-pgp-keydir "~/.gnupg")
;(setq mc-pgp-path "gpg")
(setq mc-encrypt-for-me t)
(setq mc-pgp-user-id "FE96C404")

(defun mc-wl-verify-signature ()
  (interactive)
  (save-window-excursion
    (wl-summary-jump-to-current-message)
    (mc-verify)))

(defun mc-wl-decrypt-message ()
  (interactive)
  (save-window-excursion
    (wl-summary-jump-to-current-message)
    (mc-decrypt)))

(setq mc-modes-alist
      (append
       (quote
        ((wl-draft-mode (encrypt . mc-encrypt-message)
                        (sign . mc-sign-message))
         (wl-summary-mode (decrypt . mc-wl-decrypt-message)
                          (verify . mc-wl-verify-signature))))
       mc-modes-alist))

; flowed text

(add-hook 'mime-display-text/plain-hook
          (lambda ()
            (when (string= "flowed"
                           (cdr (assoc "format"
                                       (mime-content-type-parameters
                                        (mime-entity-content-type entity)))))
              (fill-flowed))))
; writing f=f
;(mime-edit-insert-tag "text" "plain" "; format=flowed")

(provide 'private-wanderlust)


## UPDATE (2012-05-07): ~/.folders

I now use a ~/.folders file, to manage my non-kmail maildir subscriptions, too. It looks like this:

.sent-mail
.~/.local/share/mail/mgl_spam   "mgl spam"
.~/.local/share/mail/to.arne_bab    "to arne_bab"
.inbox  "inbox"
.trash  "Trash"
..gruppiert.directory/.inbox.directory/Freunde  "Freunde"
.drafts "Drafts"
..gruppiert.directory/.alt.directory/Posteingang-2011-09-18 "2011-09-18"
.outbox


The mail in ~/.local/share/mail is fetched via fetchmail and procmail to have a really reliable mail fetching system which does not rely on a non-broken database or free space on the disk to keep working…
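For illustration, a minimal procmail rule file matching that layout could look like this (a sketch only; the folder names are examples, and the trailing slash selects maildir delivery):

```
# ~/.procmailrc (sketch) - deliver into maildirs under ~/.local/share/mail
MAILDIR=$HOME/.local/share/mail

:0
* ^To:.*arne_bab
to.arne_bab/

# everything else
:0
inbox/
```

fetchmail then only has to hand each message to procmail, which sorts it into the maildirs that the ~/.folders file above subscribes to.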

# Insert a scaled screenshot in emacs org-mode

@marjoleink asked on identi.ca1 whether it is possible to use emacs org-mode for showing scaled screenshots inline while writing. Since I thought I’d enjoy some hacking, I decided to take the challenge.

It does not do auto-scaling of embedded images, as far as I know, but the use case of screenshots can be done with a simple function:

(defun org-insert-scaled-screenshot ()
  "Insert a scaled screenshot for inline display."
  (interactive)
  (let ((filename
         (concat "screenshot-"
                 (substring
                  (shell-command-to-string
                   "date +%Y%m%d%H%M%S")
                  0 -1)
                 ".png")))
    (let ((scaledname
           (concat filename "-width300.png")))
      (shell-command
       (concat "import -window root "
               filename))
      ;; scale to 300px width with ImageMagick (the convert call is reconstructed)
      (shell-command
       (concat "convert -resize 300 "
               filename " " scaledname))
      (insert (concat "[[./" scaledname "]]")))))


Now just call M-x org-redisplay-inline-images to see the screenshot (or add it to the function).
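If you want the redisplay to happen automatically, a small wrapper can call both (a sketch which assumes the function above is loaded; the wrapper name is my own):

```
(defun org-insert-scaled-screenshot-and-display ()
  "Insert a scaled screenshot and display it inline right away."
  (interactive)
  (org-insert-scaled-screenshot)
  (org-redisplay-inline-images))
```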

In action:

Have fun with Emacs - and happy hacking!

1. Matthew Gregg: @marjoleink "way of life" thing again, but if you can invest some time, org-mode is a really powerful note keeping environment. → Marjolein Katsma: @mcg I'm sure it is - but seriously: can you embed a diagram or screenshot, scale it, and link it to itself?

# Minimal example for literate programming with noweb in emacs org-mode

If you want to use the literate programming features in emacs org-mode, you can try this minimal example to get started: Activate org-babel-tangle, then put this into the file noweb-test.org:

Minimal example for noweb in org-mode

* Assign

First we assign abc:

#+begin_src python :noweb-ref assign_abc
abc = "abc"
#+end_src

* Use

Then we use it in a function:

#+begin_src python :noweb tangle :tangle noweb-test.py
def x():
    <<assign_abc>>
    return abc

print(x())
#+end_src


noweb-test.org

Hit C-c C-c to evaluate the source block. Hit C-c C-v C-t to put the expanded code into the file noweb-test.py.

The exported code looks like this:

def x():
    abc = "abc"
    return abc

print(x())

noweb-test.py

(html generated with org-export-as-html-to-buffer and slightly reniced to escape the additional parsing I have on my site)

And with org-export-as-pdf we get this:

noweb-test.pdf

Add :results output to the #+begin_src line of the second block to see the print results under that block when you hit C-c C-c in the block.
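The begin line of the second block would then read like this (only :results output is new; the rest is unchanged from above):

```
#+begin_src python :noweb tangle :tangle noweb-test.py :results output
```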

You can also use properties of headlines for giving the noweb-ref. Org-mode can then even concatenate several source blocks into one noweb reference. Just hit C-c C-x p to set a property (or use M-x org-set-property), then set noweb-ref to the name you want to use to embed all blocks under this heading together.

Note: org-babel prefixes each line of an included code-block with the prefix used for the reference (here <<assign_abc>>). This way you can easily include blocks inside python functions.

Have fun with Emacs and org-mode!

# Get python documentation in emacs with pydoc-info

I just found the excellent pydoc-info mode for emacs from Jon Waltman. It allows me to hit C-h S in a python file and enter a module name to see the documentation right away. If the point is on a symbol (=module or class or function), I can just hit enter to see its docs.

In its default configuration (see the Readme) it “only” reads the python documentation. This alone is really cool when writing new python code, but it’s not enough, since I often use third party modules.

And now comes the treat: If those modules use sphinx for documentation (≥1.1), I can integrate them just like the standard python documentation!

It took me some time to get it right, but now I have all the documentation for the inverse modelling framework I contribute to directly at my fingertips: Just hit C-h S ENTER when I’m on some symbol and a window shows me the docs:

The text in this image is from Wouter Peters. Used here as short citation which should be legal almost everywhere under citation rules.

I want to save you the work of figuring out how to do that yourself, so here’s a short guide for integrating the documentation for your python program into emacs.

## Integrating your own documentation into emacs

The prerequisite for integrating your own documentation is to use sphinx for documenting your code. See their tutorial for info how to set it up. As soon as sphinx works for you, follow this guide to integrate your docs in your emacs.

### Install pydoc-info

First get pydoc-info and the python infofile (adapt this to your local setup):

# get the mode
cd ~/.emacs.d/libs
hg clone https://bitbucket.org/jonwaltman/pydoc-info
# and the pregenerated info-file for python
gunzip python.info.gz
sudo cp python.info /usr/share/info
sudo install-info --info-dir=/usr/share/info python.info


(I also added pydoc-info as subrepo to my .emacs.d repo to make it easy to transfer my adaption between my different computers)

To build the info file for python yourself, have a look at the Readme.

### Turn your documentation into info

Now turn your own documentation into an info document and install it.

Sphinx uses a core configuration file named conf.py. Add the following to that file, replacing all values but index and False by the appropriate names for you project:

# One entry per manual page.
# list of tuples (startdocname,
# targetname, title, author, dir_entry,
# description, category, toctree_only).
texinfo_documents = [
    ('index', # startdocname, keep this!
     'TARGETNAME', # targetname
     u'Long Title', # title
     u'Author Name', # author
     'Name in the Directory Index of Info', # dir_entry
     u'Long Description', # description
     'Software Development', # category
     False), # better keep this, too, I think.
]


Then call sphinx and install the info files like this (maybe adapted to your local setup):

sphinx-build -b texinfo source/ texinfo/
cd texinfo
sudo install-info --info-dir=/usr/share/info TARGETNAME.info
sudo cp TARGETNAME.info /usr/share/info/


### Activate pydoc-info, including your documentation

; Show python-documentation as info-pages via C-h S
(require 'pydoc-info)
(info-lookup-add-help
 :mode 'python-mode
 :parse-rule 'pydoc-info-python-symbol-at-point
 :doc-spec
 '(("(python)Index" pydoc-info-lookup-transform-entry)
   ("(TARGETNAME)Index" pydoc-info-lookup-transform-entry)))


# Recipes for presentations with beamer latex using emacs org-mode

I wrote some recipes for creating the kinds of slides I need with emacs org-mode export to beamer latex.

Update: Read ox-beamer to see how to adapt this to work with the new export engine in org-mode 8.0.

Below is an html export of the org-mode file. Naturally it does not look as impressive as the real slides, but it captures all the sources, so I think it has some value.

Note: To be able to use the simple block-creation commands, you need to add #+startup: beamer to the header of your file or explicitly activate org-beamer with M-x org-beamer-mode.

PS: I hereby allow use of these slides under any of the licenses used by worg and/or the emacs wiki.

## 1 Introduction

### 1.1 Usage

#### 1.1.2 C-x C-f <file which ends in .org>

Hello World

#+LaTeX_CLASS: beamer
#+BEAMER_FRAME_LEVEL: 2

* Hello
** Hello GNU
Nice to see you!


### 1.2 org-mode + beamer = love

#### 1.2.1 Code    BMCOL

Recipes
#+LaTeX_CLASS: beamer
#+BEAMER_FRAME_LEVEL: 2
* Introduction
** org-mode + beamer =  love
*** Code :BMCOL:
:PROPERTIES:
:BEAMER_col: 0.7
:END:
<example block>
*** Simple block  :BMCOL:B_block:
:PROPERTIES:
:BEAMER_col: 0.3
:BEAMER_env: block
:END:
it's that easy!


it's that easy!

### 1.3 Two columns - in commands

#### 1.3.1 Commands    BMCOL B_block

** Two columns - in commands
*** Commands
C-c C-b | 0.7
C-c C-b b
C-n
<eTAB (write example) C-n C-n
*** Result
C-c C-b | 0.3
C-c C-b b
even easier - and faster!


#### 1.3.2 Result    BMCOL B_block

even easier - and faster!

## 2 Recipes

### 2.1 Four blocks - code

*** Column 1 :B_ignoreheading:BMCOL:
:PROPERTIES:
:BEAMER_col: 0.5
:END:

*** One
*** Three

*** Column 2 :B_ignoreheading:BMCOL:
:PROPERTIES:
:BEAMER_col: 0.5
:END:

*** Two
*** Four


### 2.3 Four nice blocks - commands

***
C-c C-b | 0.5 # column
C-c C-b i # ignore heading
*** One
C-c C-b b # block
*** Three
C-c C-b b
***
C-c C-b | 0.5
C-c C-b i
*** Two
C-c C-b b
*** Four
C-c C-b b


### 2.5 Top-aligned blocks

#### 2.5.1 Code    B_block BMCOL

*** Code                                                      :B_block:BMCOL:
:PROPERTIES:
:BEAMER_env: block
:BEAMER_col: 0.5
:BEAMER_envargs: C[t]
:END:

*** Result                                                    :B_block:BMCOL:
:PROPERTIES:
:BEAMER_env: block
:BEAMER_col: 0.5
:END:
pretty nice!


pretty nice!

### 2.6 Two columns with text underneath - code

#### 2.6.1 B_columns

• Code    BMCOL

\tiny

***  :B_columns:
:PROPERTIES:
:BEAMER_env: columns
:END:

**** Code :BMCOL:
:PROPERTIES:
:BEAMER_col: 0.6
:END:

**** Result :BMCOL:
:PROPERTIES:
:BEAMER_col: 0.4
:END:

:PROPERTIES:
:END:
Much text underneath! Very Much.
Maybe too much. The whole width!


\normalsize

• Result    BMCOL

Much text underneath! Very Much. Maybe too much. The whole width!

### 2.7 Nice quotes

#### 2.7.1 Code    B_block BMCOL

#+begin_quote
Emacs org-mode is a
great presentation tool -
Fast to beautiful slides.
- Arne Babenhauserheide
#+end_quote


#### 2.7.2 Result    B_block BMCOL

Emacs org-mode is a great presentation tool - Fast to beautiful slides.

• Arne Babenhauserheide

### 2.8 Math snippet

#### 2.8.2 Inline    B_block

$$1 + 2 = 3$$ is clear


#### 2.8.3 As equation    B_block

$1 + 2 \cdot 3 = 7$


#### 2.8.5 Inline    B_block

$$1 + 2 = 3$$ is clear

#### 2.8.6 As equation    B_block

$1 + 2 \cdot 3 = 7$

### 2.9 $$\LaTeX$$

#### 2.9.1 Code    BMCOL B_block

$$\LaTeX$$ gives a space
after math mode.

\LaTeX{} does it, too.

\LaTeX does not.

At the end of a sentence
both work.
Try \LaTeX. Or try \LaTeX{}.

Only $$\LaTeX$$ and $$\LaTeX{}$$
also work with HTML export.


#### 2.9.2 Result    BMCOL B_block

$$\LaTeX$$ gives a space after math mode.

\LaTeX{} does it, too.

\LaTeX does not.

At the end of a sentence both work. Try \LaTeX. Or try \LaTeX{}.

Only $$\LaTeX$$ and $$\LaTeX{}$$ also work with HTML export.

### 2.10 Images with caption and label

#### 2.10.1 B_columns

• Code    B_block BMCOL
#+caption: GNU Emacs icon
#+label: fig:emacs-icon
[[/usr/share/icons/hicolor/128x128/apps/emacs.png]]

This is image (\ref{fig:emacs-icon})

• Result    B_block BMCOL

This is image (emacs-icon)

Autoscaled to the block width!

### 2.11 Examples

#### 2.11.1 Code    BMCOL B_block

: #+bla: foo


Gives an example, which does not interfere with regular org-mode parsing.

#+begin_example
content
#+end_example


Gives a simpler multiline example which can interfere.

#### 2.11.2 Result    BMCOL B_block

#+bla: foo


Gives an example, which does not interfere with regular org-mode parsing.

content


Gives a simpler multiline example which can interfere.

## 3 Basic Configuration

<Title>

#+startup: beamer
#+LaTeX_CLASS: beamer
#+LaTeX_CLASS_OPTIONS: [bigger]
#+AUTHOR: <empty for none, if missing: inferred>
#+DATE: <empty for none, if missing: today>
#+BEAMER_FRAME_LEVEL: 2
#+TITLE: <causes <Title> to be regular content!>


### 3.2 .emacs config

Put these lines into your .emacs or in a file your .emacs pulls in - i.e. via (require 'mysettings) if the other file is named mysettings.el and ends in (provide 'mysettings).

(org-babel-do-load-languages ; babel, for executing
 'org-babel-load-languages   ; code in source blocks
 '((sh . t)
   (emacs-lisp . t)))

(require 'org-latex) ; latex export
(add-to-list 'org-export-latex-packages-alist '("" "minted"))
(add-to-list 'org-export-latex-packages-alist '("" "color"))
(setq org-export-latex-listings 'minted)
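
A pulled-in settings file is just an ordinary elisp file on the load-path whose last line provides the feature. A minimal sketch, using the mysettings name from the text above:

```
;; ~/.emacs.d/mysettings.el - pulled in from ~/.emacs via:
;;   (add-to-list 'load-path "~/.emacs.d/")
;;   (require 'mysettings)
(setq inhibit-startup-screen t) ; your settings go here
(provide 'mysettings)
```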


### 3.3 .emacs variables

You can easily set these via M-x customize-variable.

(custom-set-variables ; in ~/.emacs, only one instance
'(org-export-latex-classes (quote ; in the init file!
(("beamer" "\\documentclass{beamer}"
org-beamer-sectioning))))
'(org-latex-to-pdf-process (quote
   ((concat "pdflatex -interaction nonstopmode"
            " -shell-escape -output-directory %o %f")
    "bibtex $(basename %b)"
    (concat "pdflatex -interaction nonstopmode"
            " -shell-escape -output-directory %o %f")
    (concat "pdflatex -interaction nonstopmode"
            " -shell-escape -output-directory %o %f")))))


(concat "…" "…") is used here to get nice, short lines. Use the concatenated string instead ("pdflatex…%f").

### 3.4 Required programs

#### 3.4.1 Emacs - (gnu.org/software/emacs)

To get org-mode and edit .org files effortlessly.

emerge emacs

#### 3.4.2 Beamer $$\LaTeX$$ - (bitbucket.org/rivanvx/beamer)

To create the presentation.

emerge dev-tex/latex-beamer app-text/texlive

#### 3.4.3 Pygments - (pygments.org)

To color the source code (with minted).

emerge dev-python/pygments

## 4 Thanks and license

### 4.1 Thanks

Thanks go to the writers of emacs and org-mode, and for this guide in particular to the authors of the org-beamer tutorial on worg. Thank you for your great work!

This presentation is licensed under the GPL (v3 or later) with the additional permission to distribute it without the sources and the copy of the GPL if you give a link to those.1

## Footnotes:

1 : \tiny As additional permission under GNU GPL version 3 section 7, you may distribute these works without the copy of the GNU GPL normally required by section 4, provided you include a license notice and a URL through which recipients can access the Corresponding Source and the copy of the GNU GPL.\normalsize

# Sending email to many people with Emacs Wanderlust

I recently needed to send an email to many people1. Putting all of them into the BCC field did not work (mail rejected by the provider), and when I split it into 2 emails, many did not see my mail because it was flagged as potential spam (they were not in the To-Field)2. I did not want to put them all into the To-Field, because that would have spread their email-addresses around, which many would not want3. So I needed a different solution, which I found in the extensibility of emacs and wanderlust4.
It now carries the name wl-draft-send-to-multiple-receivers-from-buffer. You simply write the email as usual via wl-draft, then put all email addresses you want to write to into a buffer and call M-x wl-draft-send-to-multiple-receivers-from-buffer. It asks you for the buffer with the email addresses, then shows you all addresses and asks for confirmation. Then it sends one email after the other, with a randomized wait of 0-10 seconds between messages to avoid being flagged as spam.

If you want to use it, just add the following to your .emacs:

(defun wl-draft-clean-mail-address (address)
  (replace-regexp-in-string "," "" address))

(defun wl-draft-send-to-multiple-receivers (addresses)
  (loop for address in addresses do
        (progn
          (wl-user-agent-insert-header
           "To" (wl-draft-clean-mail-address address))
          (let ((wl-interactive-send nil))
            (wl-draft-send))
          (sleep-for (random 10)))))

(defun wl-draft-send-to-multiple-receivers-from-buffer (&optional addresses-buffer-name)
  "Send a mail to multiple recipients - one recipient at a time."
  (interactive "BBuffer with one address per line")
  (let ((addresses nil))
    (with-current-buffer addresses-buffer-name
      (setq addresses (split-string (buffer-string) "\n")))
    (if (y-or-n-p (concat "Send this mail to "
                          (mapconcat 'identity addresses ", ")))
        (wl-draft-send-to-multiple-receivers addresses))))

Happy Hacking!

1. The email was about the birth of my second child, and I wanted to inform all people I care about (of whom I have the email address), which amounted to 220 recipients.

2. Naturally this technique could be used for real spamming, but to be frank: People who send spam won’t need it. They will already have much more sophisticated methods. This little trick just reduces the inconvenience brought upon us by the measures which are necessary due to spam. Otherwise I could just send a mail with 1000 receivers in the BCC field - which is how it should be.

3.
It only needs one careless friend, and your connections to others get tracked in facebook and the likes. For more information on Facebook, see Stallman about Facebook.

4. Sure, there are also template mails and all such, but learning to use these would consume just as much time as extending emacs - and would be much less flexible: Should I need other ways to transform my mails, I’ll be able to just reuse my code.

# Simple Emacs DarkRoom

I just realized that I let myself be distracted by all kinds of not-so-useful stuff instead of finally getting to type the text I already wanted to transcribe from stenographic notes at the beginning of … last week.

## Screenshot!

Let’s take a break for a screenshot of the final version, because that’s what we want from any program :)

As you can see, the distractions are removed - the screenshot is completely full screen and only the text is left. If you switch to the minibuffer (i.e. via M-x), the status bar (modeline) is shown.

## Background

To remove the distractions I looked again at WriteRoom and DarkRoom and similar tools which show just the text I want to write. More exactly: I thought about looking at them again, but on second thought I decided to see if I could not just customize emacs to do the same, backed with all the power you get from several decades of being THE editor for many great hackers.

It took some googling and reading the emacs wiki, and then some Lisp-hacking, but finally it’s 4 o’clock in the morning and I’m writing this in my own darkroom mode1, toggled on and off by just hitting F11.

## Implementation

I build on hide-mode-line (livejournal post or webonastick) as well as the full-screen info in the emacs wiki.
The whole thing takes just 29 lines of code plus 10 lines of comments and whitespace:

```elisp
; hide mode line, from
; http://dse.livejournal.com/66834.html
; http://webonastick.com
(autoload 'hide-mode-line "hide-mode-line" nil t)

; fullscreen, taken from
; http://www.emacswiki.org/emacs/FullScreen#toc1
(defun toggle-fullscreen (&optional f)
  (interactive)
  (let ((current-value (frame-parameter nil 'fullscreen)))
    (set-frame-parameter nil 'fullscreen
                         (if (equal 'fullboth current-value)
                             (if (boundp 'old-fullscreen) old-fullscreen nil)
                           (progn (setq old-fullscreen current-value)
                                  'fullboth)))))

; simple darkroom with fullscreen,
; fringe, mode-line, menu-bar and scroll-bar hiding.
(defvar darkroom-enabled nil)

(defun toggle-darkroom ()
  (interactive)
  (if (not darkroom-enabled)
      (setq darkroom-enabled t)
    (setq darkroom-enabled nil))
  (toggle-fullscreen)
  (hide-mode-line)
  (if darkroom-enabled
      (progn
        (fringe-mode 'both)
        (menu-bar-mode -1)
        (scroll-bar-mode -1)
        (set-fringe-mode 200))
    (progn
      (fringe-mode 'default)
      (menu-bar-mode)
      (scroll-bar-mode t)
      (set-fringe-mode 8))))

; Activate with F11 - enhanced fullscreen :)
(global-set-key [f11] 'toggle-darkroom)
```

Also I now activated cua-mode to make it easier to interact with other programs: C-c and C-x now copy/cut when the mark is active. Otherwise they are the usual prefix keys. To force them to act as the prefix keys, I can use Control-Shift-c/-x. I thought this would disturb me, but it does not. To make it faster, I also gave cua-mode a very short maximum prefix-override delay, so I don't feel it. Essentially I just put this in my ~/.emacs:

```elisp
(cua-mode t)
(setq cua-prefix-override-inhibit-delay 0.005)
```

## Epilog

Well, did this get me to transcribe the text? Not really, since I spent the time building my own DarkRoom/WriteRoom, but I enjoyed the little hacking and it might help me get it done tomorrow - and get far more other stuff done. And it is really fun to write in DarkRoom mode ;)

PS: If you like the simple darkroom, please leave a comment!
I hereby declare that anyone is allowed to use this post and the screenshot under the same licensing as if it had been written in the emacswiki.

1. Actually there already is a darkroom mode, but it only works on Windows. If you use that platform, you might enjoy it anyway. So you might want to call this mode “simple darkroom”, or darkroom x11 :)

# Wish: KDE with Emacs-style keyboard shortcuts

I would love to be able to use KDE with Emacs-style keyboard shortcuts, because Emacs offers a huge set of already clearly defined shortcuts for many different situations. Since its users tend to do very much with the keyboard alone, even more obscure tasks are available via shortcuts. I think that this would be useful, because Emacs is like a kind of non-graphical desktop environment itself (just look at emacspeak!). For all those who use Emacs in a KDE environment, it could be a nice timesaver to be able to just use their accustomed bindings.

It also has a mostly clean structure for the bindings:

• "C-x anything" makes changes which affect things outside the content of the current buffer.
• "C-anything but x" acts on the content you're currently editing.
• "C-c anything" is reserved for specific actions of programs. For example "C-c C-c" in an email sends the email, while "C-c C-c" in a version tracking commit message finishes the message and starts the actual commit.
• "M-x" opens a 'command-selection-dialog' (just like alt-f2).
• "M-anything but x" is a different flavor of "C-anything but x". For example "C-f" moves the cursor one character forward, while "M-f" moves one word forward. "C-v" moves one page forward, while "M-v" moves one page backward.

On the backend side, this would require being able to define multistep shortcuts. Everything else is just porting the Emacs shortcuts to KDE actions. The actual porting would then require mapping the Emacs commands to KDE actions. Some examples:

• "C-s" searches in a file. Replaces C-f.
• "C-r" searches backwards.
• "C-x C-s" saves a file -> close. Replaces C-w.
• "C-x C-f" opens a file -> Open. Replaces C-o.
• "C-x C-c" closes the program -> quit. Replaces C-q.
• "C-x C-b" switches between buffers/files/tabs -> switch the open file. Replaces alt-right_arrow and a few other (to my knowledge) inconsistent bindings.
• "C-x C-2" splits a window (or part of a window) vertically. "C-x C-o" switches between the parts. "C-x C-1" undoes the split and keeps the currently selected part. "C-x C-0" undoes the split and hides the currently selected part.

# Freenet

“When free speech dies, we need a place to organize.”

Freenet is a censorship-resistant, distributed p2p publishing platform. It lets you anonymously share files, browse and publish “freesites”, chat on forums and even do microblogging, using a generic Web of Trust, shared by different plugins, to avoid spam. For really careful people it offers a “darknet” mode, in which users only connect to their friends, which makes it very hard to detect that they are running Freenet.

The overarching design goal of Freenet is to make censorship as hard as technically possible. That's the reason for providing anonymity (else you could be threatened with repercussions - as seen in the case of the Wikileaks informer from the army in the USA), building it as a decentralized network (else you could just shut down the central website, as people tried with Wikileaks), providing safe pseudonyms and caching of content on all participating nodes (else people could censor by spamming or overloading nodes), and even the darknet mode and enhancements in usability (else Freenet could be stopped by just prosecuting everyone who uses it, or it would reach too few people to be able to counter censorship in the open web).
I don't know anymore what triggered my use of Freenet initially, but I know all too well what keeps me running it instead of other anonymizers: I see my country (Germany) turning more and more into a police state, starting with attacks on p2p, continuing with censorship of websites (“we all know child porn is bad, so it can't be bad to censor it, right? Sure, we could just make the providers delete it, so no one can access it, but… no, we have to censor it, so only people who can use Google can find it – which luckily excludes us, because we are not pedocriminals.”) and leading into directions I really don't like.

And in case the right to freedom of speech dies, we need a place where we can organize to get it back and fight for the rights laid out in our constitution (the Grundgesetz). And that's what Freenet is to me: a technical way to make sure we can always organize, acting by Article 20 of our constitution (German link — Google-translated version): the right to oppose everyone who wants to abolish our constitutional order.

PS: New entries on my site are also available in Freenet (via freereader: it downloads RSS feeds and republishes them in Freenet).

PPS: If you like this text, please redent/retweet the associated identi.ca/twitter notices so it spreads:

• https://identi.ca/notice/46221737
• https://twitter.com/ArneBab/status/21217822748

# 50€ for the Freenet Project - and against censorship

As I pledged1, I just donated 50€ to the Freenet Project out of the money I got back because I cannot go to FilkCONtinental. Thanks go to Nemesis, a proud member of the “FiB: Filkers in Black”, who will take my place at the Freusburg and fill those old walls with songs of stars and dreams - and happy laughter. It's a hard battle against censorship, and as I now had some money at hand, I decided to do my part (freenetproject.org/donate.html).

1.
The pledge can be seen on identi.ca and in a Sone post in Freenet (including a comment thread; needs a running Freenet node (install Freenet in a few clicks) and the Sone plugin).

# A vision for a social Freenet with WoT, FreeTalk and Sone

I let my thoughts wander a bit around the question how a social Freenet (2.0 ;) ) could look from the view of a newcomer. I imagine myself installing Freenet. The first thing to come up after starting it is the node page. (Italic text in brackets is a comment. The links need a Freenet running on 127.0.0.1 to work.)

“Welcome to Freenet, where no one can tell you're reading.”

“Freenet tries hard to protect your privacy. Therefore we created a pseudonymous ID for you. Its name is Gandi Schmidt. Visit the [your IDs site] to see a legend we prepared for you. You can use this legend as a fictional background for your ID, if you are really serious about staying anonymous.”

(The name should be generated randomly for each ID. A starting point for that could be a list of scientists from around the world compiled from the Wikipedia (link needs Freenet). The same should be true for the legend, though it is harder to generate. The basic information should be a quote (people remember that), a job and sex, the country the ID comes from (maybe correlated with the name) and a hobby.)

“During the next few restarts, Freenet will ask you to solve various captchas to prove that you are indeed human. Once enough other nodes have successfully confirmed that you are human, you will be granted access to the forums and microblogging. This might take a few hours to a few days.”

(As soon as the ID has sufficient trust, automatically activate the FreeTalk and Sone plugins.)

“Note that other nodes don't know who you are. They don't know your IP, nor your real identity.
The only thing they know is that you exist, that you can solve captchas and how to send you a message.”

“You can create additional IDs at any time and give them any name and legend you choose by adding them on the WebOfTrust page. Each new ID has to verify for itself that it's human, though. If you carefully keep them separate, others can only find out with a lot of effort that your IDs are related. Mind your writing style. When in doubt, keep your sentences short. To make it easier for you to stay anonymous, you can autogenerate name and legend at random. Don't use the nicest from many random trials, else you can be traced by the kind of random IDs you select.”

“While your humanity is being confirmed, you can find a wealth of content on the following indexes, some published anonymously, some not. If you want to publish your own anonymous site, see Upload a Freesite. The list of indexes uses dynamic bookmarks: you get notified whenever a bookmarked site (like the indexes below) gets updated.”

“Note: if you download content from Freenet, it is cached by other nodes. Therefore popular content is faster than rare content, and you cannot overload nodes by requesting their data over and over again.”

“You are currently using medium security in the range from low to high.”

“In this security level, separated IDs are no perfect protection of your anonymity: other members might not be able to see what you do in Freenet, but they can know that you use Freenet in the first place, and corporations or governments with medium-sized infrastructure can launch attacks which might make it possible to trace your contributions and accesses.
If you want to disappear completely from the normal web and keep your Freenet usage hidden, as well as make it very hard to trace your contributions, to be able to really exercise your right of free speech without fearing repercussions, you can use Freenet as a Darknet — the more secure but less newcomer-friendly way to use Freenet; the current mode is Opennet.”

“To enter the Darknet, you add people you know and trust personally as your darknet friends. As soon as you have enough trusted friends, you can increase the security level to high, and Freenet will only connect to your trusted friends, making you disappear from the regular internet. The only way to tell that you are using Freenet will then be to force your ISP to monitor all traffic coming from your computer.”

“And once transport plugins are integrated, steganography will come into reach and allow masking your traffic as regular internet usage, making it very hard to distinguish Freenet from encrypted internet telephony. If you want to help make this a reality in the near future, please consider contributing or donating to Freenet.”

“Welcome to the pseudonymous web, where no one can know who you are, but only that you are always using the same ID — if you do so.”

“To show this welcome message again, you can at any time click on Intro in the links.”

What do you think? Would this be a nice way to integrate WoT, FreeTalk, Sone and general user education in a welcome message, while adding more incentive to keep the node running?

PS: Also posted in Freetalk and in Sone – the links need a running Freenet to work.

PPS: This vision is not yet a reality, but all the necessary infrastructure is already in place and working in Freenet. You can already do everything described here, just without the nice guide and the level of integration (for example activating plugins once you have proven your humanity, which equals enough trust by others to actually be seen).
# Anonymous code collaboration with Mercurial and Freenet

There is now a new Mercurial extension called "infocalypse" (which should keep working after the information apocalypse). It offers "fn-push" and "fn-pull" as an optimized way to store code in Freenet: bundles are inserted and pulled one after the other, and an index tells infocalypse in which order to pull the bundles. This makes using Mercurial in Freenet far more efficient and convenient. You can also use it to publish collaborative anonymous websites like the freefaq and Technophob, and it is a perfect fit for the workflow automatic trusted group of committers. Otherwise it offers the same features as FreenetHG.

Using FreenetHG you can collaborate anonymously without having to give everyone direct write access to your code. To work with others, you simply set up a local repository for your own work and use FreenetHG to upload your code automatically into Freenet under your private ID. Others can then access your code with the corresponding public ID, make their changes locally and publish them in their own anonymous repositories. You then pull the changes you like into your repository and publish them again under your key.

FreenetHG uses Freenet, which offers the concept of pseudonymity to make anonymous communication more secure, and Mercurial, which allows for efficient distributed collaboration. With pseudonymity you can't find out whom you're talking to, but you know that it is the same person, and with distributed collaboration you don't need to let people write to your code directly, since every code repository is a full clone of the main repository. Even if the main repository should go down, every contributor can still work completely unhindered, and if someone breaks things in his repository, you can simply decide not to pull his changes.

## What you need

To use FreenetHG you obviously need a running Freenet node and a local Mercurial installation.
You also need the FreenetHG plugin for Mercurial and PyFCP, which provides Python bindings for Freenet.

• get FreenetHG (the link needs a running Freenet node on 127.0.0.1)
• alternatively just do:

```
hg clone static-http://127.0.0.1:8888/USK@fQGiK~CfI8zO4cuNyhPRLqYZ5TyGUme8lMiRnS9TCaU,E3S1MLoeeeEM45fDLdVV~n8PCr9pt6GMq0tuH4dRP7c,AQACAAE/freenethg/1/
```

## Setup a simple anonymous workflow

To guide you through the steps, let's assume we want to create the anonymous repository "AnoFoo". After you have all the dependencies, you need to activate the FreenetHG plugin in your ~/.hgrc file:

```
[extensions]
freenethg = path/to/FreenetHG.py
```

You can get the FreenetHG.py from the freenethg website or from the Mercurial repository you cloned. Now you set up your AnoFoo Mercurial repository:

```
hg init AnoFoo
```

As a next step we create some sections in the .hg/hgrc file in the repository:

```
[ui]
[freenethg]
[hooks]
```

Now we enter the repository and use the setup wizard:

```
cd AnoFoo
hg fcp-setupwitz
```

The setup wizard asks us for the username to use for this repository (to avoid accidentally breaking our anonymity), the address of our Freenet instance and the path to our repository on Freenet. The default answers should fit. The only one where we have to set something else is the project name; there we enter AnoFoo. Since we don't yet have a Freenet URI for the repository, we just answer '.' to let FreenetHG generate one for us. That's also the default answer. The commit hook makes sure that we don't commit with any but the selected username. The wizard will also print a line like the following:

```
Request uri is: USK@xlZb9yJbGaKO1onzwawDvt5aWXd9tLZRoSoE17cjXoE,zFqFxAk15H-NvVnxo69oEDFNyU9uNViyNN5ANtgJdbU,AQACAAE/freenethg_test/1/
```

This is the line others can use to clone your project and pull from it. And with this we have finished setting up our anonymous collaboration repository. When we commit, every commit will directly be uploaded into Freenet.
So now we can pass the Freenet Request uri to others, who can clone our repository and set up their own repositories in Freenet. When they add something interesting, we pull the data from their Request uri and merge their code with ours.

## Setup a more convenient anonymous workflow

This workflow is already useful, but it's a bit inconvenient to have to wait after each commit until your changes have been uploaded. So we'll now change this basic workflow a bit to be able to work more conveniently.

First step: clone our repository to a backup location:

```
hg clone AnoFoo BackFoo
```

Second step: change our .hg/hgrc to only upload when we push to the backup repository, and add the default-push path to the backup repository:

```
[paths]
default-push = ../BackFoo

[hooks]
pretxncommit = python:freenethg.username_checker
outgoing = python:freenethg.updatestatic_hook

[ui]
username = anonymuse

[freenethg]
commitusername = anonymuse
inserturi = USK@VERY_LONG_PRIVATE_KEY/AnoFoo/1/
```

Changes: we now have a default-push path, and we changed the "commit" hook to an "outgoing" hook which is invoked every time changes leave this repository. It will also be invoked when someone pulls from this repo, but not when we clone it locally. Now our commits roll as fast as we're used to from other Mercurial repositories, and freenethg will make sure we don't use the wrong username. When we want to anonymously publish the repository, we then simply use:

```
hg push
```

This will push the changes to the backup and then upload them to your anonymous repository. And now we have finished setting up our repository and can begin using an anonymous and almost infinitely scalable workflow which only requires our Freenet installation to be running when we push the code online.

One last touch: if an upload should happen to fail, you can always repeat it manually with:

```
hg fcp-uploadstatic
```

## Time to go

...out there and do some anonymous coding (maybe with the workflow automatic trusted group of committers). Happy hacking!
And if this post caught your interest or you want to say anything else about it, please write a comment. Also please have a look at and vote for the wish to add a way to contribute anonymously to Freenet, to make it secure against attacks on developers. And last but not least: vote for this article on digg and on yigg.

# Background of Freenet Routing and the probes project (GSoC 2012)

The probes project is a Google Summer of Code project by Steve Dougherty intended to optimize the network structure of Freenet. Here I will give the background of his project very briefly.

## The Small World Structure

Freenet organizes nodes by giving them locations - like coordinates. The nodes know some others and can send data only to those to which they are connected directly. If your node wants to contact someone it does not know directly, it sends a message to one of the nodes it knows and asks that one to forward the message. The decision whom to ask to forward the message is part of the routing.

The routing algorithm in Freenet assumes a small-world network: your node knows many people who are close to you and a few who are far away. Imagine that as knowing many people in your home town and few in other towns. There is mathematical proof that the routing is very efficient and scales to billions of users - if it really operates on a small-world network. So each Freenet node tries to organize its connections in such a way that it is connected to many nodes close by and some from far away.⁽¹⁾ The structure of the local connections of your own node can be characterized by the link length distribution: “How many short and how many long connections do you have?”

## Probes and their Promise

The probes project from Steve is to analyze the structure of the network and the structure of the local connections of nodes in an anonymous way, to improve the self-organization algorithm in Freenet.
The reason is that if the structure of the network is not a small-world network, the routing algorithm becomes much less efficient. That in turn means that if you want to get some data from the network, that data has to travel over far more intermediate nodes, because Freenet cannot determine the shortest route. And if the data has to travel over more nodes, it consumes more bandwidth and takes longer to reach you. In the worst case, Freenet might not find the data at all.

To estimate the effect of that, you can look at the bar chart The Seeker linked to. It shows how many steps a request has to take to find some content; fewer hops is better. “Conforming” is the actually measured structure, with about 17 connections per node (one cluster with 12, one with ~25). “Low”, “Normal” and “High” are optimal structures with 16, 26 and 86 connections per node; ideally we would want Normal. The simulated network sizes are 6000 nodes (Small), 18,000 (Normal, as measured) and 36,000 (Large).

The actually measured mean number of connections in Freenet is similar to “Low”, so that is the bar against which we need to compare the “Conforming” bar to see the effect of the suboptimal structure. And that effect is staggering: a request needs about twice as many steps in the real network as it would need in an optimally structured network. Practically: if Freenet managed to get closer to the optimal structure, it could double its speed and cut the reaction times in half, without changing anything else - and also without changing the local bandwidth consumption. You would simply get your content much faster.
If we managed to increase the mean number of connections to about 26 (that's what a modern DSL connection can manage without too many ill effects), we could double the speed and halve the reaction times again (but that requires more bandwidth from the nodes which currently have a low number of connections: many have only about 12 connections, many have about 25 or so, few have something in between). Essentially that means we could gain a factor of 2 to 4 in speed and reaction times, and better scalability (compare the normal and the large network).

## Note ⁽¹⁾: Network Optimization using Only Local Knowledge

To achieve a good local connection structure, a node can use different strategies for Opennet and Darknet (this section is mostly guessed, take it with a grain of salt; I did not read the corresponding code).

In Opennet it can look whether it finds nodes which would improve its local structure. If it finds one, it can replace the local connection which distorts its local structure the most with the new connection.

In Darknet on the other hand, where it can only connect to the folks it already knows, it looks at the locations of nodes it hears about. It then checks whether its local connections would be better if it had that other node's location. In that case, it asks the other node whether it would agree to swap locations with it (without changing any real connections: it only changes the notion of where it lives. As if you swapped flats with someone else, but without changing who your friends are. Afterwards both the other one and you live closer to your respective friends).

In short: in Opennet, Freenet changes to whom it is connected in order to achieve a small-world structure: it selects its friends based on where it lives. In Darknet it swaps its location with strangers to be closer to its friends.
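The darknet swap described in the note can be sketched in a few lines of Python. This is a deliberately simplified toy model with invented names, not Freenet's actual code; as far as I know, the real network uses a randomized acceptance criterion which sometimes also accepts worsening swaps to escape local optima.

```python
# Toy model of darknet location swapping: locations live on the unit
# circle [0, 1), and two nodes agree to swap locations when doing so
# shortens the combined distance to their respective friends.

def circ_dist(a, b):
    """Distance between two locations on the unit circle."""
    d = abs(a - b)
    return min(d, 1 - d)

def total_link_length(location, friend_locations):
    """Sum of link lengths from one location to all friends."""
    return sum(circ_dist(location, f) for f in friend_locations)

def should_swap(loc_a, friends_a, loc_b, friends_b):
    """True if swapping the two locations shortens the links overall."""
    before = total_link_length(loc_a, friends_a) + total_link_length(loc_b, friends_b)
    after = total_link_length(loc_b, friends_a) + total_link_length(loc_a, friends_b)
    return after < before
```

For example, a node at 0.0 whose only friend sits at 0.5 and a node at 0.5 whose only friend sits at 0.0 will happily swap: afterwards both live exactly on top of their friends, without any real connection having changed.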
# Bootstrapping the Freenet WoT with GnuPG - and GnuPG with Freenet

## Intro

When you enter the Freenet Web of Trust, you first need to get some trust from people by solving captchas. And even when people trust you somehow, you have no way to prove your identity in an automatic way, so you can't create identities which Freenet can label as trusted without manual intervention on your side.

## Proposal

To change this, we can use the Web of Trust used in GnuPG to infer trust relationships between Freenet WoT IDs. Practically that means:

• Write a message: “I am the WoT ID USK@” (replace with the public key of your WoT ID).
• Sign that message with a GnuPG key you want to connect to the ID. The signature proves that you control the GnuPG key.
• Upload the signed message to your WoT key: USK@/bootstrap/0/gnupg.asc. To make this upload, you need the private key of the ID, so the upload proves that you control the WoT ID.

Now other people can download the file from you, and when they trust the GnuPG key, they can transfer their trust to the Freenet WoT ID.

## Automatic

Ideally all this should be mostly automatic:

• Click a link in the Freenet interface and select the WoT ID to have Freenet create the file and run your local GnuPG program.
• Then select your GnuPG key in the GnuPG program and enter your password.
• Finally check the information to be inserted and press a button to start the upload.

As soon as you have a GnuPG key connected with your WoT ID, Freenet should scout all other WoT IDs for GnuPG keys and check whether the local GnuPG key you assigned to your WoT ID trusts the other key. If yes, give automatic trust (real person → likely no spammer).

## Anonymously

To make the connection one-way (bootstrap the WoT from GnuPG, but not expose the key), you might be able to encrypt the message to all people who signed your GnuPG key. Then these can recognize you, but others cannot. This will lose you the indirect trust in the GnuPG web of trust, though.
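The automatic-trust step above boils down to a simple set intersection once every WoT ID can be tied to a verified GnuPG fingerprint. A minimal sketch (all names invented; the signature verification itself is not modelled here):

```python
# Sketch of the proposed trust transfer: trust in a GnuPG fingerprint
# carries over to the WoT ID that proved control of that fingerprint
# via the signed bootstrap file.

def infer_wot_trust(id_to_fingerprint, trusted_fingerprints):
    """Return the set of WoT IDs whose linked GnuPG key we trust.

    id_to_fingerprint: WoT ID -> GnuPG fingerprint, as proven by the
    signed bootstrap upload described above.
    trusted_fingerprints: fingerprints our local GnuPG key trusts.
    """
    return {wot_id
            for wot_id, fpr in id_to_fingerprint.items()
            if fpr in trusted_fingerprints}
```

Everything hard (verifying the clearsigned file, walking the GnuPG trust database) happens before this step; the point is that the final decision is cheap and can run unattended.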
I hope this bootstrap-WoT draft sounded interesting :) Happy hacking!

# Effortless password protected sharing of files via Freenet

Often you want to exchange some content only with people who know a given password: accessible to everyone in your little group but invisible to the outside world. Until yesterday I thought that problem slightly complex, because everyone in your group needs a given encryption program, and you need a way to share the file without exposing the fact that you are sharing it. Then I learned two handy facts about Freenet:

• <ArneBab> evanbd: If I insert a tiny file without telling anyone the key, can they get the content in some way?
  <evanbd> ArneBab: No.
• <toad_> dogon: KSK@<any string of text> -> generate an SSK private key from the hash of the text
  <toad_> dogon: if you know the string, you can both insert and retrieve it

In other words: just inserting a file into Freenet using the key KSK@<password> creates an invisible, password-protected file which is shared over Freenet. The file is readable and (within limits1) writeable by everyone who knows the password, but invisible to everyone else.

To upload a file as a KSK, just go to the filesharing tab, click “upload a file”, switch to advanced mode and enter the KSK key. Or simply click here (requires Freenet to be running on your computer with default settings).

It's strange to think that I only learned this after more than 7 years of using Freenet. How many more nuggets might be hidden there, just waiting for someone to find them and document them in a style which normal users understand? Freenet is a distributed datastore which can find and transfer data efficiently on restricted routes (search for meshnet scaling to see why that type of routing is really hard), and it uses a WebOfTrust for real-life spam-resistance without the need for a central authority (look at your mailbox to see how hard that is, even with big money).
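The KSK trick from the IRC quote is just deterministic key derivation. A conceptual sketch in Python (this is not Freenet's actual key-derivation code; the hash choice and function name are mine):

```python
# Why KSK@<password> works, conceptually: the keypair is derived
# deterministically from the string, so everyone who knows the string
# derives the same key material and can therefore both insert and
# retrieve - while everyone else sees nothing.
import hashlib

def ksk_seed(text: str) -> bytes:
    """Hypothetical stand-in for deriving an SSK keypair seed
    from the text after KSK@."""
    return hashlib.sha256(text.encode("utf-8")).digest()
```

Same password, same keys, for anyone anywhere; a guessing attacker has to hit the exact string, so a long passphrase-style KSK is much safer than a single word.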
How many more complex problems might it already have solved as a byproduct of the search for censorship resistance?

So, what's still to be said? Well, if Freenet sounds interesting: join in!

1. A KSK is writeable with the limit that you cannot replace the file while people still have it in their stores: you have to wait till it has been displaced, or be aware that two states of the file now exist: one with your content and one with the old. Better just define a series of KSKs: add a number to the KSK, and if you want to write, simply insert the next one.

# Exploring the probability of successfully retrieving a file in freenet, given different redundancies and chunk lifetimes

In this text I want to explore the behaviour of the degrading yet redundant anonymous file storage in Freenet. It only applies to files which were not subsequently retrieved. Every time you retrieve a file, it gets healed, which effectively resets its timer as far as these calculations are concerned. Due to this, popular files can and do live for years in Freenet.

## 1 Static situation

First off, we can calculate the retrievability of a given file with different redundancy levels, given fixed chunk retrieval probabilities. Files in Freenet are cut into segments, which are again cut into up to 256 chunks each. With the current redundancy of 100%, only half the chunks of each segment have to be retrieved to get the whole file. I call that redundancy “2x”, because it inserts data 2x the size of the file (actually that's just what I used in the code, and I don't want to force readers - or myself - to make mental jumps while switching from prose to code).

We know from the tests done by digger3 that after 31 days about 50% of the chunks are still retrievable, and after 30 days about 30%. Let's look at how that affects our retrieval probabilities.

```python
# encoding: utf-8
from spielfaehig import spielfähig
from collections import defaultdict

data = []
res = []
for chunknumber in range(5, 105, 5):
    ...

byred = defaultdict(list)
for num, prob, red, retrieval in data:
    ...

csv = "; num prob retrieval"
for red in byred:
    ...

# now plot the files
plotcmd = """
set term png
set width 15
set xlabel "chunk probability"
set ylabel "retrieval probability"
set output freenet-prob-redundancy-2.png
plot "2.csv" using 2:3 select ($1 == 5) title "5 chunks", "" using 2:3 select ($1 == 10) title "10 chunks", "" using 2:3 select ($1 == 30) title "30 chunks", "" using 2:3 select ($1 == 100) title "100 chunks"
set output freenet-prob-redundancy-3.png
plot "3.csv" using 2:3 select ($1 == 5) title "5 chunks", "" using 2:3 select ($1 == 10) title "10 chunks", "" using 2:3 select ($1 == 30) title "30 chunks", "" using 2:3 select ($1 == 100) title "100 chunks"
set output freenet-prob-redundancy-4.png
plot "4.csv" using 2:3 select ($1 == 5) title "5 chunks", "" using 2:3 select ($1 == 10) title "10 chunks", "" using 2:3 select ($1 == 30) title "30 chunks", "" using 2:3 select ($1 == 100) title "100 chunks"
"""
with open("plot.pyx", "w") as f:
    ...

from subprocess import Popen
Popen(["pyxplot", "plot.pyx"])
```

So what does this tell us? This looks quite good. After all, we can push the lifetime as high as we want by just increasing the redundancy. Sadly, it is also utterly wrong :) Let's try to get closer to the real situation.

## 2 Dynamic Situation: The redundancy affects the replacement rate of chunks

To find a better approximation of the effects of increasing the redundancy, we have to stop looking at Freenet as a fixed store and start seeing it as a process. More exactly: we have to look at the replacement rate.

### 2.1 Math

A look at the stats from digger3 shows us that after 4 weeks 50% of the chunks are gone. Let's call this the dropout rate. The dropout rate consists of churn and chunk replacement:

dropout = churn + replacement

Since after one day the dropout rate is about 10%, I'll assume that the churn is lower than 10%.
So for the following parts I’ll just ignore the churn (naturally this is wrong, but since the churn is not affected by redundancy, I just take it as a constant factor. It should reduce the negative impact of increasing redundancy). So we will only look at replacement of blocks. Replacement consists of new inserts and healing of old files:

replacement = insert + healing

If we increase the redundancy from 2 to 3, the insert and healing rate should both increase by 50%, so the replacement rate should increase by 50%, too. The healing rate might increase a bit more, because healing can now restore 66% of the file as long as at least 33% are available. I’ll ignore that, too, for the time being (which is wrong again. We will need to keep this in mind when we look at the result).

redundancy 2 → 3 ⇒ replacement rate × 1.5

Increasing the replacement rate by 50% should decrease the lifetime of chunks by 1/1.5, or:

chunk lifetime × 2/3

So we will be at the 50% limit not after 4 weeks, but after about 19 days. But on the other hand, redundancy 3 only needs 33% chunk probability, which has 2× the lifetime of 50% chunk probability. So the file lifetime should change by 2 × 2/3 = 4/3:

file lifetime × 4/3 = file lifetime + 33%

Now doesn’t that look good? As you can imagine, this pretty picture hides a clear drawback: the total storage capacity of Freenet gets reduced by 33%, too, because now every file requires 1.5× as much space as before.

### 2.2 Caveats (whoever invented that name? :) )

We ignored churn, so the chunk lifetime reduction should be a bit less than the estimated 33%. That’s good and life is beautiful, right? :) NO. We also ignored the increase in the healing rate. This should be higher, because every retrieved file can now insert more of itself in the healing process. If we had no new inserts, I would go as far as saying that the healing rate might actually double with the increased redundancy.
So in a completely filled network without new data, the effects of the higher redundancy and the higher replacement rate would exactly cancel - but the higher-redundancy network would be able to store fewer files. Since we are constantly pushing new data into the network (for example via discussions in Sone), this should not be the case.

### 2.3 Dead space

Aside from hiding some bad effects, this simple model also hides a nice effect: a decreased amount of dead space. First off, let’s define it:

### 2.4 What is dead space?

Dead space is the part of the storage space which cannot be used for retrieving files. With any redundancy, that dead space is just about the size of the original file without the redundancy multiplier. So for redundancy 2, the storage space occupied by the file is dead when less than 50% of the chunks are available. With redundancy 3, it is dead when less than 33% are available.

### 2.5 Effect

That dead space is replaced like any other space, but it is never healed. So the higher replacement rate means that dead space is recovered more quickly. So, while a network with higher redundancy can store fewer files overall, those files which can no longer be retrieved take up less space. I won’t add the math for that here, though (because I did not do it yet).

### 2.6 Closing

So, as a closing remark, we can say that increasing the redundancy will likely increase the lifetime of files. It will also reduce the overall storage space in Freenet, though. I think it would be worthwhile. It might also be possible to give probability estimates in the GUI which show how likely it is that we can retrieve a given file after a few percent were downloaded:

• If more than 1/redundancy of the chunks succeed, the probability of getting the file is high.
• If close to 1/redundancy succeed, the file will be slow, because we might have to wait for nodes which went offline and will come back at some point. Essentially we will have to hope for churn.
• If much less than 1/redundancy of the chunks succeed, we can stop trying to get the file.

Just use the code in here for that :)

## 3 Background and deeper look

Why redundancy at all? The minimum fraction of chunks needed for retrieval is 1/redundancy:

• redundancy 1: 1 chunk fails ⇒ the file fails.
• redundancy 2: 50% of the chunks are needed.
• redundancy 3: 33% of the chunks are needed.

### 3.1 No redundancy

Let’s start with redundancy 1. If one chunk fails, the whole file fails. Compared to freenet today, the replacement rate would be halved, because each file takes up only half the current space. So the 50% dead chunks rate would be reached after 8 weeks instead of after 4 weeks. And 90% chunk availability would be reached after 2 days instead of after 1 day. We can guess that 99% would be reached after a few hours.

Let’s take a file with 100 chunks as an example. That’s 100 × 32 kiB, or about 3 Megabyte. After a few hours the chance will be very high that it has lost one chunk and is irretrievable. Freenet will still have 99% of the chunks, but they will be wasted space, because the file cannot be recovered anymore. The average lifetime of a file will be just a few hours. With 99% probability of retrieving a chunk, the probability of retrieving the file will be only about 37%.

```
from spielfaehig import spielfähig
return spielfähig(0.99, 100, 100)
→ 0.366032341273
```

To achieve 90% retrievability of the file, we need a chunk availability of 99.9%! The file is essentially dead directly after the insert finishes.

```
from spielfaehig import spielfähig
return spielfähig(0.999, 100, 100)
→ 0.904792147114
```

### 3.2 1% redundancy

Now let’s add one redundant chunk. Almost nothing changes for inserting and replacing, but now the probability of retrieving the file when the chunks have 99% availability is 73%!

```
from spielfaehig import spielfähig
return spielfähig(0.99, 101, 100)
→ 0.732064682546
```

The replacement rate is increased by 1%, as is the storage space. To achieve 90% retrievability, we now only need a chunk availability of 99.5%. So we might have 90% retrievability one hour after the insert.
```
from spielfaehig import spielfähig
return spielfähig(0.995, 101, 100)
→ 0.908655654736
```

Let’s check for 50%: we need a chunk probability of about 98.4%.

```
from spielfaehig import spielfähig
return spielfähig(0.984, 101, 100)
→ 0.518183035909
```

The mean lifetime of a file changed from about zero to a few hours.

### 3.3 50% redundancy

Now let’s take a big step: redundancy 1.5. Now we need 71.2% block retrievability to have a 90% chance of retrieving the file.

```
from spielfaehig import spielfähig
return spielfähig(0.712, 150, 100)
→ 0.904577767501
```

For 50% retrievability we need 66.3% chunk availability.

```
from spielfaehig import spielfähig
return spielfähig(0.663, 150, 100)
→ 0.500313163333
```

66% would be reached in the current network after about 20 days (between 2 weeks and 4 weeks; see the fetch-pull stats), and in a zero-redundancy network after 40 days. At the same time, though, the chunk replacement rate increased by 50%, so the mean chunk lifetime decreased by a factor of 2/3. So the lifetime of a file would be 4 weeks.

### 3.4 Generalize this

So, now we have calculations for redundancy 1, 1.5, 2 and 3. Let’s see if we can find a general (if approximate) rule for redundancy. From the fetch-pull graph from digger3 we see empirically that between one week and 18 weeks, each doubling of the lifetime corresponds to a reduction of the chunk retrieval probability of 15% to 20%. Also we know that 50% probability corresponds to 4 weeks lifetime. And we know that redundancy x has a minimum required chunk probability of 1/x. With this, we can model the required chunk lifetime as a function of redundancy:

chunk lifetime = 4 * 2**((0.5 - 1/x) / 0.2)

with x as redundancy. Note: this function is purely empirical and approximate. Having the chunk lifetime, we can now model the lifetime of a file as a function of its redundancy:

file lifetime = (2/x) * 4 * 2**((0.5 - 1/x) / 0.2)

We can now use this function to find an optimum of the redundancy if we are only concerned about file lifetime.
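Since the model is explicit, the optimum can also be located numerically instead of graphically. A small sketch (the file-lifetime formula is the one above; the grid search itself is just my quick check, not part of the original calculation):

```python
# Model from the text: file lifetime (in weeks) as a function of redundancy x.
def file_lifetime(x):
    return (2 / x) * 4 * 2 ** ((0.5 - 1 / x) / 0.2)

# Simple grid search over redundancy values from 1 to 10 in steps of 0.01.
xs = [1 + i * 0.01 for i in range(901)]
best = max(xs, key=file_lifetime)
print(best, file_lifetime(best))
# The maximum sits near x ≈ 3.5, at roughly 4.8 weeks -
# a bit less than one week above the 4 weeks at redundancy 2.
```

This agrees with the summary below: the gain peaks at under a week, and redundancies beyond roughly 3.5 start losing lifetime again.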
Naturally we could get the trusty wxmaxima and take the derivative to find the maximum. But that is not installed right now, and my skills at getting derivatives by hand are a bit rusty (note: install running). So we just do it graphically. The function is not perfectly exact anyway, so the errors introduced by the graphic solution should not be too big compared to the errors in the model. Note, however, that this model is only valid in the range between 20% and 90% chunk retrieval probability, because the approximation for the chunk lifetime does not hold outside that range. Due to this, redundancy values close to or below 1 won’t be correct. Also keep in mind that it does not include the effect of the higher rate of removing dead space - which is space that belongs to files which cannot be recovered anymore. This should mitigate the higher storage requirement of higher redundancy.

```python
# encoding: utf-8
plotcmd = """
set term png
set width 15
set xlabel "redundancy"
set ylabel "lifetime [weeks]"
set output "freenet-prob-function.png"
set xrange [0:10]
plot (2/x) * 4 * (2**((0.5-1/x)/0.2))
"""
with open("plot.pyx", "w") as f: ...

from subprocess import Popen
Popen(["pyxplot", "plot.pyx"])
```

## 4 Summary: Merit and outlook

Now, what do we make of this? First off: if the equations are correct, an increase in redundancy would improve the lifetime of files by a maximum of almost a week. Going further reduces the lifetime, because the increased replacement of old data outpaces the improvement due to the higher redundancy. Also, higher redundancy needs more storage, which reduces the overall capacity of freenet. This should be partially offset by the faster purging of dead storage space. The results support an increase in redundancy from 2 to 3, but not to 4. Well, and aren’t statistics great?
:)

Additional notes: This exploration ignores:

• healing creates less insert traffic than new inserts by only inserting failed segments, and it makes files which get accessed regularly live much longer,
• inter-segment redundancy improves the retrieval of files, so they can cope with a retrievability of 50% of any chunks of the file, even if the distribution might be skewed for a single segment,
• non-uniformity of the network, which makes it hard to model effects with global-style math like this,
• separate stores for SSK and CHK keys, which improve the availability of small websites, and
• the usability and security impact of increased insert times (might be reduced by only inserting 2/3rd of the file data and letting healing do the rest when the first downloader gets the file).

Due to that, the findings can only provide clues for improvements, but cannot perfectly predict the best path of action. Thanks to evanb for pointing them out!

If you are interested in other applications of the same theory, you might enjoy my text Statistical constraints for the design of roleplaying games (RPGs) and campaigns (german original: Statistische Zwänge beim Rollenspiel- und Kampagnendesign). The script spielfaehig.py I used for the calculations was written for a forum discussion which evolved into that text :)

This text was written and checked in emacs org-mode and exported to HTML via org-export-as-html-to-buffer. The process integrated research and documentation. In hindsight, that was a pretty awesome experience, especially the inline script evaluation. I also attached the org-mode file for your leisure :)

# Freenet anonymity: Best case and Worst case

As the i2p people say, anonymity is not a boolean. Freenet allows you to take it a good deal further than i2p or tor, though. If you do it right.
• Worst case: If all of Apple wanted to find you, because you declared that you would post the videos of the new iDing - and already sent them your videos as a teaser before starting to upload them from an Apple computer (and that just after they lost their beloved dictator), you might be in trouble if you use Opennet. You are about as safe as with tor or i2p.
• Best case: If a local politician wanted to find you, after you uploaded proof that he takes bribes, and you compressed these files along with some garbage data and used Freenet in Darknet mode with connections only to friends who would rather die than let someone take over their computer, there’s no way in hell you’d get found due to freenet (the file data could betray you, or they could find you by other means, but Freenet won’t be your weak spot).

Naturally real life is somewhere in between. Things which improve anonymity a lot in the best case:

• Don’t let others know the data you are going to upload before the upload finishes (that would allow some attacks).
• Use only Darknet with trusted friends (Darknet means that you connect only to people you know personally. For that it is necessary to know other people who use Freenet).
• Upload small files, so the time in which you are actively uploading is short.

Implied are:

• Use an OS without trojans. So no Windows. (Note: Linux can be hacked, too, but it is far less likely to already have been compromised.)
• Use no Apple devices. You don’t control them yourself and can’t know what they have under the hood. (You are compromised from the moment you buy them.)
• If you use Android, flash it yourself to give it an OS you control (Freenet is not yet available for Android. That would be a huge task).
• Know your friends.

Important questions to ask:

• Who would want to find you?
• How much would they invest to find you?
• Do they already try to monitor Freenet?
(In that case uploading files with known content would be dangerous.)
• Do they already know you personally? If yes, and if they might have already compromised your computer or internet connection, you can’t upload anything anonymously anywhere. In that case, never let the material get onto your computer in the first place. Let someone else upload it who is not monitored (yet).
• Can they eavesdrop on your internet connection? Then they might guess that you use Freenet from the amount of encrypted communication you do, and might want to bug your computer just in case you want to use freenet against them some day.

See the Security Summary (mostly possible attacks) in the freenet wiki for details.

# Freenet: WoT, database error, recovery patch

I just had a database error in WoT (the Freenet generic Web of Trust plugin) and couldn’t access one of my identities anymore (plus I didn’t have a backup of its private keys, though it told me to keep backups – talk about carelessness :) ). I asked p0s on IRC and he helped me patch together a WoT which doesn’t access the context for editing the ID (and in turn misses some functionality). This allowed me to regain my ID’s private key and with that redownload my ID from freenet. I didn’t want that patch rotting on my drive, so I uploaded it here: disable-context-checks-regain-keys.path

Essentially it just comments out some stuff.

# Infocalypse - Make your code survive the information apocalypse

Anonymous DVCS in the Darknet.

This is a mirror of the documentation of the infocalypse extension for Mercurial written by djk - published here with his permission. It is licensed solely under the GPLv2 or later.

## Introduction

The Infocalypse 2.0 hg extension is an extension for Mercurial that allows you to create, publish and maintain incrementally updateable repositories in Freenet. Your code is then hosted decentrally and anonymously, making it just as censorship-resistant as all other content in Freenet.
It works better than the other DVCSs currently available for Freenet. Most of the information you will find in this document can also be found in the extension's online help, i.e.:

hg help infocalypse

# HOWTO: Infocalypse 2.0 hg extension

updated: 20090927

Note: Contains Freenet-only links

## Table of Contents

## Requirements

The extension has the following dependencies:

• Freenet You can find more information on Freenet here: http://freenetproject.org/ [HTTP Link!]
• Python I test on Python 2.5.4 and 2.6.1. Any 2.5.x or later version should work. Earlier versions may work. You probably won't have to worry about installing Python. It's included in the Windows binary Mercurial distributions, and most *nix-flavor OSs should have a reasonably up-to-date version of Python installed.
• Mercurial You can find more information on Mercurial here: http://mercurial.selenic.com/ [HTTP Link!] Version 1.0.2 won't work. I use version 1.2.1 (x86 Gentoo) on a daily basis. Later versions should work. I've smoke-tested 1.1.2 (on Ubuntu Jaunty Jackalope) and 1.3 (on Windows XP) without finding any problems.
• FMS Installation of the Freenet Messaging System (FMS) is optional but highly recommended. The hg fn-fmsread and hg fn-fmsnotify commands won't work without FMS. Without fn-fmsread it is extremely difficult to reliably detect repository updates. The official FMS freesite is here:

USK@0npnMrqZNKRCRoGojZV93UNHCMN-6UU3rRSAmP6jNLE,~BG-edFtdCC1cSH4O3BWdeIYa8Sw5DfyrSV-TKdO5ec,AQACAAE/fms/106/

## Installation

You checked the requirements and understand the risks, right? Here are step-by-step instructions on how to install the extension.

• Download the bootstrap hg bundle:

CHK@S~kAIr~UlpPu7mHNTQV0VlpZk-f~z0a71f7DlyPS0Do,IB-B5Hd7WePtvQuzaUGrVrozN8ibCaZBw3bQr2FvP5Y,AAIC--8/infocalypse2_1723a8de6e7c.hg

You'll get a Potentially Dangerous Content warning from fproxy because the mime type isn't set. Choose 'Click here to force your browser to download the file to disk.'.
I'll refer to the directory that you saved the bundle file to as DOWNLOAD_DIR.

• Create an empty directory where you want to install the extension. I'll refer to that directory as INSTALL_DIR in the rest of these instructions.
• Create an empty hg repository there, i.e.:

cd INSTALL_DIR
hg init

• Unbundle the bootstrap bundle into the new repository, i.e.:

hg pull DOWNLOAD_DIR/infocalypse2_1723a8de6e7c.hg
hg update

• Edit the '[extensions]' section of your .hgrc/mercurial.ini file to point to the infocalypse directory in the unbundled source.

# .hgrc/mercurial.ini snippet
[extensions]
infocalypse = INSTALL_DIR/infocalypse

where INSTALL_DIR is the directory you unbundled into. If you don't know where to find/create your .hgrc/mercurial.ini file, this link may be useful: http://www.selenic.com/mercurial/hgrc.5.html [HTTP Link!]

• Run fn-setup to create the config file and temp dir, i.e.:

hg fn-setup

If you run your Freenet node on another machine or on a non-standard port, you'll need to use the --fcphost and/or --fcpport parameters to set the FCP host and port respectively. By default fn-setup will write the configuration file for the extension (.infocalypse on *nix, infocalypse.ini on Windows) into your home directory and also create a temp directory called infocalypse_tmp there. You can change the location of the temp directory by using the --tmpdir argument. If you want to put the config file in a different location, set the cfg_file option in the [infocalypse] section of your .hgrc/mercurial.ini file before running fn-setup. Example .hgrc entry:

# Snip, from .hgrc
[infocalypse]
cfg_file = /mnt/usbkey/s3kr1t/infocalypse.cfg

• Edit the fms_id and possibly the fms_host/fms_port information in the .infocalypse/infocalypse.ini file, i.e.:

# Example .infocalypse snippet
fms_id = YOUR_FMS_ID
fms_host = 127.0.0.1
fms_port = 1119

where YOUR_FMS_ID is the part of your fms id before the '@' sign.
If you run FMS with the default settings on the same machine you are running Mercurial on, you probably won't need to adjust the fms_host or fms_port. You can skip this step if you're not running fms.

• Read the latest known version of the extension's repository USK index from FMS:

hg fn-fmsread -v

You can skip this step if you're not running fms.

• Pull the latest changes to the extension from Freenet for the first time. Don't skip this step! i.e.:

hg fn-pull --aggressive --debug --uri USK@kRM~jJVREwnN2qnA8R0Vt8HmpfRzBZ0j4rHC2cQ-0hw,2xcoQVdQLyqfTpF2DpkdUIbHFCeL4W~2X1phUYymnhM,AQACAAE/infocalypse.hgext.R1/41
hg update

You may have trouble finding the top key if you're not using fn-fmsread. Just keep retrying. If you know the index has increased, use the new index in the URI. After the first pull, you can update without the URI.

## Updating

This extension is under active development. You should periodically update to get the latest bug fixes and new features. Once you've installed the extension and pulled it for the first time, you can get updates by cd'ing into the initial INSTALL_DIR and typing:

hg fn-fmsread -v
hg fn-pull --aggressive
hg update

If you're not running FMS you can skip the fn-fmsread step. You may have trouble getting the top key. Just keep retrying. If you're having trouble updating and you know the index has increased, use the full URI with the new index as above.

## Background

Here's background information that's useful when using the extension. See the Infocalypse 2.0 hg extension page on my freesite for a more detailed description of how the extension works.

### Repositories are collections of hg bundle files

An Infocalypse repository is just a collection of hg bundle files which have been inserted into Freenet as CHKs, plus some metadata describing how to pull the bundles to reconstruct the repository that they represent. When you 'push' to an infocalypse repository, a new bundle CHK is inserted with the changes since the last update.
When you 'pull', only the CHKs for bundles for changesets not already in the local repository need to be fetched.

### Repository USKs

The latest version of the repository's metadata is stored on a Freenet Updateable Subspace Key (USK) as a small binary file. You'll notice that repository USKs end with a number without a trailing '/'. This is an important distinction. A repository USK is not a freesite. If you try to view one with fproxy you'll just get a 'Potentially Dangerous Content' warning. This is harmless and ugly, but unavoidable at the current time because of limitations in fproxy/FCP.

### Repository top key redundancy

Repository USKs that end in *.R1/<number> are inserted redundantly, with a second USK insert done on *.R0/<number>. Top key redundancy makes it easier for other people to fetch your repository. Inserting to a redundant repository USK makes the inserter more vulnerable to correlation attacks. Don't use '.R1' USKs if you're worried about this.

### Repository Hashes

Repository USKs can be long and cumbersome. A repository hash is the first 12 hex digits of the SHA1 hash of the zero-index version of a repository USK, e.g.:

SHA1( USK@kRM~jJVREwnN2qnA8R0Vt8HmpfRzBZ0j4rHC2cQ-0hw,2xcoQVdQLyqfTpF2DpkdUIbHFCeL4W~2X1phUYymnhM,AQACAAE/infocalypse.hgext.R1/0 ) == 'be68e8feccdd'

You can get the repository hash for a repository USK using:

hg fn-info

from a directory the repository USK has been fn-pull'd into. You can get the hashes of repositories that other people have announced via fms with:

hg fn-fmsread --listall

Repository hashes are used in the fms update trust map.

### The default private key

When you run fn-setup, it creates a default SSK private key, which it stores in the default_private_key parameter in your .infocalypse/infocalypse.ini file. You can edit the config file to substitute any valid SSK private key you want. If you specify an Insert URI without the key part for an infocalypse command, the default private key is filled in for you.
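The hash computation described above can be sketched in a few lines. This is an illustration under the assumption that the hash is taken over the USK string encoded as UTF-8; the real extension may canonicalize the URI differently before hashing:

```python
import hashlib

def repo_hash(usk):
    """First 12 hex digits of the SHA1 of a zero-index repository USK.
    Hypothetical helper - the actual extension may normalize the URI first."""
    return hashlib.sha1(usk.encode("utf-8")).hexdigest()[:12]

usk = ("USK@kRM~jJVREwnN2qnA8R0Vt8HmpfRzBZ0j4rHC2cQ-0hw,"
       "2xcoQVdQLyqfTpF2DpkdUIbHFCeL4W~2X1phUYymnhM,"
       "AQACAAE/infocalypse.hgext.R1/0")
print(repo_hash(usk))  # a short 12-character hex identifier
```

In practice you would use hg fn-info rather than computing the hash yourself.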
i.e.:

hg fn-create --uri USK@/test.R1/0

inserts the local hg repository into a new USK in Freenet, using the private key in your config file.

### USK <--> Directory mappings

The extension's commands 'remember' the insert and request repository USKs they were last run with when run again from the same directory. This makes it unnecessary to retype cumbersome repository USK values once a repository has been successfully pulled or pushed from a directory.

### Aggressive top key searching

fn-pull and fn-push have an --aggressive command line argument which causes them to search harder for the latest request URI. This can be slow, especially if the USK index is much lower than the latest index in Freenet. You will need to use it if you're not using FMS update notifications.

## Basic Usage

Here are examples of basic commands.

### Generating a new private key

You can generate a new private key with:

hg fn-genkey

This has no effect on the stored default private key. Make sure to change the 'SSK' in the InsertURI to 'USK' when supplying the insert URI on the command line.

### Creating a new repository

hg fn-create --uri USK@/test.R1/0

inserts the local hg repository into a new USK in Freenet, using the private key in your config file. You can use a full insert URI value if you want. If you see an "update -- Bundle too big to salt!" warning message when you run this command, you should consider running fn-reinsert --level 4.

### Pushing to a repository

hg fn-push --uri USK@/test.R1/0

pushes incremental changes from the local directory into an existing Infocalypse repository. The <keypart>/test.R1/0 repository must already exist in Freenet. In the example above the default private key is used. You could have specified a full Insert URI. The URI must end in a number, but the value doesn't matter because fn-push searches for the latest unused index. You can omit the --uri argument when you run from the same directory the fn-create (or a previous fn-push) was run from.
### Pulling from a repository

hg fn-pull --uri <request uri>

pulls from an Infocalypse repository in Freenet into the local repository. Here's an example with a fully specified uri. You can omit the --uri argument when you run from the same directory a previous fn-pull was successfully run from. For maximum reliability use the --aggressive argument.

## Using FMS to send and receive update notifications

The extension can send and receive repository update notifications via FMS. It is highly recommended that you set up this feature.

### The update trust map

There's a trust map in the .infocalypse/infocalypse.ini config file which determines which fms ids can update the index values for which repositories. It is purely local and completely separate from the trust values which appear in the FMS web of trust. The format is:

<number> = <fms_id>|<usk_hash0>|<usk_hash1>| ... |<usk_hashn>

The number value must be unique, but is otherwise ignored. The fms_id values are the full FMS ids that you are trusting to update the repositories with the listed hashes. The usk_hash* values are repository hashes. Here's an example trust map config entry:

# Example .infocalypse snippet
[fmsread_trust_map]
1 = test0@adnT6a9yUSEWe5p8J-O1i8rJCDPqccY~dVvAmtMuC9Q|55833b3e6419
0 = djk@isFiaD04zgAgnrEC5XJt1i4IE7AkNPqhBG5bONi6Yks|be68e8feccdd|5582404a9124
2 = test1@SH1BCHw-47oD9~B56SkijxfE35M9XUvqXLX1aYyZNyA|fab7c8bd2fc3

You must update the trust map to enable index updating for repos other than the one this code lives in (be68e8feccdd). You can edit the config file directly if you want. However, the easiest way to update the trust map is by using the --trust and --untrust options on fn-fmsread.
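To make the entry format concrete, here is a small sketch that parses trust map lines of the shape shown above into a mapping from fms id to repository hashes. The parsing code is mine, written from the format description - it is not part of the extension:

```python
def parse_trust_map(lines):
    """Parse '<number> = <fms_id>|<hash0>|<hash1>|...' entries into
    a {fms_id: [repo hashes]} dict. Illustration only, not extension code."""
    trust = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith(("#", "[")):
            continue  # skip comments and the [fmsread_trust_map] header
        _, _, value = line.partition("=")
        fms_id, *hashes = value.strip().split("|")
        trust[fms_id] = hashes
    return trust

entry = "0 = djk@isFiaD04zgAgnrEC5XJt1i4IE7AkNPqhBG5bONi6Yks|be68e8feccdd|5582404a9124"
print(parse_trust_map([entry]))
# → {'djk@isFiaD04zgAgnrEC5XJt1i4IE7AkNPqhBG5bONi6Yks': ['be68e8feccdd', '5582404a9124']}
```

Note that, as the text says, the leading number only has to be unique; this sketch accordingly ignores it.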
For example, to trust falafel@IxVqeqM0LyYdTmYAf5z49SJZUxr7NtQkOqVYG0hvITw to notify you about changes to the repository with repo hash 2220b02cf7ee, type:

hg fn-fmsread --trust --hash 2220b02cf7ee --fmsid falafel@IxVqeqM0LyYdTmYAf5z49SJZUxr7NtQkOqVYG0hvITw

And to stop trusting that FMS id for updates to 2220b02cf7ee, you would type:

hg fn-fmsread --untrust --hash 2220b02cf7ee --fmsid falafel@IxVqeqM0LyYdTmYAf5z49SJZUxr7NtQkOqVYG0hvITw

To show the trust map, type:

hg fn-fmsread --showtrust

### Reading other people's notifications

hg fn-fmsread -v

will read update notifications for all the repos in the trust map and locally cache the new latest index values. If you run with -v, it prints a message when updates are available which weren't used because the sender(s) weren't in the trust map.

hg fn-fmsread --list

displays announced repositories from fms ids that appear in the trust map.

hg fn-fmsread --listall

displays all announced repositories, including ones from unknown fms ids.

### Pulling an announced repository

You can use the --hash option with fn-pull to pull any repository you see in the fn-fmsread --list or fn-fmsread --listall lists. For example, to pull the latest version of the infocalypse extension code, cd to an empty directory and type:

hg init
hg fn-pull --hash be68e8feccdd --aggressive

### Posting your own notifications

hg fn-fmsnotify -v

posts an update notification for the current repository to fms. You MUST set the fms_id value in the config file to your fms id for this to work. Use --dryrun to double-check before sending the actual fms message. Use --announce at least once if you want your USK to show up in the fmsread --listall list. By default notifications are written to and read from the infocalypse.notify fms group. The read and write groups can be changed by editing the following variables in the config file:

fmsnotify_group = <group>
fmsread_groups = <group0>[|<group1>|...]

fms can have pretty high latency. Be patient.
It may take hours (sometimes a day!) for your notification to appear. Don't send lots of redundant notifications.

## Reinserting and 'sponsoring' repositories

hg fn-reinsert

will re-insert the bundles for the repository that was last pulled into the directory. The exact behavior is determined by the level argument:

• 1 - re-inserts the top key(s)
• 2 - re-inserts the top key(s), graph(s) and the most recent update.
• 3 - re-inserts the top key(s), graph(s) and all keys required to bootstrap the repo. This is the default level.
• 4 - adds redundancy for big (>7Mb) updates.
• 5 - re-inserts existing redundant big updates.

Levels 1 and 4 require that you have the private key for the repository. For other levels, the top key insert is skipped if you don't have the private key. DO NOT use fn-reinsert if you're concerned about correlation attacks. The risk is on the order of re-inserting a freesite, but may be worse if you use redundant (i.e. USK@<line noise>/name.R1/0) top keys.

## Forking a repository onto a new USK

hg fn-copy --inserturi USK@/name_for_my_copy.R1/0

copies the Infocalypse repository which was fn-pull'd into the local directory onto a new repository USK under your default private key. You can use a full insert URI if you want. This only requires copying the top key data (a maximum of 2 SSK inserts).

## Sharing private keys

It is possible for multiple people to collaborate anonymously over Freenet by sharing the private key to a single Infocalypse repository. The FreeFAQ is an example of this technique. Here are some things to keep in mind when sharing private keys.

• There is no (explicit) key revocation in Freenet. If you decide to share keys, you should generate a special key on a per-repo basis with fn-genkey. There is no way to revoke a private key once it has been shared. This could be mitigated with an ad-hoc convention, e.g. if I find any file named USK@<public_key>/revoked.txt, I stop using the key.
• Non-atomic top key inserts. Occasionally, you might end up overwriting someone else's commits because the FCP insert of the repo top key isn't atomic. I think you should be able to merge and re-fn-push to resolve this. You can fn-pull a specific version of the repo by specifying the full URI including the version number with --uri and including the --nosearch option.
• All contributors should be in the fn-fmsread trust map.

## Inserting a freesite

hg fn-putsite --index <n>

inserts a freesite based on the configuration in the freesite.cfg file in the root of the repository. Use:

hg fn-putsite --createconfig

to create a basic freesite.cfg file that you can modify. Look at the comments in it for an explanation of the supported parameters. The default freesite.cfg file inserts using the same private key as the repo and a site name of 'default'. Editing the name is highly recommended. You can use --key CHK@ to insert a test version of the site to a CHK key before writing to the USK.

Limitations:

• You MUST have fn-pushed the repo at least once in order to insert using the repo's private key. If you haven't fn-push'd, you'll see this error: "You don't have the insert URI for this repo. Supply a private key with --key or fn-push the repo."
• It inserts all files in the site_dir directory given in the freesite.cfg file. Run with --dryrun to make sure that you aren't going to insert stuff you don't want to.
• You must manually specify the USK edition you want to insert on. You will get a collision error if you specify an index that was already inserted.
• Don't use this for big sites. It should be fine for notes on your project. If you have lots of images or big binary files, use a tool like jSite instead.
• Don't modify site files while fn-putsite is running.
## Risks

I don't believe that using this extension is significantly more dangerous than using any other piece of Freenet client code, but here is a list of the risks which come to mind:

• Freenet is beta software. The authors of Freenet don't pretend to guarantee that it is free of bugs that could compromise your anonymity or worse. While written in Java, Freenet loads native code via JNI (FEC codecs, bigint stuff, wrapper, etc.) that makes it vulnerable to the same kinds of attacks as any other C/C++ code.
• FMS == anonymous software. FMS is published anonymously on Freenet, and it is written in C++ with dependencies on large libraries which could contain security defects. I personally build FMS from source and run it in a chroot jail. Somedude, the author of FMS, seems like a reputable guy and has conducted himself as such for more than a year.
• Correlation attacks. There is a concern that any system which inserts keys that can be predicted ahead of time could allow an attacker with control over many nodes in the network to eventually find the IP of your node. Any system which has this property is vulnerable, e.g. fproxy freesite insertion, Freetalk, FMS, FLIP. This extension's optional use of redundant top keys may make it particularly vulnerable. If you are concerned, don't use '.R1' keys. Running your node in pure darknet mode with trusted peers may somewhat reduce the risk of correlation attacks.
• Bugs in my code, Mercurial or Python. I do my best, but no one's perfect. There are lots of eyes over the Mercurial and Python source.

## Advocacy

Here are some reasons why I think the Infocalypse 2.0 hg extension is better than pyFreenetHg and egit-freenet:

• Incremental. You only need to insert/retrieve what has actually changed. Changes of up to 32k of compressed deltas can be fetched in as little as one SSK fetch and one CHK fetch.
• Redundant. The top level SSK and the CHK with the representation of the repository state are inserted redundantly, so there are no 'critical path' keys. Updates of up to ~7MB are inserted redundantly by cloning the splitfile metadata at the cost of a single 32k CHK insert.
• Re-insertable. Anyone can re-insert all repository data except for the top level SSKs with a simple command (hg fn-reinsert). The repository owner can re-insert the top level SSKs as well.
• Automatic rollups. Older changes are automatically 'rolled up' into large splitfiles, such that the entire repository can almost always be fetched in 4 CHK fetches or less.
• Fails explicitly. REDFLAG DCI

## Source Code

The authoritative repository for the extension's code is hosted in Freenet:

hg init
hg fn-fmsread -v
hg fn-pull --aggressive --debug --uri USK@kRM~jJVREwnN2qnA8R0Vt8HmpfRzBZ0j4rHC2cQ-0hw,2xcoQVdQLyqfTpF2DpkdUIbHFCeL4W~2X1phUYymnhM,AQACAAE/infocalypse.hgext.R1/41
hg update

It is also mirrored on bitbucket.org:

hg clone http://bitbucket.org/dkarbott/infocalypse_hgext/

## Fixes and version information

• hg version: c51dc4b0d282 - Fixed "abort: <bundle_file> not found!" problem on fn-pull when the hg-git plugin was loaded.
• hg version: 0c5ce9e6b3b4 - Fixed intermittent stall when bootstrapping from an empty repo.
• hg version: 7f39b20500f0 - Fixed bug that kept fn-pull --hash from updating the initial USK index.
• hg version: 7b10fa400be1 - Added fn-fmsread --trust and --untrust and fn-pull --hash support. fn-pull --hash isn't really usable until 7f39b20500f0.
• hg version: ea6efac8e3f6 - Fixed a bug that was causing the berkwood binary 1.3 Mercurial distribution (http://mercurial.berkwood.com/binaries/Mercurial-1.3.exe [HTTP link!]) not to work.

## Freenet-only links

This document is meant to be inserted into Freenet. It contains links (starting with 'CHK@' and 'USK@') to Freenet keys that will only work from within fproxy [HTTP link!].
You can find a reasonably up-to-date version of this document on my freesite:

USK@-bk9znYylSCOEDuSWAvo5m72nUeMxKkDmH3nIqAeI-0,qfu5H3FZsZ-5rfNBY-jQHS5Ke7AT2PtJWd13IrPZjcg,AQACAAE/feral_codewright/15/infocalypse_howto.html

## Contact

FMS: djk@isFiaD04zgAgnrEC5XJt1i4IE7AkNPqhBG5bONi6Yks

I lurk on the freenet and fms boards. If you really need to, you can email me at d kar bott at com cast dot net, but I prefer FMS.

freesite: USK@-bk9znYylSCOEDuSWAvo5m72nUeMxKkDmH3nIqAeI-0,qfu5H3FZsZ-5rfNBY-jQHS5Ke7AT2PtJWd13IrPZjcg,AQACAAE/feral_codewright/15/

[TOC]

# Install and setup infocalypse on GNU/Linux (script)

Install and setup infocalypse on GNU/Linux: setup_infocalypse_on_linux.sh

Just download and run1 it via

wget http://draketo.de/files/setup_infocalypse_on_linux.sh_1_0.txt
bash setup_infocalypse*

This script needs a running freenet node to work!

In-Freenet-link: CHK@RZjy7Whe3vT3aEdox3pEG4fRbmRGsyuybPPhdvr7MoQ,g8YZO1~FAJM5suS7Uch06ugblVPE4YJd1rl15DxAwkY,AAMC--8/setup_infocalypse_on_linux.sh

The script allows you to get and set up the infocalypse extension with a few keystrokes, so you can instantly use the Mercurial DVCS for decentralized, anonymous code sharing over freenet. This gives you code hosting like a minimal version of BitBucket, Gitorious or GitHub, but without the central control. Additionally, the Sone plugin for freenet supplies anonymous communication, and the site extension allows creating static sites with information about the repo, recent commits and such, without needing a dedicated host.

## Basic Usage

Clone a repo into freenet with a new key:

hg clone localrepo USK@/repo

(Write down the insert key and request key after the upload!
localrepo is an existing Mercurial repository.)

Clone a repo into or from freenet (when you know the respective key):

hg clone localrepo freenet://USK@<insert key>/repo.R1/0
hg clone freenet://USK@<request key>/repo.R1/0 [localpath]

Push or pull new changes:

hg push freenet://USK@<insert key>/repo.R1/0
hg pull freenet://USK@<request key>/repo.R1/0

For convenient copy-pasting of freenet keys, you can omit the “freenet://” here, or use freenet:USK@… instead. Also, as shown in the first example, you can let infocalypse generate a new key for your repo:

hg clone localrepo USK@/repo

Mind the “USK@/” (the slash directly after the @ means a missing key). Note also the missing .R1/0 after the repo name and the missing freenet://. Being able to omit those on repository creation is just a convenience feature - but one which helps me a lot.

You can also add the keys to <repo>/.hg/hgrc:

[paths]
example = freenet://USK@<request key>/repo.R1/0
example-push = freenet://USK@<insert key>/repo.R1/0 # here you need the freenet:// !

Then you can simply use

hg push example-push

and

hg pull example

## Contribute

This script is just a quick sketch; feel free to improve it and upload improved versions (for example with support for more GNU/Linux distros). If you experience any problems, please contact me! (i.e. write a comment)

If you want to contribute more efficiently to this script, get the repo via

hg clone freenet://USK@73my4fc2CLU3cSfntCYDFYt65R4RDmow3IT5~gTAWFk,Fg9EAv-Hut~9NCJKtGaGAGpsn1PjA0oQWTpWf7b1ZK4,AQACAAE/setup_infocalypse/1

Then hack on it, commit and upload it again via

hg clone setup_infocalypse freenet://USK@/setup_infocalypse

Finally share the request URI you got.

Alternate repo: http://draketo.de/proj/setup_infocalypse

1. On systems based on Debian or Gentoo - including Ubuntu and many others - this script will install all needed software except for freenet itself. You will have to give your sudo password in the process.
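The shorthand conventions above (optional freenet:// prefix, USK@/ for a fresh key, implied .R1/0 suffix) can be sketched as a small normalization function. This is a hypothetical illustration of the conventions as described, not infocalypse's actual URI parsing; the function name normalize_repo_uri is made up for this sketch.

```python
def normalize_repo_uri(uri: str) -> str:
    """Expand the shorthand forms described above into a full repo URI.

    Hypothetical illustration only - infocalypse does its own parsing.
    """
    # The "freenet://" (or "freenet:") prefix may be omitted when pasting keys.
    if uri.startswith("freenet://"):
        rest = uri[len("freenet://"):]
    elif uri.startswith("freenet:"):
        rest = uri[len("freenet:"):]
    else:
        rest = uri
    # "USK@/repo" (slash right after the @) means: generate a new key.
    # A missing ".R1/0" suffix is filled in with the default edition 0.
    if "/" in rest and not rest.rsplit("/", 1)[-1].isdigit():
        rest += "/0" if rest.endswith(".R1") else ".R1/0"
    return "freenet://" + rest

print(normalize_repo_uri("USK@/repo"))
# → freenet://USK@/repo.R1/0
```

The point of the sketch: all the shorthands collapse into the same canonical freenet://USK@…/repo.R1/<edition> form, which is why omitting them on the command line is harmless.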
Since the script is just a text file with a set of commands, you can simply read it to make sure that it won’t do anything evil with those sudo rights.

# Spread Freenet: A call for action on identi.ca and twitter

“Daddy, where were you, when they took the freedom of the press away from the internet?” — Mike Godwin, Electronic Frontier Foundation

Reposted from Freetalk, the distributed pseudonymous forum in Freenet.

For all those among you who use twitter1 and/or identi.ca2, this is a call to action. Go to your identi.ca or twitter accounts and post about freenet. Tell us in 140 letters why freenet is your tool of choice, and remember to use the !freenet group (identi.ca) or #freenet hashtag (twitter), so we can resend your posts!

I use !freenet because we might soon need it as safe harbour to coordinate the fight against censorship → freenetproject.org !zensur — ArneBab

The broader story is the emerging concept of a right to freely exchange arbitrary data — Toad (the main freenet developer)

## Background

There are still very many people out there who don’t know what freenet is. Just today a coder came into the #freenet IRC channel, asked what it did and learned that it already does everything he had thought about. And I still remember someone telling me “It would be cool if we had something like X-net from Cory Doctorow’s ‘Little Brother’” — he did not know that freenet already offers that, with much improved security.

So we need to get the word out about freenet. And we have powerful words to choose from, beginning with Mike Godwin’s quote above but going much further. To name just a few buzzwords: Freenet is a crowdfunded, distributed and censorship-resistant free software cloud publishing system. And unlike info about corporate PR-powered projects, all these buzzwords are true.

But to make us effective, we need to achieve critical mass. And to reach that, we need to coordinate and cross-promote heavily.
## Call to action

So I want to call on you to go to your identi.ca or twitter accounts and post about freenet. Tell us in 140 letters why freenet is your tool of choice, and remember to use the !freenet group or #freenet hashtag, so we can find and retweet your posts! If you use identi.ca, join the !freenet group, so you get informed about new freenet posts automatically.

We can make a difference if we fight together. And if you always wanted to get an identi.ca account, here’s the opportunity to get it and do something good at the same time :)

If you already have a twitter account, you can connect your identi.ca account to your twitter account, then post to identi.ca and have your post forwarded to twitter automatically.

## Additional info

Besides: My accounts are:

But no need to tell me your account and connect your Freetalk ID with it. Just use identi.ca or twitter and remember to tell your friends to talk about freenet, too (so we can’t find out who read this post and who decided to join in because he learned about the action from a friend).

As second line of defense, I also posted this message to my website and hereby allow anyone to reuse it in any form and under any license (up to the example tweets), so I can’t know who saw it here and who saw it elsewhere.

I hope I’ll soon see floods of enthusiastic tweets and dents about Freenet!

## Some example tweets and/or dents

I’ll gladly post and link yours here, if you allow it!

!Freenet: #crowdfunded distributed and censorship resistant !freesoftware cloud publishing → http://freenetproject.org — rightful buzz! — ArneBab

#imhappiestwhen when the internet is free. I hope it will remain so thanks to projects like #Freenet http://t.co/GMRXmDt — Gaming4JC

#freenet: freedom to publish that you may have to rely on, because censorship and ©ensorship are on the rise — Ixoliva

1. Twitter is a service for sending small text messages to people who “follow” you (up to 140 letters), so it works like a newsticker of journalists.
Sadly it is not free software, so you can’t trust them to keep your data or even just the service available. Its distinctive features are hashtags (#blafoo) for marking and searching messages, and retweeting for passing a message on to the people who read your messages.

2. identi.ca is like twitter and offers the same features and a few more advanced ones, but as a decentralized free software system where everyone can run his own server and connect it to others. When using identi.ca, you make yourself independent from any single provider and can even run the system yourself. And it is free to stay, due to using the AGPL (v3 or later).

# What can Freenet do well already?

This just happened to me in the #freenet IRC channel at freenode.net (somewhat edited):

• toad_1: what can freenet do well already? [18:38]
• sharing and retrieving files asynchronously, freemail, IRC2, publishing sites without need of a central server, sharing code repositories [18:39]
• I can simply go online, upload a file, send the key to a friend and go offline. The friend can then retrieve the file, even though I am already offline, without needing a central server. [18:40]
• it might be kinda slow, but it actually makes it easy to publish stuff: via jSite, floghelper and others. [18:42]
• floghelper is cool: spam-resistant anonymous blogging without a central server
• and freereader is, too (even though it needs lots of polish): forward RSS feeds into freenet
• you can actually exchange passwords in a safe way via freemail: anonymous email with an integrated web interface and imap access.
• Justus and I coordinated the upload of the social networking site onto my FTP solely over freemail, and I did not have any fear of eavesdropping - different from any other mail I write. [18:44]

… I think I should store this conversation somewhere - which I hereby did. I hope you enjoyed this little insight into the #freenet channel :)

And if you grew interested, why not install freenet yourself?
It only takes a few clicks via webstart and you’re part of the censorship-resistant web.

1. toad alias Matthew Toseland is the main developer of freenet. He tends to see more of the remaining challenges and fewer of the achievements than I do - which is a pretty good trait for someone who builds a system to which we might have to entrust our basic right of free speech, if the world goes on like this. From a PR perspective it is a pretty horrible trait, though, because he tends to forget to tell people what freenet can already do well :)

2. To set up the social networking features of Freenet, have a look at the social networking guide.

# Wrapup: Make Sone scale - fast, anonymous, decentralized microblogging over freenet

Sone1 allows fast, identi.ca-style microblogging in Freenet. This is my wrapup of a discussion on the steps to take until Sone can become an integral part of Freenet.

## Current state

• Is close to realtime.
• Downloads all IDs and all their posts and replies → polling, which won’t scale; short term local breakage.
• Uploads all posts on every update → can displace lots of content. Effective size: X*M, X = revisions which did not drop out, M = total number of your messages. Long term self-DDoS of freenet.

## Future

• Is close to realtime for those you follow and your usual discussion group.
• Uploads only recent posts directly and bundles older posts → much reduced storage need. Effective size: B*Z + Y*M; B = posts per bundle, Z = number of bundles which did not drop out, Y = number of not yet bundled messages; Z << Y, B << X, Y << X.
• Downloads only the ones you follow + ones you get told about. Telling others means that you need to include info about people you follow, because you only get information from them.

## Telling others about replies, options

• Include all replies to anyone which I see in my own Sone → size rises massively, since you include all replies of all people you follow in your own Sone.
• Include all IDs from which you saw replies, along with the people they replied to → needs to poll more IDs. Optionally forward that info for several hops → for efficient routing it needs knowledge about the full follower topology, which is a privacy risk.
• Discovering replies from people you don’t know yet: Add a WoT info: replies. Updated only when you reply to someone you did not reply to before. Poll people’s reply lists based on their WoT rating. Keep a list of people who answered one of your posts and poll these more often. Maybe poll people instantly who solve one of your captchas (your general captcha queue) → new users can enter quickly. When you solve captchas in WoT, preferably solve those from people you follow.

→ four ways to discover a reply: 1. poll those you follow, 2. poll the people who posted the latest replies to you (your usual discussion group), 3. poll those who solve one of your captchas (get new people in as fast as possible) and 4. poll the replies-info from everyone, with the polling frequency based on their WoT rating.

1. You can find Sone in Freenet using the key USK@nwa8lHa271k2QvJ8aa0Ov7IHAV-DFOCFgmDt3X6BpCI,DuQSUZiI~agF8c-6tjsFFGuZ8eICrzWCILB60nT8KKo,AQACAAE/sone/38/

# “regarding B.S. like SOPA, PIPA, … freenet seems like a good idea after all!”

“Some years ago, I had a look at freenet and wasn't really convinced, now I'm back - a lot has changed, it grew bigger and insanely fast (in freenet terms), like it a lot, maybe this time I'll keep it. Especially regarding B.S. like SOPA, PIPA and other internet-crippling movements, freenet seems like a good idea after all!” — sparky in Sone

So, if you know freenet and it did not work out for you in the past, it might be time to give it another try: freenetproject.org

This quote just grabbed me, and sparky gave me permission to cite it.

# Mercurial

Mercurial is a distributed source control management tool.

Mercurial links:

- Mercurial Website.
- bitbucket.org - Easy repository publishing.
- Hg Init - A very nice Mercurial tutorial for newcomers.

With it you can save snapshots of your work on documents and go back to these at all times. Also you can easily collaborate with other people and use Mercurial to easily merge your work. Someone changes something in a text file you also worked on? No problem. If you didn't work on the same line, you can simply let Mercurial do an automatic merge and your work will be joined. (If you worked on the same line, you'll naturally have to select how you want to merge these two changes.)

It doesn't need a network connection for normal operation, except when you want to push your changes over the internet or pull changes of others from the web, so its commands are very fast. The time to do a commit is barely noticeable, which makes atomic commits easy to do. And if you already know subversion, the switch to Mercurial will be mostly painless.

But its most important strength is not its speed. It is that Mercurial just works. No hassle with complicated setup. No arcane commands. Almost everything I ever wanted to do with it just worked out of the box, and that's a rare and precious feature today.

I wish you much fun with Mercurial!

# A complete Mercurial branching strategy

This is a complete branching strategy for Mercurial with optional adaptions for maintaining multiple releases1. It shows you all the actions you may need to take, except for those already described in the guide Mercurial in workflows. For examples it uses the command-line UI, but it can easily be used with graphical Mercurial interfaces like TortoiseHG, too.

## Summary

First off, any model to be used by people should boil down to simple rules. Programming is complex enough without having to worry about elaborate branching rules. This model uses 3 simple rules:

(1) you do all the work on default2 - except for hotfixes.

(2) on stable you only do hotfixes, merges for release3 and tagging for release. Only maintainers4 touch stable.
(3) you can use arbitrary feature branches5, as long as you don’t call them default or stable. They always start at default (since you do all the work on default).

## Diagram

To visualize the structure, here’s a 3-tiered diagram. To the left are the actions of developers (commits and feature branches), and in the center the tasks for maintainers (release and hotfix). The users to the right just use the stable branch.6

An overview of the branching strategy. Click the image to get the emacs org-mode ditaa-source.

## Practical Actions

Now we can look at all the actions you will ever need to do in this model:7

• Initialize (only needed once)
  • create the repo: hg init reponame; cd reponame
  • first commit: (edit); hg ci -m "message"
  • create the stable branch and do the first release: hg branch stable; hg tag tagname; hg up default; hg merge stable; hg ci -m "merge stable into default: ready for more development"
• Regular development
  • commit changes: (edit); hg ci -m "message"
  • continue development after a release: hg update; (edit); hg ci -m "message"
• Feature Branches
  • start a larger feature: hg branch feature-x; (edit); hg ci -m "message"
  • continue with the feature: hg update feature-x; (edit); hg ci -m "message"
  • merge the feature: hg update default; hg merge feature-x; hg ci -m "merged feature x into default"
  • close and merge the feature when you are done: hg update feature-x; hg ci --close-branch -m "finished feature x"; hg update default; hg merge feature-x; hg ci -m "merged finished feature x into default"
• Tasks for Maintainers
  • apply a hotfix8: hg up stable; (edit); hg ci -m "message"; hg up default; hg merge stable; hg ci -m "merge stable into default: ready for more development"
  • do a release9: hg up stable; hg merge default; hg ci -m "merged default into stable for release"; hg tag tagname; hg up default; hg merge stable; hg ci -m "merged stable into default: ready for more development"

## Example

This is the output of a complete example run10
of the branching model, including all complications you should ever hit.

We start with the full history. In the following sections, we will take it apart to see what the commands do. So just take a glance, take in the basic structure, and then move on for the details.

hg log -G

@    changeset: 15:855a230f416f
|\   tag: tip
| |  parent: 13:e7f11bbc756c
| |  parent: 14:79b616e34057
| |  user: Arne Babenhauserheide <bab@draketo.de>
| |  date: Sat Jan 26 15:39:49 2013 +0100
| |  summary: merged stable into default: ready for more development
| |
| o  changeset: 14:79b616e34057
|/|  branch: stable
| |  parent: 7:e8b509ebeaa9
| |  parent: 13:e7f11bbc756c
| |  user: Arne Babenhauserheide <bab@draketo.de>
| |  date: Sat Jan 26 15:39:48 2013 +0100
| |  summary: merged default into stable for release
| |
o |    changeset: 13:e7f11bbc756c
|\ \   parent: 11:e77a94df3bfe
| | |  parent: 12:aefc8b3a1df2
| | |  user: Arne Babenhauserheide <bab@draketo.de>
| | |  date: Sat Jan 26 15:39:47 2013 +0100
| | |  summary: merged finished feature x into default
| | |
| o |  changeset: 12:aefc8b3a1df2
| | |  branch: feature-x
| | |  parent: 9:1dd6209b2a71
| | |  user: Arne Babenhauserheide <bab@draketo.de>
| | |  date: Sat Jan 26 15:39:46 2013 +0100
| | |  summary: finished feature x
| | |
o | |  changeset: 11:e77a94df3bfe
|\| |  parent: 10:8c423bc00eb6
| | |  parent: 9:1dd6209b2a71
| | |  user: Arne Babenhauserheide <bab@draketo.de>
| | |  date: Sat Jan 26 15:39:45 2013 +0100
| | |  summary: merged feature x into default
| | |
o | |  changeset: 10:8c423bc00eb6
| | |  parent: 8:dc61c2731eda
| | |  user: Arne Babenhauserheide <bab@draketo.de>
| | |  date: Sat Jan 26 15:39:44 2013 +0100
| | |  summary: 3
| | |
| o |  changeset: 9:1dd6209b2a71
|/ /   branch: feature-x
| |    user: Arne Babenhauserheide <bab@draketo.de>
| |    date: Sat Jan 26 15:39:43 2013 +0100
| |    summary: x
| |
o |  changeset: 8:dc61c2731eda
|\|  parent: 5:4c57fdadfa26
| |  parent: 7:e8b509ebeaa9
| |  user: Arne Babenhauserheide <bab@draketo.de>
| |  date: Sat Jan 26 15:39:43 2013 +0100
| |  summary: merged stable into default: ready for more development
| |
| o  changeset: 7:e8b509ebeaa9
| |  branch: stable
| |  user: Arne Babenhauserheide <bab@draketo.de>
| |  date: Sat Jan 26 15:39:42 2013 +0100
| |  summary: Added tag v2 for changeset 089fb0af2801
| |
| o  changeset: 6:089fb0af2801
|/|  branch: stable
| |  tag: v2
| |  parent: 4:d987ce9fc7c6
| |  parent: 5:4c57fdadfa26
| |  user: Arne Babenhauserheide <bab@draketo.de>
| |  date: Sat Jan 26 15:39:41 2013 +0100
| |  summary: merge default into stable for release
| |
o |  changeset: 5:4c57fdadfa26
|\|  parent: 3:bc625b0bf090
| |  parent: 4:d987ce9fc7c6
| |  user: Arne Babenhauserheide <bab@draketo.de>
| |  date: Sat Jan 26 15:39:40 2013 +0100
| |  summary: merge stable into default: ready for more development
| |
| o  changeset: 4:d987ce9fc7c6
| |  branch: stable
| |  parent: 1:a8b7e0472c5b
| |  user: Arne Babenhauserheide <bab@draketo.de>
| |  date: Sat Jan 26 15:39:39 2013 +0100
| |  summary: hotfix
| |
o |  changeset: 3:bc625b0bf090
| |  user: Arne Babenhauserheide <bab@draketo.de>
| |  date: Sat Jan 26 15:39:38 2013 +0100
| |  summary: 2
| |
o |  changeset: 2:3e8df435bcb0
|\|  parent: 0:f97ea6e468a1
| |  parent: 1:a8b7e0472c5b
| |  user: Arne Babenhauserheide <bab@draketo.de>
| |  date: Sat Jan 26 15:39:38 2013 +0100
| |  summary: merged stable into default: ready for more development
| |
| o  changeset: 1:a8b7e0472c5b
|/   branch: stable
|    user: Arne Babenhauserheide <bab@draketo.de>
|    date: Sat Jan 26 15:39:36 2013 +0100
|    summary: Added tag v1 for changeset f97ea6e468a1
|
o  changeset: 0:f97ea6e468a1
   tag: v1
   user: Arne Babenhauserheide <bab@draketo.de>
   date: Sat Jan 26 15:39:36 2013 +0100
   summary: 1

## Action by action

Let’s take the log apart to show the actions contributors will do.

### Initialize

Initializing and doing the first commit creates the first changeset:

o  changeset: 0:f97ea6e468a1
   tag: v1
   user: Arne Babenhauserheide <bab@draketo.de>
   date: Sat Jan 26 15:39:36 2013 +0100
   summary: 1

Nothing much to see here.
Commands:

hg init test-branch; cd test-branch
(edit); hg ci -m "message"

### Stable branch and first release

With the stable branch and first release, we add the tagging commit and merge back into default:

o    changeset: 2:3e8df435bcb0
|\   parent: 0:f97ea6e468a1
| |  parent: 1:a8b7e0472c5b
| |  user: Arne Babenhauserheide <bab@draketo.de>
| |  date: Sat Jan 26 15:39:38 2013 +0100
| |  summary: merged stable into default: ready for more development
| |
| o  changeset: 1:a8b7e0472c5b
|/   branch: stable
|    user: Arne Babenhauserheide <bab@draketo.de>
|    date: Sat Jan 26 15:39:36 2013 +0100
|    summary: Added tag v1 for changeset f97ea6e468a1
|
o  changeset: 0:f97ea6e468a1
   tag: v1
   user: Arne Babenhauserheide <bab@draketo.de>
   date: Sat Jan 26 15:39:36 2013 +0100
   summary: 1

Mind the tag field which is now shown in changeset 0, and the branch name for changeset 1.

Commands:

hg branch stable
hg tag tagname
hg up default
hg merge stable
hg ci -m "merged stable into default: ready for more development"

### Further development

Now we just chuck along. The one commit shown here could be an arbitrary number of commits.

o  changeset: 3:bc625b0bf090
|  user: Arne Babenhauserheide <bab@draketo.de>
|  date: Sat Jan 26 15:39:38 2013 +0100
|  summary: 2
|
o    changeset: 2:3e8df435bcb0
|\   parent: 0:f97ea6e468a1
| |  parent: 1:a8b7e0472c5b
| |  user: Arne Babenhauserheide <bab@draketo.de>
| |  date: Sat Jan 26 15:39:38 2013 +0100
| |  summary: merged stable into default: ready for more development

Commands:

(edit)
hg ci -m "message"

### Hotfix

If a hotfix has to be applied to the release out of order, we just update to the stable branch, apply the hotfix and then merge the stable branch into default11. This gives us changesets 4 for the hotfix and 5 for the merge (2 and 3 are shown as reference).
o    changeset: 5:4c57fdadfa26
|\   parent: 3:bc625b0bf090
| |  parent: 4:d987ce9fc7c6
| |  user: Arne Babenhauserheide <bab@draketo.de>
| |  date: Sat Jan 26 15:39:40 2013 +0100
| |  summary: merge stable into default: ready for more development
| |
| o  changeset: 4:d987ce9fc7c6
| |  branch: stable
| |  parent: 1:a8b7e0472c5b
| |  user: Arne Babenhauserheide <bab@draketo.de>
| |  date: Sat Jan 26 15:39:39 2013 +0100
| |  summary: hotfix
| |
o |  changeset: 3:bc625b0bf090
| |  user: Arne Babenhauserheide <bab@draketo.de>
| |  date: Sat Jan 26 15:39:38 2013 +0100
| |  summary: 2
| |
o |  changeset: 2:3e8df435bcb0
|\|  parent: 0:f97ea6e468a1
| |  parent: 1:a8b7e0472c5b
| |  user: Arne Babenhauserheide <bab@draketo.de>
| |  date: Sat Jan 26 15:39:38 2013 +0100
| |  summary: merged stable into default: ready for more development

Commands:

hg up stable
(edit)
hg ci -m "message"
hg up default
hg merge stable
hg ci -m "merge stable into default: ready for more development"

### Regular release

To do a regular release, we just merge the default branch into the stable branch and tag the merge. Then we merge stable back into default. This gives us changesets 6 to 8.12
o    changeset: 8:dc61c2731eda
|\   parent: 5:4c57fdadfa26
| |  parent: 7:e8b509ebeaa9
| |  user: Arne Babenhauserheide <bab@draketo.de>
| |  date: Sat Jan 26 15:39:43 2013 +0100
| |  summary: merged stable into default: ready for more development
| |
| o  changeset: 7:e8b509ebeaa9
| |  branch: stable
| |  user: Arne Babenhauserheide <bab@draketo.de>
| |  date: Sat Jan 26 15:39:42 2013 +0100
| |  summary: Added tag v2 for changeset 089fb0af2801
| |
| o  changeset: 6:089fb0af2801
|/|  branch: stable
| |  tag: v2
| |  parent: 4:d987ce9fc7c6
| |  parent: 5:4c57fdadfa26
| |  user: Arne Babenhauserheide <bab@draketo.de>
| |  date: Sat Jan 26 15:39:41 2013 +0100
| |  summary: merge default into stable for release
| |
o |  changeset: 5:4c57fdadfa26
|\|  parent: 3:bc625b0bf090
| |  parent: 4:d987ce9fc7c6
| |  user: Arne Babenhauserheide <bab@draketo.de>
| |  date: Sat Jan 26 15:39:40 2013 +0100
| |  summary: merge stable into default: ready for more development

Commands:

hg up stable
hg merge default
hg ci -m "merge default into stable for release"
hg tag tagname
hg up default
hg merge stable
hg ci -m "merged stable into default: ready for more development"

### Feature branches

Now we want to do some larger development, so we use a feature branch. The one feature commit shown here (x) could be an arbitrary number of commits, and as long as you stay in your branch, the development of your colleagues will not disturb your own work.

Once the feature is finished, we merge it into default. That gives us changesets 9 to 13.
o    changeset: 13:e7f11bbc756c
|\   parent: 11:e77a94df3bfe
| |  parent: 12:aefc8b3a1df2
| |  user: Arne Babenhauserheide <bab@draketo.de>
| |  date: Sat Jan 26 15:39:47 2013 +0100
| |  summary: merged finished feature x into default
| |
| o  changeset: 12:aefc8b3a1df2
| |  branch: feature-x
| |  parent: 9:1dd6209b2a71
| |  user: Arne Babenhauserheide <bab@draketo.de>
| |  date: Sat Jan 26 15:39:46 2013 +0100
| |  summary: finished feature x
| |
o |  changeset: 11:e77a94df3bfe
|\|  parent: 10:8c423bc00eb6
| |  parent: 9:1dd6209b2a71
| |  user: Arne Babenhauserheide <bab@draketo.de>
| |  date: Sat Jan 26 15:39:45 2013 +0100
| |  summary: merged feature x into default
| |
o |  changeset: 10:8c423bc00eb6
| |  parent: 8:dc61c2731eda
| |  user: Arne Babenhauserheide <bab@draketo.de>
| |  date: Sat Jan 26 15:39:44 2013 +0100
| |  summary: 3
| |
| o  changeset: 9:1dd6209b2a71
|/   branch: feature-x
|    user: Arne Babenhauserheide <bab@draketo.de>
|    date: Sat Jan 26 15:39:43 2013 +0100
|    summary: x
|
o    changeset: 8:dc61c2731eda
|\   parent: 5:4c57fdadfa26
| |  parent: 7:e8b509ebeaa9
| |  user: Arne Babenhauserheide <bab@draketo.de>
| |  date: Sat Jan 26 15:39:43 2013 +0100
| |  summary: merged stable into default: ready for more development

Commands:

• Start the feature

hg branch feature-x
(edit)
hg ci -m "message"

• Do an intermediate commit on default

hg update default
(edit)
hg ci -m "message"

• Continue working on the feature

hg update feature-x
(edit)
hg ci -m "message"

• Merge the feature

hg update default
hg merge feature-x
hg ci -m "merged feature x into default"

• Close and merge a finished feature

hg update feature-x
hg ci --close-branch -m "finished feature x"
hg update default
hg merge feature-x
hg ci -m "merged finished feature x into default"

Note: Closing the feature branch hides that branch in the output of hg branches (except when using --closed), to make the repository state lean and simple while still keeping the feature branch information in history.
It shows your colleagues that they no longer have to keep the feature in mind as soon as they merge the most recent changes from the default branch into their own feature branches.

Note: To make the final merge of your feature into default easier, you can regularly merge the default branch into the feature branch.

Note: We use feature branches to ensure that new clones start at a revision which other developers can directly use. With bookmarks you could get trapped on a feature-head which might not be merged to default for quite some time. For more reasons, see the bookmarks footnote.

The final action is a regular merge to stable to get into a state from which we could safely do a release. Since we already showed how to do that, we are finished here.

## Extensions

This realizes the successful Git branching model13 with Mercurial while maintaining one release at any given time. If you have special needs, this model can easily be extended to fulfill your requirements. Useful extensions include:

• multiple releases - if you need to provide maintenance for multiple releases side-by-side.
• grafted micro-releases - if you need to segment the next big changes into smaller releases while leaving out some potentially risky changes.
• explicit review - if you want to ensure that only reviewed changes can get into a release, while making it possible to leave out some already-reviewed changes from the next releases. Review gets decoupled from releasing.

All these extensions are orthogonal, so you can use them together without getting side effects.

### Multiple maintained releases

To use the branching model with multiple simultaneously maintained releases, you only need to change the hotfix procedure: When applying a hotfix, you go back to the old release with hg update tagname, fix there, add a new tag for the fixed release and then update to the next release. There you merge the new fix-release and do the same for all other releases.
If the most recent release is not the head of the stable branch, you also merge into stable. Then you merge the stable branch into default, as for a normal hotfix.14

With this merge-chain you don’t need special branches for releases, but all changesets are still clearly recorded. This simplification over git is a direct result of having real anonymous branching in Mercurial.

    hg update release-1.0
    (edit)
    hg ci -m "message"
    hg tag release-1.1
    hg update release-2.0
    hg merge release-1.1
    hg ci -m "merged changes from release 1.1"
    hg tag release-2.1
    … and so on

In the diagram this just adds a merge path from the hotfix to the still-maintained releases.

An overview of the branching strategy with maintained releases. Click the image to get the emacs org-mode ditaa-source.

### Graft changes into micro-releases

If you need to test parts of the current development in small chunks, you can graft micro-releases. In that case, just update to stable and merge the first revision from default whose child you do not want, and graft later changes15.

Example for the first time you use micro-releases16: You have changes 1, 2, 3, 4 and 5 on default. First you want to create a release which contains 1 and 4, but not 2, 3 or 5.

    hg update 1
    hg branch stable
    hg graft 4

As usual, tag the release and merge stable back into default:

    hg tag rel-14
    hg update default
    hg merge stable
    hg commit -m "merge stable into default. ready for more development"

Example for the second and subsequent releases: Now you want to release the changes 2 and 5, but you’re still not ready to release 3. So you merge 2 and graft 5.

    hg update stable
    hg merge 2
    hg commit -m "merge all changes until 2 from default"
    hg graft 5

As usual, tag the release and finally merge stable back into default:

    hg tag rel-1245
    hg update default
    hg merge stable
    hg commit -m "merge stable into default. ready for more development"

The history now looks like this17:

    @    merge stable into default. ready for more development (default)
    |\
    | o  Added tag rel-1245 for changeset 4e889731c6ca (stable)
    | |
    | o  5 (stable)
    | |
    | o  merge all changes until 2 from default (stable)
    | |\
    o---+  merge stable into default. ready for more development (default)
    | | |
    | | o  Added tag rel-14 for changeset cc2c95dd3f27 (stable)
    | | |
    | | o  4 (stable)
    | | |
    o | |  5 (default)
    | | |
    o | |  4 (default)
    | | |
    o | |  3 (default)
    |/ /
    o /  2 (default)
    |/
    o  1 (default)
    |
    o  0 (default)

In the diagram this just adds graft commits to stable:

An overview of the branching strategy with grafted micro-releases. Click the image to get the emacs org-mode ditaa-source.

Grafted micro-releases add another layer between development and releases. They can be necessary in cases where testing requires actually deploying a release, as for example in Freenet.

### Explicit review branch

If you want to add a separate review stage, you can use a review branch1819 into which you only merge or graft reviewed changes. The review branch then acts as a staging area for all changes which might go into a release.

To use this extension of the branching model, just create a branch on default called review in which you merge or graft reviewed changes. The first time you do that, you update to the first commit whose children you do not want to include. Then create the review branch with hg branch review and use hg graft REV to pull in all changes you want to include.

On subsequent reviews, you just update to review with hg update review, merge the first revision which has a child you do not want with hg merge REV, and graft additional later changes with hg graft REV.

In both cases you create the release by merging the review branch into stable, or by grafting changes from it as you would for micro-releases.

A special condition when using a review branch is that you always have to merge hotfixes into the review branch, too, because the review branch does not automatically contain all changes from the default branch.
In the diagram this just adds the review branch between default and stable instead of the release merge. Also it adds the hotfix merge to the review branch.

An overview of the branching strategy with a review branch. Click the image to get the emacs org-mode ditaa-source.

## Simple Summary

We now have nice graphs, examples, potential extensions and so on. But since this strategy uses Mercurial instead of git, we don’t actually need all the graphics, descriptions and branch categories of the git version - or of this post. Instead we can boil all of this down to 3 simple rules:

(1) You do all the work on default - except for hotfixes.

(2) On stable you only do hotfixes, merges for release and tagging for release. Only maintainers touch stable.

(3) You can use arbitrary feature branches, as long as you don’t call them default or stable. They always start at default (since you do all the work on default).

These are the rules you already know from the starting summary. That’s it. Happy hacking!

1. If you need to maintain multiple very different releases simultaneously, see or 20 for adaptations

2. default is the default branch. That’s the named branch you use when you don’t explicitly set a branch. Its alias is the empty string, so if no branch is shown in the log (hg log), you’re on the default branch. Thanks to John for asking!

3. If you want to release the changes from default in smaller chunks, you can also graft specific changes into a release-preparation branch and merge that instead of directly merging default into stable. This can be useful to get real-life testing of the distinct parts. For details see the extension Graft changes into micro-releases.

4. Maintainers are those who do releases, while they do a release. At any other time, they follow the same patterns as everyone else. If the release tasks seem a bit long, keep in mind that you only need them when you do the release.
Their goal is to make regular development as easy as possible, so you can tell your non-releasing colleagues “just work on default and everything will be fine”.

5. This model does not use bookmarks, because they don’t offer benefits which outweigh the cost of introducing another concept, and because named branches as feature branches offer the advantage that new programmers never get the code from a feature branch when they clone the repository. For local work and small features, bookmarks can be used quite well, though, and since this model does not define their use, it also does not limit it. Additionally, bookmarks could be useful for feature branches if you use many of them (in that case reusing names is a real danger and not just a rare annoyance, and if you have a recent Mercurial, you can use the @ bookmark to signify the entry point for new clones) or if you use release branches:

“What are people working on right now?” → hg bookmarks

“Which lines of development do we have in the project?” → hg branches

6. Those users who want external verification can restrict themselves to the tagged releases - potentially GPG-signed by trusted 3rd-party reviewers. GPG signatures are treated like hotfixes: reviewers sign on stable (via hg sign without options) and merge into default. Signing directly on stable reduces the possibility of signing the wrong revision.

7. hg pull and hg push to transfer changes, and hg merge when you have multiple heads on one branch, are implied in the actions: you can use any kind of repository structure and synchronization scheme. The practical actions only assume that you synchronize your repositories with the other contributors at some point.

8. Here a hotfix is defined as a fix which must be applied quickly out-of-order, for example to fix a security hole. It prompts a bugfix-release which only contains already stable and tested changes plus the hotfix.

9.
If your project needs a certain release-preparation phase (like translations), then you can simply assign a task branch. Instead of merging to stable, you merge to the task branch, and once the task is done, you merge the task branch to stable.

An example: Assume that you need to update translations before you release anything.

(init: you only need this once) When you want to do the first release which needs to be translated, you update to the revision from which you want to make the release and create the “translation” branch:

    hg update default; hg branch translation; hg commit -m "prepared the translation branch"

All translators now update to the translation branch and do the translations. Then you merge it into stable:

    hg update stable; hg merge translation; hg ci -m "merged translated source for release"

After the release you merge stable back into default as usual.

(regular releases) If you want to start translating the next time, you just merge the revision to release into the translation branch:

    hg update translation; hg merge default; hg commit -m "prepared translation branch"

Afterwards you merge “translation” into stable and proceed as usual.

10.
To run the example and check the output yourself, just copy-paste the following into your shell:

    LC_ALL=C sh -c 'hg init test-branch; cd test-branch; echo 1 > 1; hg ci -Am 1; hg branch stable; hg tag v1 ; hg up default; hg merge stable; hg ci -m "merged stable into default: ready for more development"; echo 2 > 2; hg ci -Am 2; hg up stable; echo 1.1 > 1; hg ci -Am hotfix; hg up default; hg merge stable; hg ci -m "merge stable into default: ready for more development"; hg up stable; hg merge default; hg ci -m "merge default into stable for release" ; hg tag v2; hg up default ; hg merge stable ; hg ci -m "merged stable into default: ready for more development" ; hg branch feature-x; echo x > x ; hg ci -Am x; hg up default; echo 3 > 3; hg ci -Am 3; hg merge feature-x; hg ci -m "merged feature x into default"; hg update feature-x; hg ci --close-branch -m "finished feature x"; hg update default; hg merge feature-x; hg ci -m "merged finished feature x into default"; hg up stable ; hg merge default; hg ci -m "merged default into stable for release"; hg up default; hg merge stable ; hg ci -m "merged stable into default: ready for more development"; hg log -G'

11. We merge the hotfix into default to define the relevance of the fix for general development. If the hotfix also affects the current line of development, we keep its changes in the merge. If the current line of development does not need the hotfix, we discard its changes in the merge. We do this to ensure that it is clear in future how to treat the hotfix when merging new changes: let the merge record the decision.

12. We can also merge to stable regularly as soon as some set of changes is considered stable, but without making an actual release (== tagging). That way we always have a stable branch which people can test without having to create releases right away. The releases are those changesets on the stable branch which carry a tag.

13.
If you look at the Git branching model which inspired this Mercurial branching model, you’ll note that its diagram is a lot more complex than the diagram of this Mercurial version. The reason for that is the more expressive history model of Mercurial.

In short: The git version has 5 types of branches: feature, develop, release, hotfix and master (for tagging). With Mercurial you can reduce them to 3: default, stable and feature branches:

• Tags are simple in-history objects, so we need no special branch for them: a tag signifies a release (down to 4 branch types - and no more duplication of information, since in the git model a release is shown by a tag and a merge to master).

• Hotfixes are simple commits on stable followed by a merge to default, so we also need no branch for them (down to 3 branch types). And if we only maintain one release at a time, we only need one branch for them: stable (down from a branch type to a single branch).

• Feature branches are not required for clean separation, since Mercurial can easily cope with multiple heads in a branch, so developers only have to worry about them if they want to use them (down to 2 mandatory branches).

• And since the default branch is the branch to which you update automatically when you clone a repository, new developers don’t have to worry about branches at all.

So we get down from 5 mandatory branches (2 of them are categories containing multiple branches) to 2 simple branches without losing functionality. And new developers only need to know two things about our branching model to contribute: “If you use feature branches, don’t call them default or stable. And don’t touch stable.”

14. Merging old releases into new ones sounds like a lot of work. If you get that feeling, have a look at how many releases you really maintain right now. In my Gentoo tree most programs actually have only one single release, so using actual release branches would incur an additional burden without adding real value.
You can also look at the rule of thumb on whether to choose feature branches instead.

15. If you want to make sure that every changeset on stable is production-ready, you can also start a new release branch on stable, then merge the first revision whose child you do not want into that branch, and graft additional changes. Then close the branch and merge it into stable. You can achieve the same with much lower overhead (unneeded complexity) by changing the requirement to “every tagged revision on stable is production-ready”. To only see tagged revisions on stable, just use hg log -r "branch(stable) and tag()". This also works for incoming and outgoing, so you can use it for triggering a build system.

16. To test this workflow yourself, just create the test repository with

    hg init 12345; cd 12345; for i in {0..5}; do echo $i > $i; hg ci -Am $i; done

17. The short graphlog for the grafted micro-releases was created via hg glog --template "{desc} ({branch})"

18. The review branch is a special preparation branch, because it can get discontinuous changes if maintainers decide to graft some changes which have ancestors they did not review yet.

19. We use one single review branch which gets reused at every review to ensure that there are no changes in stable which we did not have in the review. As an alternative, you could use one branch per review. In that case, ensure that you start the review-* branches from stable and not from default. Then merge and graft the changes from default which you want to review for inclusion in your next release.

20. If you want to adapt the model to multiple very distinct releases, simply add multiple release branches (i.e. release-x). Then hg graft the changes you want to use from default or stable into the releases, and merge the releases into stable to ensure that the relationship of their changes to current changes is clear, recorded and will be applied automatically by Mercurial in future merges21.

If you use multiple tagged releases, you need to merge the releases into each other in order - starting from the oldest and finishing by merging the most recent one into stable - to record the same information as with release branches.

Additionally, it is considered impolite to other developers to keep multiple heads in one branch, because with multiple heads other developers do not know the canonical tip of the branch which they should use to make their changes - or in the case of stable, which head they should merge to for preparing the next release. That’s why you are likely better off creating a branch per release if you want to maintain many very different releases for a long time.

If you only use tags on stable for releases, you need one merge per maintained release to create a bugfix version of one old release. By adding release branches, you reduce that overhead to one single merge to stable per affected release by stating clearly that changes to old versions should never affect new versions, except if those changes are explicitly merged into the new versions. If the bugfix affects all releases, release branches require twice as many actions as tagged releases, though: You need to graft the bugfix into every release and merge the release into stable.22

21. If for example you want to ignore that change to an old release for new releases, you simply merge the old release into stable and use hg revert --all -r stable before committing the merge.

22. A rule of thumb for deciding between tagged releases and release branches:

• If you only maintain a few releases at the same time, use tagged releases.

• If you expect that most bugfixes will apply to all releases, starting with some old release, just use tagged releases.

• If bugfixes will only apply to one release and the current development, use tagged releases and merge hotfixes only to stable.

• If most bugfixes will only apply to one release and not to the current development, use release branches.

# A short introduction to Mercurial with TortoiseHG (GNU/Linux and Windows)

After installing TortoiseHG, you can download a repository to your computer by right-clicking in a folder and selecting "TortoiseHG" and then "Clone" in the menu (currently you still need Windows for that - all other dialogs can be invoked in GNU/Linux on the commandline via "hgtk").

Create Clone, GNU/Linux:

In the dialog you just enter the URL of the repository, for example:

http://www.bitbucket.org/ArneBab/md-esw-2009

(that's also the address of the repository on the internet - just try clicking the link.)

When you log in to bitbucket.org you will find a clone-address directly on the site. You can also use that clone address to upload changes (it contains your login-name, and I can give you "push" access on that site).

## Workflow with TortoiseHG

This gives you two basic abilities:

• Save and view changes locally, and
• synchronize changes with others.

(I assume that part of what I say is redundant, but I'd rather write a bit too much than omit a crucial bit)

To save changes, you can simply select "HG Commit" in the right-click menu. If some of your files aren't known to HG yet (the box before the file isn't ticked), you have to add them (tick the box) to be able to commit them.

To go back to earlier changes, you can use "Checkout Revision" in the "TortoiseHG" menu. In that dialog you can then select the revision you want to see and use the icon on the upper left to get all files to that revision.

You can synchronize by right-clicking in the folder and selecting "Synchronize" in the "TortoiseHG" menu (inside the right-click menu). In the dialog that opens you can "push" (upload changes - arrow up with the bar above it), "pull" (download changes to your computer - arrow down with the bar below), and check what you would pull or push (arrows without bars). I think that using the dialog will soon become second nature for you, too :)

Have fun with TortoiseHG! :) - Arne

PS: There's also a longer intro to TortoiseHG and an overview of DVCS.

PPS: md-esw-2009 is a repository in which Baddok and I planned a dual-gm roleplaying session Mechanical Dream.

PPPS: There's also a german version of this article on my german pages.

# Basic usecases for DVCS: Workflow Failures

Update (2013-04-18): In #mercurial @ irc.freenode.net there were discussions yesterday about improving the help output shown when you do not have your username set up yet.

## 1 Intro

I recently tried contributing to a new project again, and I was quite surprised by the hurdles that can be in your way when you have not set up your environment yet.

So I decided to put together a small test for the basic workflow: Cloning a project, doing and testing a change and pushing it back.

I did that for Git and Mercurial, because both break at different points.

I’ll express the basic usecase in Subversion:

• svn checkout [project]
• (hack, test, repeat)
• (request commit rights)
• svn commit -m "added X"

You can also replace the request for commit rights with creating a patch and sending it to a mailing list. But let’s take the easiest case: a new contributor who is directly welcomed into the project as a trusted committer.

A slightly more advanced workflow adds testing in a clean tree. In Subversion it looks almost like the simple commit:

## 2 Git

Let’s start with Linus’ DVCS. And since we’re using a DVCS, let’s also try it out in real life.

### 2.1 Setup the test

LC_ALL=C
LANG=C
PS1="$"
rm -rf /tmp/gitflow > /dev/null
mkdir -p /tmp/gitflow > /dev/null
cd /tmp/gitflow > /dev/null
# init the repo
git init orig > /dev/null
cd orig > /dev/null
echo 1 > 1
# add a commit
git add 1 > /dev/null
git config user.name upstream > /dev/null
git config user.email up@stream > /dev/null
git commit -m 1 > /dev/null
# checkout another branch but master. YES, YOU SHOULD DO THAT on the shared repo. We’ll see later, why.
git checkout -b never-pull-this-temporary-useless-branch master 2> /dev/null
cd .. > /dev/null
echo # purely cosmetic and implementation detail: this adds a new line to the output
ls

orig

git --version

git version 1.8.1.5

### 2.2 Simplest case

#### 2.2.1 Get the repo

First I get the repo:

git clone orig mine
echo
ls

Cloning into 'mine'...
done.

mine  orig

#### 2.2.2 Hack a bit

cd mine
echo 2 > 1
git commit -m "hack"

# On branch master
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)

modified:   1
no changes added to commit (use "git add" and/or "git commit -a")


ARGL… but let’s paste the commands into the shell. I do not use --global, since I do not want to shoot my test environment here.

git config user.name "contributor"
git config user.email "con@tribut.or"


and try again

git commit -m "hack"


On branch master
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)

modified:   1
no changes added to commit (use "git add" and/or "git commit -a")


ARGL… well, paste it in again…

git add 1
git commit -m "hack"


[master aba911a] hack
1 file changed, 1 insertion(+), 1 deletion(-)


Finally I managed to commit my file. Now, let’s push it back.

#### 2.2.3 Push it back

git push

warning: push.default is unset; its implicit value is changing in
Git 2.0 from 'matching' to 'simple'. To squelch this message
and maintain the current behavior after the default changes, use:

git config --global push.default matching

To squelch this message and adopt the new behavior now, use:

git config --global push.default simple

See 'git help config' and search for 'push.default' for further information.
(the 'simple' mode was introduced in Git 1.7.11. Use the similar mode
'current' instead of 'simple' if you sometimes use older versions of Git)

Counting objects: 5, done.
(1/3)
Writing objects:  66% (2/3)
Writing objects: 100% (3/3)
Writing objects: 100% (3/3), 222 bytes, done.
Total 3 (delta 0), reused 0 (delta 0)
To /tmp/gitflow/orig
master


HA! It’s in.

#### 2.2.4 Overview

In short the required commands look like this:

• git clone orig mine
• cd mine; (hack)
• git config user.name "contributor"
• git config user.email "con@tribut.or"
• git commit -m "hack"
• (request permission to push)
• git push

compare Subversion:

Now let’s see what that initial setup with setting a non-master branch was about…

### 2.3 With testing

#### 2.3.1 Test something

I want to test a change and ensure that it works with a fresh clone. So I just clone my local repo and commit there.

cd ..
git clone mine test
cd test
# setup the user locally again. Normally you do not need that again, since you’d use --global.
git config user.name "contributor"
git config user.email "con@tribut.or"
# hack and commit
echo test > 1
echo # cosmetic
git commit -m "change to test" >/dev/null
# (run the tests)


#### 2.3.2 Push it back

git push

warning: push.default is unset; its implicit value is changing in
Git 2.0 from 'matching' to 'simple'. To squelch this message
and maintain the current behavior after the default changes, use:

git config --global push.default matching

To squelch this message and adopt the new behavior now, use:

git config --global push.default simple

See 'git help config' and search for 'push.default' for further information.
(the 'simple' mode was introduced in Git 1.7.11. Use the similar mode
'current' instead of 'simple' if you sometimes use older versions of Git)

Counting objects: 5, done.
(1/3)
Writing objects:  66% (2/3)
Writing objects: 100% (3/3)
Writing objects: 100% (3/3), 234 bytes, done.
Total 3 (delta 0), reused 0 (delta 0)
remote: error: refusing to update checked out branch: refs/heads/master
remote: error: By default, updating the current branch in a non-bare repository
remote: error: is denied, because it will make the index and work tree inconsistent
remote: error: with what you pushed, and will require 'git reset --hard' to match
remote: error: the work tree to HEAD.
remote: error:
remote: error: You can set 'receive.denyCurrentBranch' configuration variable to
remote: error: 'ignore' or 'warn' in the remote repository to allow pushing into
remote: error: its current branch; however, this is not recommended unless you
remote: error: arranged to update its work tree to match what you pushed in some
remote: error: other way.
remote: error:
remote: error: To squelch this message and still keep the default behaviour, set
remote: error: 'receive.denyCurrentBranch' configuration variable to 'refuse'.
To /tmp/gitflow/mine
master (branch is currently checked out)
error: failed to push some refs to '/tmp/gitflow/mine'


Uh… what? If I were a real first time user, at this point I would just send a patch…

The simple local test clone does not work: You actually have to also checkout a different branch if you want to be able to push back (needless duplication of information - and effort). And it actually breaks this simple workflow.

(experienced git users will now tell me that you should always checkout a work branch. But that would mean that I would have to add the additional branching step to the simplest case without testing repo, too, raising the bar for contribution even higher)

git checkout -b testing master
git push ../mine testing

Switched to a new branch 'testing'
Counting objects: 5, done.
(1/3)

Writing objects:  66% (2/3)
Writing objects: 100% (3/3)
Writing objects: 100% (3/3), 234 bytes, done.
Total 3 (delta 0), reused 0 (delta 0)
To ../mine
testing

Since I only pushed to mine, I now have to go there, merge and push.

cd ../mine
git merge testing
git push

Updating aba911a..820dea8
Fast-forward
1 | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
warning: push.default is unset; its implicit value is changing in
Git 2.0 from 'matching' to 'simple'. To squelch this message
and maintain the current behavior after the default changes, use:

git config --global push.default matching

To squelch this message and adopt the new behavior now, use:

git config --global push.default simple

See 'git help config' and search for 'push.default' for further information.
(the 'simple' mode was introduced in Git 1.7.11. Use the similar mode
'current' instead of 'simple' if you sometimes use older versions of Git)

Counting objects: 5, done.
(1/3)
Writing objects:  66% (2/3)
Writing objects: 100% (3/3)
Writing objects: 100% (3/3), 234 bytes, done.
Total 3 (delta 0), reused 0 (delta 0)
To /tmp/gitflow/orig
master


#### 2.3.3 Overview

In short the required commands for testing look like this:

• git clone mine test
• cd test; (hack)
• git checkout -b testing master
• git commit -m "hack"
• git push ../mine testing
• cd ../mine
• git merge testing
• git push

Compare to Subversion

### 2.4 Wrapup

The git workflows broke at several places:

Simplest:

• Set the username (minor: it’s just pasting shell commands)
• Add every change (==staging. Minor: paste shell commands again - or use commit -a)

• Cannot push to the local clone (major: it spews about 20 lines of error messages which do not tell me how to actually get my changes into the local clone)
• Have to use a temporary branch in a local clone to be able to push back (annoyance: makes using clean local clones really annoying).

## 3 Mercurial

Now let’s try the same.

### 3.1 Setup the test

LC_ALL=C
LANG=C
PS1="$"
rm -rf /tmp/hgflow > /dev/null
mkdir -p /tmp/hgflow > /dev/null
cd /tmp/hgflow > /dev/null
# init the repo
hg init orig > /dev/null
cd orig > /dev/null
echo 1 > 1
# add a commit
hg add 1 > /dev/null
hg commit -u upstream -m 1 > /dev/null
cd .. > /dev/null
echo # purely cosmetic and implementation detail: this adds a new line to the output
ls

orig

hg --version

Mercurial Distributed SCM (version 2.5.2)
(see http://mercurial.selenic.com for more information)

Copyright (C) 2005-2012 Matt Mackall and others
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

### 3.2 Simplest case

#### 3.2.1 Get the repo

hg clone orig mine
echo
ls

updating to branch default
1 files updated, 0 files merged, 0 files removed, 0 files unresolved


#### 3.3.2 Push it back

hg push

pushing to /tmp/hgflow/mine
searching for changes
added 1 changesets with 1 changes to 1 files


It’s in mine now, but I still need to push it from there.

cd ../mine
hg push


pushing to /tmp/hgflow/orig
searching for changes
added 1 changesets with 1 changes to 1 files


Done.

If I had worked on mine in the meantime, I would have to merge there, too - just as with git with the exception that I would not have to give a branch name. But since we’re in the simplest case, we don’t need to do that.

#### 3.3.3 Overview

In short the required commands for testing look like this:

• hg clone mine test
• cd test; (hack)
• hg commit -m "hack"
• hg push ../mine
• cd ../mine
• hg push

Compare to Subversion

and to git

### 3.4 Wrapup

The Mercurial workflow broke only ONCE, but there it broke HARD: To commit you actually have to READ THE HELP PAGE on config to find out how to set your username.

So, to wrap it up: ARE YOU SERIOUS?

That’s a really nice workflow, disturbed by a devastating user experience for just one of the commands.

This is a place where hg should learn from git: The initial setup must be possible from the commandline, without reading a help page and without changing to an editor and then back into the commandline.
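For comparison, everything hg actually needs is two lines of config. The snippet below is my own sketch - the identity and the temp-file path are invented examples, and the real file would be ~/.hgrc:

```shell
# Write the [ui] section that Mercurial reads the username from.
HGRC_EXAMPLE="$(mktemp)"     # stands in for ~/.hgrc
printf '[ui]\nusername = Some Contributor <contributor@example.org>\n' > "$HGRC_EXAMPLE"
grep '^username' "$HGRC_EXAMPLE"
# -> username = Some Contributor <contributor@example.org>
```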

## 4 Summary

• Git broke at several places, and in one place it broke hard: Pushing between local clones is a huge hassle, even though that should be a strong point of DVCSs.
• Mercurial broke only once, but there it broke hard: Setting the username actually requires reading help output and hand-editing a text file.

Also the workflows for a user who gets permission to push always required some additional steps compared to Subversion.

One of the additional steps cannot be avoided without losing offline-commits (which are a major strength of DVCS), because those make it necessary to split svn commit into commit and push: That separates storing changes from sharing them.

But git actually requires additional steps which are only necessary due to implementation details of its storage layer: Pushing to a repo with the same branch checked out is not allowed, so you have to create an additional branch in your local clone and merge it in the other repository, even if all your changes are siblings of the changes in the other repository; and it requires either a flag on every commit command or explicit adding of changes. That does not amount to just the one unavoidable additional command, but to three further commands, so the number of commands to get code, hack on it and share it increases from 5 to 9. And if you work in a team where people trust you to write good code, that does not actually reduce the required effort to share your changes.

On the other hand, both Mercurial and Git allow you to work offline, and you can do as many testing steps in between as you like, without needing to get the changes from the server every time (because you can simply clone a local repo for that).

### 4.1 Visually

#### 4.1.3 Git

Date: 2013-04-17T20:39+0200

Org version 7.9.2 with Emacs version 24


# Creating nice logs with revsets in Mercurial

In the Mercurial list, Stanimir Stamenkov asked how to get rid of intermediate merges in the log, to simplify reading the history (and to avoid worrying about missing some of the details).

Update: Since Mercurial 2.4 you can simply use
hg log -Gr "branchpoint()"

I did some tests for that and I think the nicest representation I found is this:

hg log -Gr "(all() - merge()) or head()"


## The result

It shows that the revisions converged again in the end - and it shows the actual state of the development.

$ hg log -Gr "(all() - merge()) or head()"
@    Änderung:        7:52fe4a8ec3cc
|\   Marke:           tip
| |  Vorgänger:       6:7d3026216270
| |  Vorgänger:       5:848c390645ac
| |  Nutzer:          Arne Babenhauserheide <bab@draketo.de>
| |  Datum:           Tue Aug 14 15:09:54 2012 +0200
| |  Zusammenfassung: merge
| |
| \
| |\
| | o  Änderung:        3:55ba56aa8299
| | |  Vorgänger:       0:385d95ab1fea
| | |  Nutzer:          Arne Babenhauserheide <bab@draketo.de>
| | |  Datum:           Tue Aug 14 15:09:40 2012 +0200
| | |  Zusammenfassung: 4
| | |
| o |  Änderung:        2:b500d0a90d40
| |/   Vorgänger:       0:385d95ab1fea
| |    Nutzer:          Arne Babenhauserheide <bab@draketo.de>
| |    Datum:           Tue Aug 14 15:09:39 2012 +0200
| |    Zusammenfassung: 3
| |
o |    Änderung:        1:8cc66166edc9
|/     Nutzer:          Arne Babenhauserheide <bab@draketo.de>
|      Datum:           Tue Aug 14 15:09:38 2012 +0200
|      Zusammenfassung: 2
|
o  Änderung:        0:385d95ab1fea
   Nutzer:          Arne Babenhauserheide <bab@draketo.de>
   Datum:           Tue Aug 14 15:09:38 2012 +0200
   Zusammenfassung: 1

## Even shorter, but not quite correct

The shortest representation is without the heads, though. It does not represent the current state of development if the last commit was a merge or if some branches were not merged. Otherwise it is equivalent.

$ hg log -Gr "(all() - merge())"

o  Änderung:        3:55ba56aa8299
|  Vorgänger:       0:385d95ab1fea
|  Nutzer:          Arne Babenhauserheide <bab@draketo.de>
|  Datum:           Tue Aug 14 15:09:40 2012 +0200
|  Zusammenfassung: 4
|
| o  Änderung:        2:b500d0a90d40
|/   Vorgänger:       0:385d95ab1fea
|    Nutzer:          Arne Babenhauserheide <bab@draketo.de>
|    Datum:           Tue Aug 14 15:09:39 2012 +0200
|    Zusammenfassung: 3
|
| o  Änderung:        1:8cc66166edc9
|/   Nutzer:          Arne Babenhauserheide <bab@draketo.de>
|    Datum:           Tue Aug 14 15:09:38 2012 +0200
|    Zusammenfassung: 2
|
o  Änderung:        0:385d95ab1fea
   Nutzer:          Arne Babenhauserheide <bab@draketo.de>
   Datum:           Tue Aug 14 15:09:38 2012 +0200
   Zusammenfassung: 1


## The basic log (for reference)

The vanilla log looks like this:

#### Add files and track them

$ cd project
$ (add files)
$ hg add
$ hg commit
(enter the commit message)

Note:

You can also go into an existing directory with files and init the repository there.

$ cd project
$ hg init

Alternatively you can add only specific files instead of all files in the directory. Mercurial will then track only these files and won't know about the others. The following tells Mercurial to track all files whose names begin with "file0", as well as file10, file11 and file12.

$ hg add file0* file10 file11 file12

#### Save changes

$ (do some changes)

See which files changed, which have been added or removed, and which aren't tracked yet

$ hg status

See the exact changes

$ hg diff

Commit the changes.

$ hg commit

Now an editor pops up and asks you for a commit message. Upon saving and closing the editor, your changes have been stored by Mercurial.

Note:

You can also supply the commit message directly via hg commit -m 'MESSAGE'.

#### Move and copy files

When you copy or move files, you should tell Mercurial to do the copy or move for you, so it can track the relationship between the files. Remember to commit after moving or copying. Of the basic commands, only commit creates a new revision.

$ hg cp original copy
$ hg commit
(enter the commit message)
$ hg mv original target
$ hg commit
(enter the commit message)

Now you have two files, "copy" and "target", and Mercurial knows how they are related.

Note:

Should you forget to do the explicit copy or move, you can still tell Mercurial to detect the changes via hg addremove --similarity 100. Just use hg help addremove for details.

#### Check your history

$ hg log

This prints a list of changesets along with their date, the user who committed them (you) and their commit message.

To see a certain revision, you can use the -r switch (--revision). To also see the diff of the displayed revisions, there's the -p switch (--patch).

$ hg log -p -r 3

## Lone developer with nonlinear history

### Use case

The second workflow is still very easy: You're a lone developer and you want to use Mercurial to keep track of your own changes. It works just like the log keeping workflow, with the difference that you go back to earlier changes at times.

To start a new project, you initialize a repository, add your files and commit whenever you finish a part of your work. Also you check your history from time to time, to see how you progressed.

### Workflow

#### Basics from log keeping

Init your project, add files, see changes and commit them.

$ hg init project
$ cd project
$ (add files)
$ hg add # tell Mercurial to track all files
$ (do some changes)
$ hg diff # see changes
$ hg commit # save changes
$ hg cp # copy files or folders
$ hg mv # move files or folders
$ hg log # see history

#### Seeing an earlier revision

Different from the log keeping workflow, you'll want to go back in history at times and do some changes directly there, for example because an earlier change introduced a bug and you want to fix it where it occurred.

To look at a previous version of your code, you can use update. Let's assume that you want to see revision 3.

$ hg update 3

Now your code is back at revision 3, the fourth commit (Mercurial starts counting at 0).
To check if you're really at that revision, you can use identify -n.

$ hg identify -n

Note:

identify without options gives you the short form of a unique revision ID. That ID is what Mercurial uses internally. If you tell someone about the version you updated to, you should use that ID, since the numbers can be different for other people. If you want to know the reasons behind that, please read up on Mercurial's basic concepts.

When you're at the most recent revision, hg identify -n will return "-1". To update to the most recent revision, you can use "tip" as the revision name.

$ hg update tip

Note:

Instead of hg update you can also use the shorthand hg up. Similarly you can abbreviate hg commit to hg ci.

Note:

To get a revision devoid of files, just update to "null" via hg update null. That's the revision before any files were added.

#### Fixing errors in earlier revisions

When you find a bug in some earlier revision you have two options: either you can fix it in the current code, or you can go back in history and fix the code exactly where you did it, which creates a cleaner history.

To do it the cleaner way, you first update to the old revision, fix the bug and commit it. Afterwards you merge this revision and commit the merge. Don't worry, though: merging in Mercurial is fast and painless, as you'll see in an instant.

Let's assume the bug was introduced in revision 3.

$ hg update 3
$ (fix the bug)
$ hg commit

Now the fix is already stored in history. We just need to merge it with the current version of your code.

$ hg merge

If there are conflicts, use hg resolve - that's also what merge tells you to do in case of conflicts.

First list the files with conflicts

$ hg resolve --list

Then resolve them one by one. resolve attempts the merge again.

$ hg resolve conflicting_file
(fix it by hand, if necessary)

Mark the fixed file as resolved

$ hg resolve --mark conflicting_file

Commit the merge as soon as you have resolved all conflicts. This step is also necessary when there were no conflicts!

$ hg commit

At this point, your fix is merged with all your other work, and you can just go on coding. Additionally the history shows clearly where you fixed the bug, so you'll always be able to check where the bug was.

Note:

Most merges will just work. You only need resolve when merge complains.

So now you can initialize repositories, save changes, update to previous changes and develop in a nonlinear history by committing in earlier changesets and merging the changes into the current code.

Note:

If you fix a bug in an earlier revision, and some later revision copied or moved that file, the fix will be propagated to the target file(s) when you merge. This is the main reason why you should always use hg cp and hg mv.

## Separate features

### Use Case

At times you'll be working on several features in parallel. If you want to avoid mixing incomplete code versions, you can create clones of your local repository and work on each feature in its own code directory.

After finishing your feature you then pull it back into your main directory and merge the changes.

### Workflow

#### Work in different clones

First create the feature clone and do some changes

$ hg clone project feature1
$ cd feature1
$ (do some changes and commits)

Now check what will come in when you pull from feature1, just like you can use diff before committing. The respective command for pulling is incoming.

$ cd ../project
$ hg incoming ../feature1

Note:

If you want to see the diffs, you can use hg incoming --patch, just as you can do with hg log --patch for the changes in the repository.

If you like the changes, you pull them into the project.

$ hg pull ../feature1

Now you have the history of feature1 inside your project, but the changes aren't yet visible. Instead they are only stored inside a ".hg" directory of the project (more information on the store).

Note:

From now on we'll use the name "repository" for a directory which has a .hg directory with Mercurial history.

If you didn't do any changes in the project while you were working on feature1, you can just update to tip (hg update tip), but it is more likely that you'll have done some other changes in between. In that case, it's time for merging.

Merge feature1 into the project code

$ hg merge

If there are conflicts, use hg resolve - that's also what merge tells you to do in case of conflicts. After you merge, you have to commit explicitly to make your merge final.

$ hg commit
(enter commit message, for example "merged feature1")

You can create an arbitrary number of clones and also carry them around on USB sticks. Also you can use them to synchronize your files at home and at work, or between your desktop and your laptop.

Note:

You also have to commit after a merge when there are no conflicts, because merging creates new history and you might want to attach a specific message to the merge (like "merge feature1").

#### Rollback mistakes

Now you can work on different features in parallel, but from time to time a bad commit might sneak in. Naturally you could then just go back one revision and merge the stray error, keeping all mistakes out of the merged revision. However, there's an easier way, if you realize your error before you do another commit or pull: rollback.

Rolling back means undoing the last operation which added something to your history.

Imagine you just realized that you did a bad commit - for example you didn't see a spelling error in a label. To fix it you would use

hg rollback

And then redo the commit

hg commit -m "message"

If you can use the command history of your shell and you added the previous message via commit -m "message", redoing the commit just means two presses of the arrow-key "up" and one press of "enter".

But beware: a rollback itself can't be undone. If you roll back and then forget to commit, you can't just say "give me my old commit back". You have to create a new commit.

Note:

Rollback is possible, because Mercurial uses transactions when recording changes, and you can use the transaction record to undo the last transaction. This means that you can also use rollback to undo your last pull, if you didn't yet commit anything new.

## Sharing changes

### Use Case

Now we go one step further: You are no longer alone, and you want to share your changes with others and include their changes.

The basic requirement for that is that you have to be able to see the changes of others.

Mercurial allows you to do that very easily by including a simple webserver from which you can pull changes just as you can pull changes from local clones.

Note:

There are a few other ways to share changes, though. Instead of using the builtin webserver, you can also send the changes by email or set up a shared repository, to which you push changes instead of pulling them. We'll cover one of those later.

### Workflow

#### Using the builtin webserver

This is the easiest way to quickly share changes.

First the one who wants to share his changes creates the webserver

$ hg serve

Now all others can point their browsers to his IP address (for example 192.168.178.100) at port 8000. They will then see all his history there and can decide if they want to pull his changes.

$ firefox http://192.168.178.100:8000

If they decide to include the changes, they just pull from the same URL

$ hg pull http://192.168.178.100:8000

At this point you all can work as if you had pulled from a local repository. All the data is now in your individual repositories and you can merge the changes and work with them without needing any connection to the served repository.

#### Sending changes by email

Often you won't have direct access to the repository of someone else, be it because he's behind a restrictive firewall, or because you live in different timezones. You might also want to keep your changes confidential and prefer internal email (if you want additional protection, you can also encrypt the emails, for example with GnuPG). In that case, you can easily export your changes as patches and send them by email. Another reason to send them by email can be that your policy requires manual review of the changes when the other developers are used to reading diffs in emails. I'm sure you can think of more reasons.

Sending the changes via email is pretty straightforward with Mercurial. You just export your changes and attach (or copy and paste) them in your email. Your colleagues can then just import them.

First check which changes you want to export.

$ cd project
$ hg log

We assume that you want to export changesets 3 and 4.

$ hg export 3 > change3.diff
$ hg export 4 > change4.diff

Now attach them to an email and your colleagues can just run import on both diffs to get your full changes, including your user information.

To be careful, they first clone their repository to have an integration directory as a sandbox.

$ hg clone project integration
$ cd integration
$ hg import change3.diff
$ hg import change4.diff

That's it. They can now test your changes in feature clones. If they accept them, they pull the changes into the main repository.

$ cd ../project
$ hg pull ../integration

Note:

The patchbomb extension automates the email-sending, but you don't need it for this workflow.

Note:

You can also send around bundles, which are snippets of your actual history. Just create them via

$ hg bundle --base FIRST_REVISION_TO_BUNDLE changes.bundle

Others can then get your changes by simply pulling them, as if your bundle were an actual repository

$ hg pull path/to/changes.bundle

#### Using a shared repository

Sending changes by email might be the easiest way to reach people when you aren't yet part of the regular development team, but it creates additional workload: You have to bundle the changes, send mails and then import the bundles manually. Luckily there's an easier way which works quite well: the shared push repository.

Till now we transferred all changes either via email or via pull. Now we use another option: pushing. As the name suggests, it's just the opposite of pulling: You push your changes into another repository. But to make use of it, we first need something we can push to.

By default hg serve doesn't allow pushing, since that would be a major security hole. You can allow pushing in the server, but that's no solution when you live in different timezones, so we'll go with another approach here: using a shared repository, either on an existing shared server or on a service like BitBucket. Doing so has a bit higher starting cost and takes a bit longer to explain, but it's well worth the effort spent.

If you want to use an existing shared server, you can use serve there and allow pushing. Also there are some other nice ways to allow pushing to a Mercurial repository, including simple access via SSH. Otherwise you first need to set up a BitBucket account. Just sign up at BitBucket.

$ firefox http://bitbucket.org

After signing up (and logging in), hover your mouse over "Repositories". There click the item at the bottom of the opening dialog which says "Create new". Give it a name and a description. If you want to keep it hidden from the public, select "private".

Now your repository is created and you see instructions for pushing to it. For that you'll use a command similar to the following (just with a different URL)

$ hg push https://bitbucket.org/ArneBab/hello/

(Replace the URL with the URL of your created repository. If your username is "Foo" and your repository is named "bar", the URL will be https://bitbucket.org/Foo/bar/)

Mercurial will ask for your BitBucket name and password, then push your code. Voilà, your code is online.

Note:

You can also use SSH for pushing to BitBucket.

Now it's time to tell all your colleagues to sign up at BitBucket, too. After that you can click the "Admin" tab of your created repository and add the usernames of your colleagues on the right side under "Permission: Writers". Now they are allowed to push code to the repository. (If you chose to make the repository private, you'll need to add them to "Permission: Readers", too.)

If one of you now wants to publish changes, he'll simply push them to the repository, and all others get them by pulling.

Publish your changes

$ hg push https://bitbucket.org/ArneBab/hello/

Pull others' changes into your local repository

$ hg pull https://bitbucket.org/ArneBab/hello/

People who join you in development can also just clone this repository, as if one of you were using hg serve.

$ hg clone https://bitbucket.org/ArneBab/hello/ hello

That local repository will automatically be configured to pull/push from/to the online repository, so new contributors can just use hg push and hg pull without a URL.

Note:

To make this workflow more scalable, each one of you can have his own BitBucket repository and you can simply pull from the others' repositories. That way you can easily establish workflows in which certain people act as integrators and finally push checked code to a shared pull repository from which all others pull.

Note:

You can also use this workflow with a shared server instead of BitBucket, either via SSH or via a shared directory. An example of an SSH URL with Mercurial is ssh://user@example.com/path/to/repo. When using a shared directory, you just push as if the repository in the shared directory were on your local drive.

## Summary

Now let's take a step back and look where we are.

With the commands you already know, a bit of reading of hg help <command> and some evil script-fu, you can already do almost everything you'll ever need to do when working with source code history. So from now on almost everything is convenience, and that's a good thing.

First, this is good because it means that you can now use most of the concepts which are utilized in more complex workflows.

Second, it aids you because convenience lets you focus on your task instead of focusing on your tool. It helps you concentrate on the coding itself. Still, you can always go back to the basics if you want to.

Here is a short summary of what you can do, which can also act as a quick check whether you still remember the meaning of the commands.

### create a project

$ hg init project
$ cd project
$ (add some files)
$ hg add
$ hg commit
(enter the commit message)

### do nonlinear development

$ (do some changes)
$ hg commit
(enter the commit message)
$ hg update 0
$ (do some changes)
$ hg commit
(enter the commit message)
$ hg merge
$ (optionally hg resolve)
$ hg commit
(enter the commit message)

### use feature clones

$ cd ..
$ hg clone project feature1
$ cd feature1
$ (do some changes)
$ hg commit
(enter the commit message)
$ cd ../project
$ hg pull ../feature1

### share your repository via the integrated webserver

$ hg serve &
$ cd ..
$ hg clone http://127.0.0.1:8000 project-clone

### export changes to files

$ cd project-clone
$ (do some changes)
$ hg commit
(enter the commit message)
$ hg pull http://127.0.0.1:8000

### Use shared repositories on BitBucket

$ (setup bitbucket repo)
$ hg push https://bitbucket.org/USER/REPO
(enter name and password in the prompt)
$ hg pull https://bitbucket.org/USER/REPO

Let's move on towards useful features and a bit more advanced workflows.

## Backing out bad revisions

### Use Case

When you routinely pull code from others, it can happen that you overlook some bad change. As soon as others pull that change from you, you have little chance to get completely rid of it.

To resolve that problem, Mercurial offers you the backout command. Backing out a change means that you tell Mercurial to create a commit which reverses the bad change. That way you don't get rid of the bad code in history, but you can remove it from new revisions.

Note:

The basic commands don't directly rewrite history. If you want to do that, you need to activate some of the extensions which are shipped with Mercurial. We'll come to that later on.

### Workflow

Let's assume the bad change was revision 3, and you already have one more revision in your repository. To remove the bad code, you can just back it out. This creates a new change which reverses the bad change. After backing out, you can then merge that new change into the current code.

$ hg backout 3
$ hg merge
(potentially resolve conflicts)
$ hg commit
(enter commit message. For example: "merged backout")

That's it. You reversed the bad change. It's still recorded that it was once there (following the principle "don't rewrite history, if it's not really necessary"), but it doesn't affect future code anymore.

## Collaborative feature development

Now that you can share changes and reverse them if necessary, you can go one step further: using Mercurial to help in coordinating the coding.

The first part is an easy way to develop features together, without requiring every developer to keep track of several feature clones.

### Use Case

When you want to split your development into several features, you need to keep track of who works on which feature and where to get which changes. Mercurial makes this easy for you by providing named branches. They are a part of the main repository, so they are available to everyone involved. At the same time, changes committed on a certain branch don't get mixed with the changes in the default branch, so features are kept separate until they get merged into the default branch.

Note:

Cloning a repository always puts you onto the default branch at first.

### Workflow

When someone in your group wants to start coding on a feature without disturbing the others, he can create a named branch and commit there. When someone else wants to join in, he just updates to the branch and commits away. As soon as the feature is finished, someone merges the named branch into the default branch.

#### Working in a named branch

Create the branch

$ hg branch feature1
(do some changes)
$ hg commit
(write commit message)

Update into the branch and work in it

$ hg update feature1
(do some changes)
$ hg commit
(write commit message)

Now you can commit, pull, push and merge (and anything else) as if you were working in a separate repository. If the history of the named branch is linear and you call "hg merge", Mercurial asks you to specify an explicit revision, since the branch in which you work doesn't have anything to merge.

#### Merge the named branch

When you have finished the feature, you merge the branch back into the default branch.

$ hg update default
$ hg merge feature1
$ hg commit
(write commit message)

And that's it. Now you can easily keep features separate without unnecessary bookkeeping.

Note:

Named branches stay in history as permanent record after you finished your work. If you don't like having that record in your history, please have a look at some of the advanced workflows.

## Tagging revisions

### Use Case

Since you can now code separate features more easily, you might want to mark certain revisions as fit for consumption (or similar). For example you might want to mark releases, or just mark off revisions as reviewed.

For this Mercurial offers tags. Tags add a name to a revision and are part of the history. You can tag a change years after it was committed. The tag includes the information when it was added, and tags can be pulled, pushed and merged just like any other committed change.

Note:

A tag must not contain the char ":", since that char is used for specifying multiple revisions - see "hg help revisions".

Note:

To securely mark a revision, you can use the gpg extension to sign the tag.

### Workflow

Let's assume you want to give revision 3 the name "v0.1".

$ hg tag -r 3 v0.1

See all tags

$ hg tags

When you look at the log, you'll now see a line in changeset 3 which marks the tag. If someone wants to update to the tagged revision, he can just use the name of your tag

$ hg update v0.1

Now he'll be at the tagged revision and can work from there.

## Removing history

### Use Case

At times you will have changes in your repository which you really don't want in it. There are many advanced options for removing these, and most of them use great extensions (Mercurial Queues is the most often used one), but in this basic guide, we'll solve the problem with just the commands we already learned. But we'll use an option to clone which we didn't yet use.

This workflow becomes inconvenient when you need to remove changes which are buried below many new changes. If you spot the bad changes early enough, you can get rid of them without too much effort, though.

### Workflow

Let's assume you want to get rid of revision 2 and the highest revision is 3. The first step is to use the "--rev" option to clone: Create a clone which only contains the changes up to the specified revision. Since you want to keep revision 1, you only clone up to that.

$ hg clone -r 1 project stripped

Now you can export the change 3 from the original repository (project) and import it into the stripped one

$ cd project
$ hg export 3 > ../changes.diff
$ cd ../stripped
$ hg import ../changes.diff

If a part of the changes couldn't be applied, you'll see that part in *.rej files. If you have *.rej files, you'll have to include or discard those changes by hand.

$ cat *.rej
(apply changes by hand)
$ hg commit
(write commit message)

That's it. hg export also includes the commit message, date, committer and similar metadata, so you are already done.

Note:

Removing history will change the revision IDs of revisions after the removed one, and if you pull from someone else who still has the revision you removed, you will pull the removed parts again. That's why rewriting history should usually only be done for changes which you didn't yet publish.

## Summary

So now you can work with Mercurial in private, and also share your changes in a multitude of ways.

Additionally you can remove bad changes, either by creating a change in the repository which reverses the original change, or by really rewriting history, so it looks like the change never occurred.

And you can separate the work on features in a single repository by using named branches and add tags to revisions which are visible markers for others and can be used to update to the tagged revisions.

With this we can conclude our practical guide.

# More Complex Workflows

If you now want to check some more complex workflows, please have a look at the general workflows wikipage.

To deepen your understanding, you should also check the basic concept overview.

Have fun with Mercurial!

Learning Mercurial in Workflows - A practical guide to version tracking / source code management with Mercurial
Copyright © 2011 Arne Babenhauserheide (main author), David Soria Parra, Augie Fackler, Benoit Boissinot, Adrian Buehlmann, Nicolas Dumazet and Steve Losh.

This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.

# Mercurial for two Programmers who are (mostly) new to SCM

Written in the Mercurial mailing list

Hi Bernard,

On Tuesday, 03 February 2009 at 20:19:14, ... ... wrote:
> Most of the docs I can find seem to assume the reader is familiar with
> existing software development tools and methodologies.
>
> This is not the case for me.

It wasn't for me either, and I can assure you that using Mercurial becomes
natural quite quickly.

> Now, I need to coordinate with a second (also SCM clueless) programmer.
...
> I envision us both working the main trunk for many small day-to-day
> changes, and our own isolated repo for larger additions that we will each
> be working on.

I don't know about a HOWTO, but I can give you a short description about basic
usage and the workflow I'd use:

Basic usage

• Just commit as you'd have done in SVN via "hg commit".
• To get changes from others, do "hg pull -u".
The "-u" says 'update my files'.
• If you already committed and then pull changes from someone else, you merge
the changes with yours via "hg merge". Merging is quite painless in Mercurial, so you can easily do it often.
• Once you want to share your changes, do "hg push".
Should that complain about "adding heads", pull and merge, then do the push again. If you really want to create new remote heads, you can use "hg push -f".

Workflow

• First off: Create a main repository you both can push changes to. If you have ssh access to a shared machine, that's as simple as creating a repository on that machine via "hg init project".
• Now both of you clone from that repository via

hg clone ssh://ADDRESS/project

(ADDRESS can be either a host or an IP).

That's your repository for the small day-to-day changes.

• If you want to do bigger changes, you create a feature clone via
hg clone project feature1

In that clone you simply work, pull and commit as usual, but you only push after you finished the feature.

Once you have finished the feature, you push the changes from the feature clone via "hg push" in feature1 (which gets them into your main working clone) and then push them onward into the shared repository.

That's it - or rather that's what I'd do. It might be right for you, too, and
if it isn't, don't be shy of experimenting. As long as you have a backup clone
lying around (for example cloned to a USB stick via "hg clone project
path/to/stick/project"), you can't do too much damage :)

I hope I could provide a bit of help :)

# Mercurial Workflow: Feature separation via named branches

Also published on Mercurial's Workflows wikipage. Originally written for PyHurd: Python bindings for the GNU Hurd.

## For Whom?

If you

1. want to develop features collaboratively and you want to be able to see later for which feature a given change was added or
2. want to do changes concurrently which would likely affect each other negatively while they are not finished, but which need to be developed in a group with minimal overhead,

then this workflow might be right for you.

Note: If you have a huge number of small features (2000 and upwards), the number of persistent named branches can create performance problems. For features which need no collaboration or only a few commits, this workflow also carries a lot of unnecessary overhead. It is best used for features which will be developed side by side with default for some time (and many commits), so that tracking the default branch against the feature is relevant. For a single-commit feature, just name the feature in the commit message.

Note: The difference between Mercurial named branches and git branches is that git branches don’t stay in history. They don’t allow you to find out later in which branch a certain commit was added. If you want git-style branching, just use bookmarks.
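
Because named branches stay in history, you can later ask which commits belong to a feature with a revset query (branch() and ancestors() are standard Mercurial revset predicates; "feature-x" is the example branch name used below, and the commands need an existing repository to run against):

```shell
# all changesets that were committed on the feature branch
hg log -r "branch(feature-x)"

# only those not yet merged back into default
hg log -r "branch(feature-x) and not ancestors(default)"
```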

## What you need

Just vanilla Mercurial.

## Workflow

The workflow is 6-stepped:

1. create the new feature,
2. implement and share,
3. merge other changes into it,
4. merge stable features,
5. close finished features and
6. reopen features.

Let’s see the steps in detail.

#### 1. New feature

First start a new branch with the name of the feature (starting from default).

hg branch feature-x
# do some changes
hg commit -m "Started implementing feature-x"


#### 2. Implement and share

Then commit away and push whenever you finish something which might be of interest to others, regardless of how marginal.

You can push to a shared repository, or to your own clone, or even send the changes via email to other contributors (for example via the patchbomb extension).

#### 3. Merge in default

Merge changes in the default branch into your feature as often as possible to reduce the work necessary when you want to merge the feature later on.

hg update feature-x
hg merge default
hg commit -m "merged default into feature-x"


#### 4. Merge stable features

When your feature is stable, merge it into default.

hg update default
hg merge feature-x
hg commit -m "merged feature-x"


#### 5. Close the branch when it’s done

And when the feature needs no more work, close the branch.

# start from default, automatic when using a fresh clone
hg update default
hg branch feature-x
# do some changes
hg commit -m "started feature X"
hg push

# commit and push as you like
hg update default
hg merge feature-x
hg ci -m "merged feature X into default"
hg commit --close-branch -m "finished feature X"


This hides the branch from the output of hg branches, so the branch list stays uncluttered.
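
The closed branch is not gone, though; Mercurial can still show it on request ("feature-x" is the example branch name from above, so this assumes a repository in which that branch exists):

```shell
hg branches                      # lists open branches only
hg branches --closed             # also lists closed branches, marked as closed
hg log -r "branch(feature-x)"    # the history of the closed feature stays reachable
```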

#### 6. Reopen the feature

To improve a feature after it was officially closed, first merge default into the feature branch (to get it up to date), then work just as if you had started it.

hg up feature-x
hg merge default
hg ci -m "merged default into feature X"
# commit, push, repeat, finish


Generally merge default into your feature as often as possible.

## Epilog

If this workflow helps you, I’d be glad to hear from you!

# Test of the hg evolve extension for easier upstreaming

## 1 Rationale

PDF-version (for printing)

orgmode-version (for editing)

repository (for forking)

Currently I rework my code extensively before I push it into upstream SVN. Some of that is inconvenient and it would be nicer to have easy to use refactoring tools.

hg evolve might offer that.

This test uses the mutable-hg extension in revision c70a1091e0d8 (24 changesets after 2.1.0). It will likely be obsolete soon, since mutable-hg is currently being moved into Mercurial core by Pierre-Yves David, its main developer. I hope it will be useful for you to assess the future possibilities of Mercurial today. “Obsolete” is not (only) a pun here: obsolete markers are the functionality at the core of evolve which allows safe, collaborative history rewriting ☺

## 2 Tests

# Tests for refactoring history with the evolve extension
export LANG=C # to get rid of localized strings
export PS1="$" rm -r testmy testother testpublic  ### 2.1 Init Initialize the repos I need for the test. We have one public repo and 2 nonpublishing repos. # Initialize the test repo hg init testpublic # a public repo hg init testmy # my repo hg init testother # other repo # make the two private repos nonpublishing for i in my other do echo "[ui] username =$i
[phases]
publish = False" > test${i}/.hg/hgrc done  note: it would be nice if we could just specify nonpublishing with the init command. ### 2.2 Prepare Prepare the content of the repos. cd testmy echo "Hello World" > hello.txt hg ci -Am "Hello World" hg log -G cd ..   adding hello.txt @ changeset: 0:c19ed5b17f4f tag: tip user: my date: Sat Jan 12 00:17:40 2013 +0100 summary: Hello World  ### 2.3 Amend Add a bad change and amend it. cd testmy sed -i s/World/Evoluton/ hello.txt hg ci -m "Hello Evolution" echo hg log -G cat hello.txt # FIX this up sed -i s/Evoluton/Evolution/ hello.txt hg amend -m "Hello Evolution" # pass the message explicitely again to avoid having the editor pop up echo hg log -G cd ..   @ changeset: 1:83a5e89adea6 | tag: tip | user: my | date: Sat Jan 12 00:17:41 2013 +0100 | summary: Hello Evolution | o changeset: 0:c19ed5b17f4f user: my date: Sat Jan 12 00:17:40 2013 +0100 summary: Hello World Hello Evoluton @ changeset: 3:129d59901401 | tag: tip | parent: 0:c19ed5b17f4f | user: my | date: Sat Jan 12 00:17:42 2013 +0100 | summary: Hello Evolution | o changeset: 0:c19ed5b17f4f user: my date: Sat Jan 12 00:17:40 2013 +0100 summary: Hello World  ### 2.4 …together Add a bad change. Followed by a good change. Pull both into another repo and amend it. Do a good change in the other repo. Then amend the bad change in the original repo, pull it into the other and evolve. #### 2.4.1 Setup Now we change the format to planning a roleplaying session to have a more complex task. We want to present this as coherent story on how to plan a story, so we want clean history. First I do my own change. cd testmy # Now we add the bad change echo "Wishes: - The Solek wants Action - The Judicator wants Action " >> plan.txt hg ci -Am "What the players want" # show what we did echo hg log -G -r tip # and the good change echo "Places: - The village - The researchers cave " >> plan.txt hg ci -m "The places" echo hg log -G -r 1: cd ..   
adding plan.txt
@  changeset:   4:b170dda0a4a7
|  tag:         tip
|  user:        my
|  date:        Sat Jan 12 00:17:44 2013 +0100
|  summary:     What the players want
|

@  changeset:   5:2a37053027cc
|  tag:         tip
|  user:        my
|  date:        Sat Jan 12 00:17:45 2013 +0100
|  summary:     The places
|
o  changeset:   4:b170dda0a4a7
|  user:        my
|  date:        Sat Jan 12 00:17:44 2013 +0100
|  summary:     What the players want
|
o  changeset:   3:129d59901401
|  parent:      0:c19ed5b17f4f
|  user:        my
|  date:        Sat Jan 12 00:17:42 2013 +0100
|  summary:     Hello Evolution
|

Now my file contains the wishes of the players as well as the places. We pull the changes into the repo of another gamemaster with whom we plan this game.

hg -R testother pull -u testmy
hg -R testother log -G -r 1:

pulling from testmy
requesting all changes
adding changesets
adding manifests
adding file changes
added 4 changesets with 4 changes to 2 files
2 files updated, 0 files merged, 0 files removed, 0 files unresolved
@  changeset:   3:2a37053027cc
|  tag:         tip
|  user:        my
|  date:        Sat Jan 12 00:17:45 2013 +0100
|  summary:     The places
|
o  changeset:   2:b170dda0a4a7
|  user:        my
|  date:        Sat Jan 12 00:17:44 2013 +0100
|  summary:     What the players want
|
o  changeset:   1:129d59901401
|  user:        my
|  date:        Sat Jan 12 00:17:42 2013 +0100
|  summary:     Hello Evolution
|

note: the revision numbers are different because the other repo only gets those obsolete revisions which are ancestors of non-obsolete revisions. That way evolve slowly cleans obsolete revisions out of the history without breaking repositories which already have them (but gives them a clear and easy path for evolution).

He then adds the important people:

cd testother
echo "People:
- The Lost
- The Specter
" >> plan.txt
hg ci -m "The people"
echo
hg log -G -r 1:
cd ..
@  changeset:   4:65cc97fc774a
|  tag:         tip
|  user:        other
|  date:        Sat Jan 12 00:17:48 2013 +0100
|  summary:     The people
|
o  changeset:   3:2a37053027cc
|  user:        my
|  date:        Sat Jan 12 00:17:45 2013 +0100
|  summary:     The places
|
o  changeset:   2:b170dda0a4a7
|  user:        my
|  date:        Sat Jan 12 00:17:44 2013 +0100
|  summary:     What the players want
|
o  changeset:   1:129d59901401
|  user:        my
|  date:        Sat Jan 12 00:17:42 2013 +0100
|  summary:     Hello Evolution
|

#### 2.4.2 Fix my side

And I realize too late that my estimate of the wishes of the players was wrong. So I simply amend the change.

cd testmy
hg up -r -2
sed -i "s/The Solek wants Action/The Solek wants emotionally intense situations/" plan.txt
hg amend -m "The wishes of the players"
hg log -G -r 1:
cd ..

1 files updated, 0 files merged, 0 files removed, 0 files unresolved
1 new unstable changesets
@  changeset:   7:86e7a5305c9e
|  tag:         tip
|  parent:      3:129d59901401
|  user:        my
|  date:        Sat Jan 12 00:17:50 2013 +0100
|  summary:     The wishes of the players
|
| o  changeset:   5:2a37053027cc
| |  user:        my
| |  date:        Sat Jan 12 00:17:45 2013 +0100
| |  summary:     The places
| |
| x  changeset:   4:b170dda0a4a7
|/   user:        my
|    date:        Sat Jan 12 00:17:44 2013 +0100
|    summary:     What the players want
|
o  changeset:   3:129d59901401
|  parent:      0:c19ed5b17f4f
|  user:        my
|  date:        Sat Jan 12 00:17:42 2013 +0100
|  summary:     Hello Evolution
|

Now I amended my commit, but my history does not look good yet. Actually it looks evil, since I have 2 heads, which is not so nice. The changeset which depended on the bad change has become unstable, because its ancestor has been obsoleted, so it has no stable foothold anymore. In other DVCSs this means that we as users have to find out what was changed and fix it ourselves. Changeset evolution allows us to evolve our repository to get rid of dependencies on obsolete changes.

cd testmy
hg evolve
hg log -G -r 1:
cd ..
move:[5] The places
atop:[7] The wishes of the players
merging plan.txt
@  changeset:   8:0980732d20e0
|  tag:         tip
|  user:        my
|  date:        Sat Jan 12 00:17:45 2013 +0100
|  summary:     The places
|
o  changeset:   7:86e7a5305c9e
|  parent:      3:129d59901401
|  user:        my
|  date:        Sat Jan 12 00:17:50 2013 +0100
|  summary:     The wishes of the players
|
o  changeset:   3:129d59901401
|  parent:      0:c19ed5b17f4f
|  user:        my
|  date:        Sat Jan 12 00:17:42 2013 +0100
|  summary:     Hello Evolution
|

Now I have nice looking history without any hassle - and without having to resort to low-level commands.

#### 2.4.3 Be a nice neighbor

But I rewrote history. What happens when my colleague pulls this?

hg -R testother pull testmy
hg -R testother log -G

pulling from testmy
searching for changes
adding changesets
adding manifests
adding file changes
added 2 changesets with 2 changes to 1 files (+1 heads)
(run 'hg heads' to see heads, 'hg merge' to merge)
1 new unstable changesets
o  changeset:   6:0980732d20e0
|  tag:         tip
|  user:        my
|  date:        Sat Jan 12 00:17:45 2013 +0100
|  summary:     The places
|
o  changeset:   5:86e7a5305c9e
|  parent:      1:129d59901401
|  user:        my
|  date:        Sat Jan 12 00:17:50 2013 +0100
|  summary:     The wishes of the players
|
| @  changeset:   4:65cc97fc774a
| |  user:        other
| |  date:        Sat Jan 12 00:17:48 2013 +0100
| |  summary:     The people
| |
| x  changeset:   3:2a37053027cc
| |  user:        my
| |  date:        Sat Jan 12 00:17:45 2013 +0100
| |  summary:     The places
| |
| x  changeset:   2:b170dda0a4a7
|/   user:        my
|    date:        Sat Jan 12 00:17:44 2013 +0100
|    summary:     What the players want
|
o  changeset:   1:129d59901401
|  user:        my
|  date:        Sat Jan 12 00:17:42 2013 +0100
|  summary:     Hello Evolution
|
o  changeset:   0:c19ed5b17f4f
   user:        my
   date:        Sat Jan 12 00:17:40 2013 +0100
   summary:     Hello World

As you can see, he is told that his changes became unstable, since they depend on obsolete history. No need to panic: He can just evolve his repo to be state of the art again.
But the unstable change is the current working directory, so evolve does not change it. Instead it tells us that we might want to call it with --any. As with most hints in hg, that is actually what we want.

hg -R testother evolve

nothing to evolve here
(1 troubled changesets, do you want --any ?)

note: that message might be a candidate for cleanup.

hg -R testother evolve --any
hg -R testother log -G -r 1:

move:[4] The people
atop:[6] The places
merging plan.txt
@  changeset:   7:058175606243
|  tag:         tip
|  user:        other
|  date:        Sat Jan 12 00:17:48 2013 +0100
|  summary:     The people
|
o  changeset:   6:0980732d20e0
|  user:        my
|  date:        Sat Jan 12 00:17:45 2013 +0100
|  summary:     The places
|
o  changeset:   5:86e7a5305c9e
|  parent:      1:129d59901401
|  user:        my
|  date:        Sat Jan 12 00:17:50 2013 +0100
|  summary:     The wishes of the players
|
o  changeset:   1:129d59901401
|  user:        my
|  date:        Sat Jan 12 00:17:42 2013 +0100
|  summary:     Hello Evolution
|

And as you can see, everything looks nice again.

### 2.5 …safely

Publishing the changes into a public repo makes them immutable. Now imagine that my co-gamemaster publishes his work. Mercurial will then record that his changes were published and warn us if we try to change them.

cd testother
hg up > /dev/null
echo "current phase"
hg phase .
hg push ../testpublic
echo "phase after publishing"
hg phase .
cd ..

current phase
7: draft
pushing to ../testpublic
searching for changes
adding changesets
adding manifests
adding file changes
added 5 changesets with 5 changes to 2 files
phase after publishing
7: public

Now trying to amend history will fail (except if we first change the phase to draft with hg phase --force --draft .).

cd testother
hg amend -m "change published history"
# change to draft
hg phase -fd .
hg phase .
# now we could amend, but that would defeat the point of this section, so we go to public again.
hg phase -p .
cd ..
abort: can not rewrite immutable changeset 058175606243
7: draft

Once I pull from that repo, the revisions which are in there will also switch their phase to public in my repo. So by pushing the changes into a publishing repo, you get the Mercurial of all contributors to track which revisions are safe to change - and which are not. An alternative is using hg phase -p REV.

### 2.6 Fold

Do multiple commits to create a patch, then fold them into one commit. Now I go into a bit of a planning spree.

cd testmy
echo "Scenes:" >> plan.txt
hg ci -m "we need scenes"
echo "- Lost appears" >> plan.txt
hg ci -m "scene"
echo "- People vanish" >> plan.txt
hg ci -m "scene"
echo "- Portals during dreamtime" >> plan.txt
hg ci -m "scene"
echo
hg log -G -r 9:
cd ..

@  changeset:   12:fbcce7ad7369
|  tag:         tip
|  user:        my
|  date:        Sat Jan 12 00:18:06 2013 +0100
|  summary:     scene
|
o  changeset:   11:189c0362a80f
|  user:        my
|  date:        Sat Jan 12 00:18:05 2013 +0100
|  summary:     scene
|
o  changeset:   10:715a31ac9dee
|  user:        my
|  date:        Sat Jan 12 00:18:05 2013 +0100
|  summary:     scene
|
o  changeset:   9:dfa4c226150b
|  user:        my
|  date:        Sat Jan 12 00:18:05 2013 +0100
|  summary:     we need scenes
|

Yes, I tend to do that… But we actually only need one change, so let's make it one by folding the last 4 changes into a single commit. Since fold needs an interactive editor (it does not take -m, yet), we will leave that out here. The commented commands would fold the changesets.

cd testmy
# hg fold -r "-1:-4"
# hg log -G -r 9:
cd ..

### 2.7 Split

Do one big commit, then split it into two atomic commits. Now I apply the scenes to wishes, places and people. Which is not useful: First I should apply them to the wishes and check whether all wishes are fulfilled. But while writing I forgot that, and anxious to show my co-gamemaster, I just did one big commit.
cd testmy
sed -i "s/The Judicator wants Action/The Judicator wants Action - portals/" plan.txt
sed -i "s/The village/The village - lost, vanish, portals/" plan.txt
hg ci -m "Apply Scenes to people and places."
echo
hg log -G -r 12:
cd ..

@  changeset:   13:5c83a3540c37
|  tag:         tip
|  user:        my
|  date:        Sat Jan 12 00:18:10 2013 +0100
|  summary:     Apply Scenes to people and places.
|
o  changeset:   12:fbcce7ad7369
|  user:        my
|  date:        Sat Jan 12 00:18:06 2013 +0100
|  summary:     scene
|

Let’s fix that: uncommit it and commit it as separate changes. Normally I would just use hg record to interactively select changes to record. Since this is a non-interactive test, I manually undo and redo changes instead.

cd testmy
hg uncommit --all # to undo all changes, not just those for specified files
hg diff
sed -i "s/The village - lost, vanish, portals/The village/" plan.txt
hg amend -m "Apply scenes to wishes"
sed -i "s/The village/The village - lost, vanish, portals/" plan.txt
hg commit -m "Apply scenes to places"
echo
hg log -G -r 12:
cd ..

new changeset is empty
(use "hg kill ." to remove it)
diff --git a/plan.txt b/plan.txt
--- a/plan.txt
+++ b/plan.txt
@@ -1,10 +1,10 @@
 Wishes:
 - The Solek wants emotionally intense situations
-- The Judicator wants Action
+- The Judicator wants Action - portals

 Places:
-- The village
+- The village - lost, vanish, portals
 - The researchers cave

 Scenes:
@  changeset:   17:f8cc86f681ac
|  tag:         tip
|  user:        my
|  date:        Sat Jan 12 00:18:13 2013 +0100
|  summary:     Apply scenes to places
|
o  changeset:   16:6c8918a352e2
|  parent:      12:fbcce7ad7369
|  user:        my
|  date:        Sat Jan 12 00:18:12 2013 +0100
|  summary:     Apply scenes to wishes
|
o  changeset:   12:fbcce7ad7369
|  user:        my
|  date:        Sat Jan 12 00:18:06 2013 +0100
|  summary:     scene
|

### 2.8 …as afterthought

Do one big commit, add an atomic commit. Then split the big commit. Let’s get the changes from our co-gamemaster and apply people to wishes, places and scenes.
Then add a scene we need to fulfill the wishes, and clean up the commits afterwards. First get the changes:

cd testmy
hg pull ../testother
hg merge --tool internal:merge tip # the new head from our co-gamemaster
# fix the conflicts
sed -i "s/<<<.*local//" plan.txt
sed -i "s/====.*/\n/" plan.txt
sed -i "s/>>>.*other//" plan.txt
# mark them as solved.
hg resolve -m
hg commit -m "merge people"
echo
hg log -G -r 12:
cd ..

pulling from ../testother
searching for changes
adding changesets
adding manifests
adding file changes
added 1 changesets with 1 changes to 1 files (+1 heads)
(run 'hg heads .' to see heads, 'hg merge' to merge)
merging plan.txt
warning: conflicts during merge.
merging plan.txt incomplete! (edit conflicts, then use 'hg resolve --mark')
0 files updated, 0 files merged, 0 files removed, 1 files unresolved
use 'hg resolve' to retry unresolved file merges or 'hg update -C .' to abandon
@    changeset:   19:8bf8d55739fa
|\   tag:         tip
| |  parent:      17:f8cc86f681ac
| |  parent:      18:058175606243
| |  user:        my
| |  date:        Sat Jan 12 00:18:16 2013 +0100
| |  summary:     merge people
| |
| o  changeset:   18:058175606243
| |  parent:      8:0980732d20e0
| |  user:        other
| |  date:        Sat Jan 12 00:17:48 2013 +0100
| |  summary:     The people
| |
o |  changeset:   17:f8cc86f681ac
| |  user:        my
| |  date:        Sat Jan 12 00:18:13 2013 +0100
| |  summary:     Apply scenes to places
| |
o |  changeset:   16:6c8918a352e2
| |  parent:      12:fbcce7ad7369
| |  user:        my
| |  date:        Sat Jan 12 00:18:12 2013 +0100
| |  summary:     Apply scenes to wishes
| |
o |  changeset:   12:fbcce7ad7369
| |  user:        my
| |  date:        Sat Jan 12 00:18:06 2013 +0100
| |  summary:     scene
| |

Now we have all changes in our repo. We begin to apply people to wishes, places and scenes.
cd testmy
sed -i "s/The Solek wants emotionally intense situations/The Solek wants emotionally intense situations | specter, Lost/" plan.txt
sed -i "s/Lost appears/Lost appears | Lost/" plan.txt
sed -i "s/People vanish/People vanish | Specter/" plan.txt
hg commit -m "apply people to wishes, places and scenes"
echo
hg log -G -r 19:
cat plan.txt
cd ..

@  changeset:   20:c00aa6f24c3f
|  tag:         tip
|  user:        my
|  date:        Sat Jan 12 00:18:18 2013 +0100
|  summary:     apply people to wishes, places and scenes
|
o    changeset:   19:8bf8d55739fa
|\   parent:      17:f8cc86f681ac
| |  parent:      18:058175606243
| |  user:        my
| |  date:        Sat Jan 12 00:18:16 2013 +0100
| |  summary:     merge people
| |
Wishes:
- The Solek wants emotionally intense situations | specter, Lost
- The Judicator wants Action - portals

Places:
- The village - lost, vanish, portals
- The researchers cave

Scenes:
- Lost appears | Lost
- People vanish | Specter
- Portals during dreamtime

People:
- The Lost
- The Specter

As you can see, the specter only applies to the wishes, and we miss a person for the action. Let’s fix that.

cd testmy
sed -i "s/- The Specter/- The Specter\n- Wild Memories/" plan.txt
sed -i "s/- Portals during dreamtime/- Portals during dreamtime\n- Unconnected Memories/" plan.txt
hg ci -m "Added wild memories to fullfill the wish for action"
echo
hg log -G -r 19:
cd ..

@  changeset:   21:5393327d2d3f
|  tag:         tip
|  user:        my
|  date:        Sat Jan 12 00:18:20 2013 +0100
|  summary:     Added wild memories to fullfill the wish for action
|
o  changeset:   20:c00aa6f24c3f
|  user:        my
|  date:        Sat Jan 12 00:18:18 2013 +0100
|  summary:     apply people to wishes, places and scenes
|
o    changeset:   19:8bf8d55739fa
|\   parent:      17:f8cc86f681ac
| |  parent:      18:058175606243
| |  user:        my
| |  date:        Sat Jan 12 00:18:16 2013 +0100
| |  summary:     merge people
| |

Now split the big change into applying people first to wishes, then to places and scenes.
cd testmy
# go back to the big change
hg up -r -2
# uncommit it
hg uncommit --all
# Now rework it into two commits
sed -i "s/- Lost appears | Lost/- Lost appears/" plan.txt
sed -i "s/- People vanish | Specter/- People vanish/" plan.txt
hg amend -m "Apply people to wishes"
sed -i "s/- Lost appears/- Lost appears | Lost/" plan.txt
sed -i "s/- People vanish/- People vanish | Specter/" plan.txt
hg commit -m "Apply people to scenes"
# let’s mark this for later use
hg book splitchanges
# and evolve to get rid of the obsoletes
echo
hg evolve --any
hg log -G -r 19:
cd ..

1 files updated, 0 files merged, 0 files removed, 0 files unresolved
new changeset is empty
(use "hg kill ." to remove it)
1 new unstable changesets
move:[21] Added wild memories to fullfill the wish for action
atop:[24] Apply people to wishes
merging plan.txt
@  changeset:   26:ab48ecaceb01
|  tag:         tip
|  parent:      24:909bb640d4fc
|  user:        my
|  date:        Sat Jan 12 00:18:20 2013 +0100
|  summary:     Added wild memories to fullfill the wish for action
|
| o  changeset:   25:76083662b263
|/   bookmark:    splitchanges
|    user:        my
|    date:        Sat Jan 12 00:18:23 2013 +0100
|    summary:     Apply people to scenes
|
o  changeset:   24:909bb640d4fc
|  parent:      19:8bf8d55739fa
|  user:        my
|  date:        Sat Jan 12 00:18:23 2013 +0100
|  summary:     Apply people to wishes
|
o    changeset:   19:8bf8d55739fa
|\   parent:      17:f8cc86f681ac
| |  parent:      18:058175606243
| |  user:        my
| |  date:        Sat Jan 12 00:18:16 2013 +0100
| |  summary:     merge people
| |

You can see the additional commit sticking out. We want the history to be easy to follow, so we just graft the last change atop the split changes.

note: We seem to have the workdir on the new changeset instead of on the one we were on before the evolve. I assume that’s a bug to fix.

cd testmy
hg up splitchanges
hg graft -O tip
hg log -G -r 19:
cd ..
1 files updated, 0 files merged, 0 files removed, 0 files unresolved
grafting revision 26
merging plan.txt
@  changeset:   27:4d3a40c254b4
|  bookmark:    splitchanges
|  tag:         tip
|  parent:      25:76083662b263
|  user:        my
|  date:        Sat Jan 12 00:18:20 2013 +0100
|  summary:     Added wild memories to fullfill the wish for action
|
o  changeset:   25:76083662b263
|  user:        my
|  date:        Sat Jan 12 00:18:23 2013 +0100
|  summary:     Apply people to scenes
|
o  changeset:   24:909bb640d4fc
|  parent:      19:8bf8d55739fa
|  user:        my
|  date:        Sat Jan 12 00:18:23 2013 +0100
|  summary:     Apply people to wishes
|
o    changeset:   19:8bf8d55739fa
|\   parent:      17:f8cc86f681ac
| |  parent:      18:058175606243
| |  user:        my
| |  date:        Sat Jan 12 00:18:16 2013 +0100
| |  summary:     merge people
| |

note: We use graft here, because using a second amend would just change the changeset in between but not add another change. If there had been more changes after the single followup commit, we would simply have called evolve to fix them, because graft -O leaves an obsolete marker on the grafted changeset, so evolve would have seen how to change all its children.

That’s it. All that’s left is finishing plan.txt, but I’d rather do that outside this guide :)

## 3 Conclusion

Evolve does a pretty good job at making it convenient and safe to rework history. If you’re an early adopter, I can advise testing it yourself. Otherwise it might be better to wait until more early adopters have tested it and polished its rough edges.

note: hg amend was subsumed into hg commit --amend, so the dedicated command will likely disappear.

note: This guide was created by Arne Babenhauserheide with emacs org-mode and turned into html via M-x org-export-as-html - including the results of the evaluation of the code snippets. Date: 2013-01-12T00:18+0100, Org version 7.9.2 with Emacs version 24.

# Track your scientific scripts with Mercurial

If you want to publish your scientific scripts, as Nick Barnes advises in Nature, you can very easily do so with Mercurial.
All my stuff (not just code), excepting only huge datasets, is in a Mercurial source repository1. Whenever I change something and it does anything new, I commit the files with a simple commit (even if it’s only “it compiles!”). With that I can always check “which were the last things I did” (look into the log) or “when did I change this line, and why?” (annotate the file).

Also I can easily share my scripts folder with others, and Mercurial can merge my work and theirs, so if they fix one line and I fix another, both fixes get integrated without having to manually copy-paste them around.

For all that it doesn’t need much additional expertise: The basics can be learned in just 15 minutes — and you’ll likely never need more than these for your work2.

1. Mercurial is free software for version tracking: http://mercurial.selenic.com
2. You can use Mercurial in three main ways:

# workflow concept: automatic trusted group of committers

## Goal

A workflow where the repository gets updated only from repositories whose heads got signed by at least a certain percentage or a certain number of trusted committers.

## Requirements

Mercurial, two hooks for checking and three special files in the repo. The hooks do all the work - apart from them, the repo is just a normal Mercurial repository. After cloning it, you only need to set up the hooks to activate the workflow.

Extensions: gpg
Hooks: prechangegroup and pretxnchangegroup
Files: .hgtrustedkeys, .hgbackuprepos, .hgtrustminimum

## concept

### Hooks

• prechangegroup: Copy the local versions of the files for access in the pretxnchangegroup hook (might be unnecessary by letting the pretxnchangegroup hook use the rollback-info).
• pretxnchangegroup:
  • per head: check if the tipmost non-signature changeset has been GnuPG signed by enough trusted keys.
  • If not all heads have enough signatures, rollback, discard the current default repo and replace it with the backup repo which has the most changesets we lack.
Continue discarding bad repos until you find one with enough signatures.

### Special Files

.hgtrustedkeys contains a list of public GnuPG keys.
.hgbackuprepos contains a list of (pull) links to backup repositories.
.hgtrustminimum contains the percentage or number of keys from which a signature is needed for a head to be accepted.

## Notes

With this workflow you can even do automatic updates from the repository. It should be ideal for release repositories of distributed projects.

If you want to work on the project, a very worthwhile goal might be implementing it in infocalypse: anonymous code collaboration via Freenet and Mercurial, built to survive the informational apocalypse (and any kind of censorship).

# Politics and Free Software

Being unpolitical means being political without realizing it. — Arne Babenhauserheide

Here you’ll find texts about politics and free software. Some of my creative works on the topic can be found under Songs, though.

# For me, Gentoo is about *convenient* choice

It's often said that Gentoo is all about choice, but that doesn't quite fit what it is for me. After all, the highest ability to choose is Linux from scratch, and I can have any amount of choice in every distribution by just going deep enough (and investing enough time).

What really distinguishes Gentoo for me is that it makes it convenient to choose. Since we all have a limited time budget, many of us only have real freedom to choose because we use Gentoo, which makes it possible to choose with the distribution tools. Therefore only calling it “choice” doesn't ring true in general - it misses the reason why we can choose. So what Gentoo gives me is not just choice, but convenient choice.

Some examples to illustrate the point:

## KDE 4 without qt3

I recently rebuilt my system after deciding to switch my disk layout (away from reiserfs towards a simple ext3 with reiser4 for the portage tree).
When doing so I decided to try a "pure" KDE 4 - that means a KDE 4 without any remains from KDE 3 or qt3.

To use KDE without any qt3 applications, I just had to put "-qt3" and "-qt3support" into my useflags in /etc/make.conf and "emerge -uDN world" (and solve any arising conflicts). Imagine doing the same with a (K)Ubuntu...

## Emacs support

Similarly, to enable emacs support on my GentooXO (for all programs which can have emacs support), I just had to add the "emacs" useflag and "emerge -uDN world".

## Selecting which licenses to use

Just add

ACCEPT_LICENSE="-* @FSF-APPROVED @FSF-APPROVED-OTHER"

to your /etc/make.conf to make sure you only get software under licenses which are approved by the FSF. For only free licenses (regardless of the approved state) you can use:

ACCEPT_LICENSE="-* @FREE"

All others get marked as masked by license. The default (no ACCEPT_LICENSE in /etc/make.conf) is “* -@EULA”: No unfree software. You can check your setting via emerge --info | grep ACCEPT_LICENSE. More information…

## One program (suite) in testing, but the main system rock stable

Another part where choosing is made convenient in Gentoo is testing and unstable programs. I remember my pain with a Kubuntu where I wanted to use the most recent version of Amarok. I either had to add a dedicated Amarok-only testing repository (which I'd need for every single testing program), or I had to switch my whole system into testing. I did the latter and my graphical package manager ceased to work. Just imagine how quickly I ran back to Gentoo.

And then have a look at the ease of taking one package into testing in Gentoo:

• emerge --autounmask-write =category/package-version
• etc-update
• emerge =category/package-version

EDIT: Once I had a note here: “It would be nice to be able to just add the missing dependencies with one call”. This is now possible with --autounmask-write.
And for some special parts (like KDE 4) I can easily say something like

• ln -s /usr/portage/local/layman/kde-testing/Documentation/package.keywords/kde-4.3.keywords /etc/portage/package.keywords/kde-4.3.keywords

(I don't have the kde-testing overlay on my GentooXO, where I write this post, so the exact command might vary slightly)

## Closing remarks

So to finish this post: For me, Gentoo is not only about choice. It is about convenient choice. And that means: Gentoo gives everybody the power to choose. I hope you enjoy it as I do!

# Automatic updates in Gentoo GNU/Linux

To keep my Gentoo up to date, I use daily and weekly update scripts which also always run revdep-rebuild after the saturday night update :)

My daily update is via pkgcore to pull in all important security updates:

pmerge @glsa

That pulls in the Gentoo Linux Security Advisories - important updates with mostly short compile times. (You need pkgcore for that: "emerge pkgcore")

Also I use two cron scripts. Note: It might be useful to add the lafilefixer to these scripts (source).

The following is my daily update (in /etc/cron.daily/update_glsa_programs.cron ):

## Daily Cron

#! /bin/sh
### Update the portage tree and the glsa packages via pkgcore

# spew a status message
echo $(date) "start to update GLSA" >> /tmp/cron-update.log

# Sync only portage
pmaint sync /usr/portage

# security relevant programs
pmerge -uDN @glsa > /tmp/cron-update-pkgcore-last.log || cat \
/tmp/cron-update-pkgcore-last.log >> /tmp/cron-update.log

# And keep everything working
revdep-rebuild

# Finally update all configs which can be updated automatically
cfg-update -au

echo $(date) "finished updating GLSA" >> /tmp/cron-update.log

And here's my weekly cron - executed every saturday night (in /etc/cron.weekly/update_installed_programs.cron):

## Weekly Cron

#!/bin/sh
### Update my computer using pkgcore,
### since that also works if some dependencies couldn't be resolved.

# Sync all overlays
eix-sync

## First use pkgcore

# security relevant programs (with build-time dependencies (-B))
pmerge -BuD @glsa

# system, world and all the rest
pmerge -BuD @system
pmerge -BuD @world
pmerge -BuD @installed

# Then use portage for packages pkgcore misses (including overlays)
# and for EMERGE_DEFAULT_OPTS="--keep-going" in make.conf
emerge -uD @security
emerge -uD @system
emerge -uD @world
emerge -uD @installed

# And keep everything working
emerge @preserved-rebuild
revdep-rebuild

# Finally update all configs which can be updated automatically
cfg-update -au

# pkgcore vs. eix → pix (find packages in Gentoo)

For a long time it bugged me that eix uses a separate database which I need to keep up to date. But no longer: With pkgcore as fast as it is today, I set up pquery to replace eix. The result is pix:

alias pix='pquery --raw -nv --attr=keywords'

(put the above in your ~/.bashrc)

The output looks like this:

$ pix pkgcore
* sys-apps/pkgcore
versions: 0.5.11.6 0.5.11.7
installed: 0.5.11.7
repo: gentoo
description: pkgcore package manager
homepage: http://www.pkgcore.org
keywords: ~alpha ~amd64 ~arm ~hppa ~ia64 ~ppc ~ppc64 ~s390 ~sh ~sparc ~x86

It’s still a bit slower than eix, but it operates directly on the portage tree and my overlays — and I no longer have to use eix-sync for syncing my overlays, just to make sure eix is updated.

## Some other treats of pkgcore

Aside from pquery, pkgcore also offers pmerge to install packages (almost the same syntax as emerge) and pmaint for synchronizing and other maintenance stuff.

From my experience, pmerge is hellishly fast for simple installs like pmerge kde-misc/pyrad, but it sometimes breaks with world updates. In that case I just fall back on portage. Both are Python, so when you have one, adding the other is very cheap (spacewise).

Also pmerge has the nice pmerge @glsa feature: Get Gentoo Linux security updates. Due to its almost unreal speed (compared to portage), checking for security updates doesn’t hurt anymore.

$ time pmerge -p @glsa
 * Resolving...
Nothing to merge.

real    0m1.863s
user    0m1.463s
sys     0m0.100s

It differs from portage in that you call world as a set explicitly — either via a command like pmerge -aus world or via pmerge -au @world.

pmaint on the other hand is my new overlay and tree synchronizer. Just call pmaint sync to sync all, or pmaint sync /usr/portage to sync only the given overlay (in this case the portage tree).

## Caveats

Using pix as a replacement for eix isn’t yet perfect. You might hit some of the following:

• pix always shows all packages in the tree and the overlays. The keywords are only valid for the highest version, though. marienz from #pkgcore on irc.freenode.net is working on fixing that.
• If you only want to see the packages which you can install right away, just use pquery -nv. pix is intended to mimic eix as closely as possible, so I don’t have to change my habits ;) If it doesn’t fit your needs, just change the alias.
• To search only in your installed packages, you can use pquery --vdb -nv.
• Sometimes pquery might miss something in very broken overlay setups (like my very grown one). In that case, please report the error in the bugtracker or at #pkgcore on irc.freenode.net:

23:27 <marienz> if they're reported on irc they're probably either fixed pretty quickly or they're forgotten
23:27 <marienz> if they're reported in the tracker they're harder to forget but it may take longer before they're noticed

I hope my text helps you in changing your Gentoo system further towards the system which fits you best!

# How to make a million dollars in pay-what-you-want — thoughts on the Humble Indie Bundle

Some thoughts1 on how the Humble Indie Bundle managed to get more than 1.25 million dollars2 in one and a half weeks — more than one quarter of that from GNU/Linux users.

Let me repeat that: One quarter of the money came from GNU/Linux users. And the average GNU/Linux user paid almost twice as much for the games as the average Windows user.
How they did it? If I could give you a simple recipe which is certain to work for everyone, I might just hire up at Blizzard. But I think a big part is that (from my view — and obviously from the view of others, too) they did everything right. And I mean everything:

• The games are great.
• The message the name “humble indie bundle” conveys is great.
• You could pay whatever you want. From 1 cent to a million. The highest single contribution was 3,333.33$, with an average contribution of $9.17 over all platforms and $14.52 from the average GNU/Linux user3.

• You could directly see how much money they made on the front page, along with an info about the average contribution, split by platform.

• Normally each game would have cost 20$, so the average payment for all games also was a significant price drop.
• They donated about one third to charitable organizations. The buyers could decide how much should go to whom.
• Payment was easy via Paypal and others.
• All games work on GNU/Linux, MacOSX and Windows out of the box.
• Each game already had a community. The bundle bundled their impact so it went viral on Twitter, identi.ca, facebook, etc.
• They have clear and simple download links. Should I ever lose the games locally, I can just redownload them. If need be with wget.
• They use no DRM or similar, so I can show the games to friends and won’t be troubled by use restrictions.
• And on the last day they announced that for 4 of the 6 games the code would become free software if they would crack the 1 million dollar boundary. It took just over 16 more hours to raise an additional 200,000$. And they followed up on their pledge with 2 games already freed and 2 more to follow as soon as the code is cleaned up.

To wrap it up: They did everything right, so almost everybody who saw it was delighted and there was nothing to break the viral network effects.

And I think that getting any one of these points wrong would have killed a major part of the network effect, because the naysayers are far stronger in the networking game than the fans.

Any foul trick would have cost them many fans, because someone would have been bound to find out and go viral with it.

1. Originally written as comment to Why Games don't get ported to Linux...A game dev speaks

2. Stats directly from the Website of the Humble Indie Bundle

3. More exactly:

• Total revenue: $1,273,593
• Number of contributions: 138,812
• Average contribution: $9.17
  • Windows: $8.05
  • MacOSX: $10.18
  • GNU/Linux: $14.52

# Motivation and Reward

Debunking the myth of increasing the performance of creative workers with carrot and stick.

A few months ago, the GNU project had to withdraw its article on motivation and monetary reward, because its author did not allow them to spread it anymore. So I recreated the core of its message - with references to solid research.

## Executive Summary

For creative tasks, the quality of performance strongly correlates with intrinsic motivation: being interested in the task itself. This article will only talk about that. The main factors which are commonly associated with intrinsic motivation are:

• Positive verbal feedback, which increases intrinsic motivation.
• Payment independent of performance, which actually has no effect.
• Payment dependent on performance, which reduces the motivation on the long term.
• Negative verbal feedback, which directly reduces intrinsic motivation.
• Threatening someone with punishment, which strongly reduces intrinsic motivation.

To make it short: Anything which diverts the focus from the task at hand towards some external matter (either positive or negative) reduces the intrinsic motivation, and that in turn reduces work performance. If you want to help people perform well, make sure that they don’t have to worry about other stuff besides their work, and give them positive verbal feedback about the work they do.

## Background

Since this claim goes pretty much against the standard ideology of market-trusting economists, I want to back it with solid scientific background. The easiest way to do that is going to Google Scholar and searching for research on motivation and rewards. It gives a meta-analysis of experiments on the effects of extrinsic rewards on intrinsic motivation:

A meta-analytic review of experiments examining the effects of extrinsic rewards on intrinsic motivation. — E.L. Deci, R. Koestner, R.M.
Ryan - Psychological Bulletin, 1999 - psycnet.apa.org

This paper is cited by 2324 other papers Google knows about, which is an indicator of being accepted by the psychological community (except if it should have 2324 rebuttals) - an indicator which even those can understand who are not really versed in that community (for example me).

I dug into the paper to find solid scientific research on the effects of payment on motivation. And that led me to this older paper from Edward L. Deci:

The Effects of Contingent and Noncontingent Rewards and Controls on Intrinsic Motivation — Edward L. Deci, University of Rochester, Organizational Behavior and Human Performance, 1972

Their research question was trying to find out if money paid unconditionally weakens intrinsic motivation like money paid for good performance:

»Two recent papers (Deci, 1971, 1972) have presented evidence that when money was paid to subjects for performing intrinsically motivated activities, and when that money was made contingent on their performance, they were less intrinsically motivated after the experience with money than were subjects who performed the same activity for no pay.«

This is about intrinsic motivation: The kind of motivation which fuels artists and other creative people and allows them to do great deeds. It’s the kind of motivation a company should try to inspire in every employee who does anything remotely creative or complex.

### What reduces intrinsic motivation

There was previous research which showed a reduction of intrinsic motivation due to payment. To make their research solid, the first thing E.L. Deci and his group did was a replication to ensure that the basic theory is correct.

In another experiment using the one-session paradigm, Deci and Cascio (1972) showed that negative feedback resulting from bad performance on an intrinsically motivated activity caused a decrease in intrinsic motivation.
In my words: Tell people that they do bad work and you reduce their motivation - not surprisingly. “Your performance sucks” → intrinsic motivation decreases.

Further, Deci and Cascio (1972) reported that when subjects were threatened with punishment for poor performance, their intrinsic motivation also decreased.

Threaten people, and their motivation gets reduced, too. “If you fail, you’re fired” → intrinsic motivation decreases.

[…] Deci (1972) replicated the finding that subjects who were paid one dollar per puzzle solved showed a decrease in intrinsic motivation.

Pay people for good performance and you reduce their motivation. “For each housing loan you sell, you get 20€” → intrinsic motivation decreases.

This is the result which actually marks all the performance-based payment schemes which are so popular with the administration folks as utter nonsense - at least for creative and complex jobs. For those jobs your employees enjoy doing, bonuses actually decrease performance on the long run.

These are the kinds of jobs in which people can work overnight and concentrated for hours and lose track of time while they work on systems which are too complex for most people to even pretend to understand. The kind of jobs where some people get into the flow and do more work in an hour than other people do in a week. Jobs in science, in programming and actually in any other topic in which you do not just follow prescribed rules but actually solve problems. The kind of jobs which is more and more common, because jobs with prescribed rules can just as well be done by machines. And social jobs, the other kind of jobs for which you need people, because people doing social jobs work with people, and anything involving people is a complex problem by definition. At least if you want really good results.
Or, seen from a different perspective: If two companies compete in a segment of the market and one has motivated people and the other doesn’t - and other factors are mostly equal - then the company with motivated people wins. So you want motivated people. And in creative, complex or social jobs, you want them intrinsically motivated. You want them to do a good job for the sake of doing a good job. Which means, you want to avoid

• giving them negative feedback,
• threatening them and
• paying them based on their performance.

With that in mind, let us go on: How can we actually motivate people?

### What enhances motivation

To answer that, let’s listen to research again:

On the other hand, Deci (1971, 1972) has reported that verbal reinforcements do not decrease intrinsic motivation; in fact, they appear to enhance it.

So, to increase motivation, tell people that they do good work. „I like that plan! Go for it!“ → intrinsic motivation increases.

That’s all you can do. Tell them that they do good work. Encourage them.

But isn’t there a paradox? How can we actually employ people, if paying them money for good work decreases their motivation?

### How to pay motivated people?

That’s the real question the paper from Edward L. Deci tackled:

While extrinsic rewards such as money can certainly motivate behavior, they appear to be doing so at the expense of intrinsic motivation. […but…] when payments were not contingent upon performance, intrinsic motivation did not decrease.

So the answer is pretty simple: Just pay them money independent of how well they do. „You get 3000€ a month. Flat. That’s enough to lead a good life.“1 → intrinsic motivation stays stable.

The real trick is to just give them money, independent of how well they do. If motivated people work for you, ensure that they do not have to worry about money. Do all you can to take money concerns off their mind. And tell them what they do well.
At least that’s what you should do if you want to base your actions on research instead of on the broken intuition of people who get paid for their performance in convincing you of their ideology (and consequently often do so in blatant, uncreative ways). If you do that already: That’s great! Likely it’s really cool to work with you.

## Illustration

A very illustrative experiment on losing intrinsic interest due to external reward was done by Lepper, Mark R.; Greene, David; Nisbett, Richard E.2 They observed three groups of pre-school children. The first group was told that they would get a “certificate with a gold seal and ribbon” if they would draw something. The second group wasn’t told that they would get a reward, but got it after drawing, too. The third group did not get any reward and did not expect any.

Before the start of the experiment, their intrinsic interest in drawing was measured by observing how much time they spent drawing when they had the chance. One to two weeks after the experiment, the intrinsic interest of the children was measured again by observing them through a one-way mirror.

In that subsequent measurement, the children who had been told that they would get the reward for drawing (and had gotten the reward) used half as much time for drawing as those who had not gotten any reward or those who had gotten an unexpected reward.

And even when the pictures which they had drawn during the initial test were compared, the pictures from the group who expected a reward were of significantly lower quality than the pictures from the two other groups: The difference between expected extrinsic reward and no reward was 2.18 vs. 2.69 on an independently judged quality scale between 1 (very poor) and 5 (very good).

So offering children a reward for drawing not only reduces their intrinsic interest in drawing, but also reduces the quality of the pictures they draw. And this is perfectly in line with the results from the paper from Edward L.
Deci on intrinsic motivation of adults.

## Summary

To increase the motivation of people, DO

• pay them a good monthly income, so they don’t have to worry about money, and
• give them positive verbal feedback on the things they do well.

And should you happen to be interested in helping a free software project with money, just employ some of the people hacking on the project - and give them a good, longterm contract with enough freedom of choice, so they don’t have to worry about money or what they are allowed to do, but can instead focus on working to make the project succeed - like they did before you employed them, but now with much more time at their disposal. And, as with anything else, give them positive feedback on the things they do well.

If you want to help people perform well, make sure that they don’t have to worry about other stuff besides their work and give them positive verbal feedback about the work they do.

1. Actually the ideal yearly income would be 60,000€, but only few people earn that much. Which might be a societal problem in itself which limits the performance we could have as a society. If that’s something you want to tackle: Head into politics and change the world - or found a company and do it right from the start. There’s a lot which even a small group of motivated people can achieve.

2. Undermining children's intrinsic interest with extrinsic reward by Mark R. Lepper and David Greene from Stanford University and Richard E. Nisbett from the University of Michigan, Journal of Personality and Social Psychology, Vol 28(1), Oct 1973, 129-137. doi: 10.1037/h0035519

# No, it ain’t “forever” (GNU Hurd code_swarm from 1991 to 2010)

If the video doesn’t show, you can also download it as Ogg Theora & Vorbis “.ogv” or find it on youtube.

This video shows the activity of the Hurd coders and answers some common questions about the Hurd, including “How stagnated is Hurd compared to Duke Nukem Forever?”.
It is created directly from commits to Hurd repositories, processed by community codeswarm. Every shimmering dot is a change to a file. These dots align around the coder who made the change. The questions and answers are quotes from today's IRC discussions (2010-07-13) in #hurd at irc.freenode.net.

You can clearly see the influx of developers in 2003/2004 and then again a strengthening of the development in 2008 with fewer participants but higher activity than 2003 (though a part of that change likely comes from the switch to git with generally more but smaller commits).

I hope you enjoyed the high-level look at the activity of the Hurd project!

PS: The last part is only the information title with music, to honor Sean Wright for allowing everyone to use and adapt his album Enchanted.

# Some technical advantages of the Hurd

→ An answer to just accept it, truth hurds, where Flameeyes told his reasons for not liking the Hurd and asked for technical advantages (and claimed that the Hurd does not offer a concept which got incorporated into other free software, contributing to other projects).

Note: These are the points I see. Very likely there are more technical advantages which I don’t see well enough to explain them. Please feel free to point them out.

Information for potential testers: The Hurd is already usable, but it is not yet in production state. It progressed a lot during the recent years, though. Have a look at the status report if you want to see if it’s already interesting for you.

Thanks for explaining your reasons. As answer:

First off: FUSE is essentially an implementation of parts of the translator system (which is the main building block of the Hurd) for Linux, and NetBSD recently got a port of the translator system of the Hurd. That’s the main contribution to other projects that I see.
On the bare technical side, the translator-based filesystem stands out: The filesystem allows for making arbitrary programs responsible for displaying a given node (which can also be a directory tree) and for starting these programs on demand. To make them persistent over reboots, you only need to add them to the filesystem node (for which you need the right to change that node). Also you can start translators on any node without having to change the node itself, but then they are not persistent and only affect your view of the filesystem without affecting other users. These translators are called active, and you don’t need write permissions on a node to add them.

The filesystem implements stuff like Gnome VFS (gvfs) and KDE network transparency on the filesystem level, so those are available for all programs. And you can add a new filesystem as a simple user, just as if you’d write into a file “instead of this node, show the filesystem you get by interpreting file X with filesystem Y” (this is what you actually do when setting a translator but not yet starting it (passive translator)).

One practical advantage of this is that the following works:

settrans -a ftp\: /hurd/hostmux /hurd/ftpfs /
dpkg -i ftp://ftp.gnu.org/path/to/*.deb

This installs all deb-packages in the folder path/to on the FTP server. The shell sees normal directories (beginning with the directory “ftp:”), so shell expressions just work.

You could even define a Gentoo mirror translator (settrans mirror\: /hurd/gentoo-mirror), so every program could just access mirror://gentoo/portage-2.2.0_alpha31.tar.bz2 and get the data from a mirror automatically:

wget mirror://gentoo/portage-2.2.0_alpha31.tar.bz2

Or you could add a unionmount translator to root which makes writes happen at another place. Every user is able to make a readonly system readwrite by just specifying where the writes should go. But the writes only affect his view of the filesystem.
Starting a network process is done by a translator, too: The first time something accesses the network card, the network translator starts up and actually provides the device. This replaces most initscripts in the Hurd: Just add a translator to a node, and the service will persist over restarts.

It’s a surprisingly simple concept, which reduces the complexity of many basic tasks needed for desktop systems. And at its most basic level, the Hurd is a set of protocols for messages which allow using the filesystem to coordinate and connect processes (along with helper libraries to make that easy).

Also it adds POSIX compatibility to Mach (while still providing access to the capabilities-based access rights underneath, if you need them). You can give a process permissions at runtime and take them away at will. For example you can start all programs without permission to use the network (or write to any file) and add the permissions when you need them:

groups # → root
addauth -p $(ps -L) -g mail
groups # → root mail


And then there are subhurds (essentially lightweight virtualization which allows cutting off processes from other processes without the overhead of creating a virtual machine for each process). But that’s an entire post of its own…

And the fact that a translator is just a simple standalone program means that these can be shared and tested much more easily, opening up completely new options for lowlevel hacking, because it massively lowers the barrier of entry.

And then there is the possibility of subdividing memory management and using different microkernels (by porting the Hurd layer, as partly done in the NetBSD port), but that is purely academic right now (search for Viengoos to see what it's about).

So in short: The translator system in the Hurd is a simple concept which makes many tasks easy, which are complex with Linux (like init, network transparency, new filesystems, …). Additionally there are capabilities, subhurds and (academic) memory management.

Best wishes,
Arne

PS: I decided to read flameeyes’ post as “please give me technical reasons to dispel my emotional impression”.

PPS: If you liked this post, it would be cool if you’d flattr it:

PPPS: Additional information can be found in Gaël Le Mignot’s talk notes, in niches for the Hurd and the GNU Hurd documentation pages.

P4S: This post is also available in the Hurd Staging Wiki.

# (A)GPL as hack on a Python-powered copyright system

AGPL is a hack on copyright, so it has to use copyright, else it would not compile/run.

All the GPL licenses are a hack on copyright. They insert a piece of legal code into copyright law to force it to turn around on itself.

You run that on the copyright system, and it gives you code which can’t be made unfree.

To be able to do that, it has to be written in copyright language (else it could not be interpreted).

my_code = "<your code>"

def AGPL ( code ):
"""
>>> is_free ( AGPL ( code ) )
True
"""
return eval (
transform_to_free ( code ) )

copyright ( AGPL ( my_code ) )


You pass “AGPL ( code )” to the copyright system, and it ensures the freedom of the code.

The transformation means that I am allowed to change your code, as long as I keep the transformation, because copyright law sees only the version transformed by AGPL, and that stays valid.

Naturally both AGPL definition and the code transformed to free © must be ©-compatible. And that means: All rights reserved. Else I could go in and say: I just redefine AGPL and make your code unfree without ever touching the code itself (which is initially owned by you by the laws of ©):

def AGPL ( code ):
"""
>>> is_free ( AGPL ( code ) )
False
"""
return eval (
transform_to_mine ( code ) )


In this Python-powered copyright-system, I could just define this after your definition but before your call to copyright(), and all calls to AGPL ( code ) would suddenly return code owned by me.

Or you would have to include another way of defining which exact AGPL you mean. Something like “AGPL, but only the versions with the sha1 hashes AAAA BBBB and AABA”. cc tries to use links for that, but what do you do if someone changes the DNS resolution to point creativecommons.org to allmine.com? Whose DNS server is right, then - legally speaking?
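The hash-pinning idea from the paragraph above can be sketched in the document's own Python metaphor. Everything here is a hypothetical illustration (the variable names, the helper function, and the shortened stand-in for the license text are my own), not a real legal mechanism:

```python
import hashlib

# Stand-in for the full license text; in the metaphor you would hash
# the complete, exact text of the AGPL version you mean.
agpl_text = "GNU AFFERO GENERAL PUBLIC LICENSE Version 3 ..."

# Pin the exact text by its sha1 fingerprint instead of by name or link.
fingerprint = hashlib.sha1(agpl_text.encode("utf-8")).hexdigest()

def is_the_license_i_meant(text, known_hash):
    """True only if the text hashes to the pinned fingerprint."""
    return hashlib.sha1(text.encode("utf-8")).hexdigest() == known_hash

print(is_the_license_i_meant(agpl_text, fingerprint))         # → True
print(is_the_license_i_meant("redefined AGPL", fingerprint))  # → False
```

A redefined “AGPL” would hash to a different value, so the pin would catch the swap without needing to trust a name or a DNS-resolved link.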

In short: AGPL is a hack on copyright, so it has to use copyright, else it would not compile/run.

# 7,26€ through Flattr last month

Last month I earned 7,26€ through my Flattr account (Flattr is a voluntary payment service where people can make micropayments if they like something - after enjoying it). The flattrs came in through just 4 items:

Thank you very much for your flattrs, dear supporters1! Thanks to you I could pay most of my server cost this month via the money from flattr - and that’s great!2

1. This month I was flattred by eileentso, esocom, Elleo and a user who wanted to stay anonymous. Thank you again!

2. And being able to pay the server might become much more important in the following months, as soon as my wife’s parental money runs out and I need to finance the family from a (50%) PhD-salary for a year…

# A simple solution to the dining philosophers problem

### The problem

5 Philosophers do nothing but eat and think.

They have a table with 5 chairs, 5 plates and 5 forks.

Each of them eats with two forks.

Ensure that none of them starves.

### The solution

First I teach them to always take the left fork first.

Then I smash one of their chairs.

### Explanation

Since they can't repair the chair (they think, but they don't build), there are only 4 places left, and so there is one leftover fork which gets passed on once a philosopher finishes eating.

Inspired by William Stallings' Operating Systems: "Use a servant who lets only 4 dine at the same time"

Naturally now they have to either change places or move chairs, so they might still need a servant :)
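The scheme above can be simulated in a few lines of Python. This is a minimal sketch (the names and the round count are my own, not part of the original puzzle): the four remaining chairs become a semaphore, and "left fork first" becomes a fixed lock-acquisition order:

```python
import threading

N = 5  # philosophers and forks
forks = [threading.Lock() for _ in range(N)]
seats = threading.Semaphore(N - 1)  # the smashed chair: only 4 may sit
meals = [0] * N

def philosopher(i, rounds=10):
    for _ in range(rounds):
        with seats:                        # take one of the 4 remaining chairs
            left, right = forks[i], forks[(i + 1) % N]
            with left:                     # always take the left fork first
                with right:
                    meals[i] += 1          # eat

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)  # → [10, 10, 10, 10, 10]
```

With at most 4 philosophers holding forks among 5 forks, at least one of them can always pick up both of theirs, so the simulation terminates without deadlock and nobody starves.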

# Censorship in the Streets — it’s idiocy everywhere

A man in the streets faces a knife.
Two policemen are there it once. They raise a sign:

“Illegal Scene! Noone may watch this!”


The man gets robbed and stabbed and bleeds to death.
The police had to hold the sign.

Welcome to Europe, citizen. Censorship is beautiful.

→ Courtesy to Censilia, who wants censorship in the EU after it failed in Germany. You might also be interested in 11 more reasons why censorship is useless and harmful.

PS: This poem is free licensed. Please feel free to use it any way you like, as long as you provide a backlink.

# def censor_the_net_2012()

def censor_the_net():
    try: SOPA() # see Stop Online Piracy Act
    except Protest: # see sopastrike.com
        try: PIPA() # see PROTECT IP Act
        except Protest: # see weak links
            try: OPEN() # see red herring
            except Protest: # see resignation⁽¹⁾, court, vote anyway and advise against
                try: CISPA() # see Stop the Online Spying Bill
                except Protest: # see Dangers
                    do_it_anyway() # destroy free speech and computers (english video).

while wealth_breeds_wealth and wealth_gives_power: # (german text and english video)
    censor_the_net() # see wealth vs. democracy (german)


This code is valid Python.
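You can check that claim mechanically: Python's ast.parse raises a SyntaxError for anything that is not syntactically valid, and undefined names like SOPA or Protest only matter at runtime, not at parse time. A shortened version of the snippet as illustration:

```python
import ast

# Shortened form of the censor_the_net() snippet; the names need not
# exist for the syntax check, only the structure must be valid Python.
snippet = '''
def censor_the_net():
    try: SOPA()
    except Protest:
        try: PIPA()
        except Protest:
            do_it_anyway()

while wealth_breeds_wealth and wealth_gives_power:
    censor_the_net()
'''

tree = ast.parse(snippet)          # would raise SyntaxError if invalid
print(type(tree).__name__)         # → Module
```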

Feel free to use and change this snippet, as long as you include a reference to this page (http://draketo.de/node/475 or http://draketo.de/light/english/politics/def-censor-the-net-2012) or my name (Arne Babenhauserheide).

Here’s the linked english video, embedded (external, not GPL!):

# Going from a simple Makefile to Autotools

## 1 Intro

I recently started looking into Autotools, to make it easier to run my code on multiple platforms.

Naturally you can use cmake or scons or waf or ninja or tup, all of which are interesting in their own respect. But none of them has seen the amount of testing which went into autotools, and none of them has the amount of tweaks needed to support about every system under the sun. And I recently found pyconfigure, which allows using autotools with Python and offers detection of library features.

I had already used Makefiles for easily storing the build information of anything from python projects (python setup.py build) to my PhD with all the required graphs.

But I wanted to test what autotools have to offer. And I found no simple guide which showed me how to migrate from a Makefile to autotools - and what I could gain through that.

So I decided to write it.

## 2 My Makefile

The starting point is the Makefile I use for building my PhD. That’s pretty generic and just uses the most basic features of make.

It creates plots from data and then builds a PDF from an org-mode file.

all: doktorarbeit.pdf sink.pdf

sink.pdf : sink.tex images/comp-t3-s07-tem-boas.png images/comp-t3-s07-tem-bona.png images/bona-marble.png images/boas-marble.png
pdflatex sink.tex
rm -f  *_flymake* flymake* *.log *.out *.toc *.aux *.snm *.nav *.vrb # kill litter

comp-t3-s07-tem-boas.png comp-t3-s07-tem-bona.png : nee-comp.pyx nee-comp.txt
pyxplot nee-comp.pyx

doktorarbeit.pdf : doktorarbeit.org
emacs --batch --visit "doktorarbeit.org" --funcall org-export-as-pdf


If you do not know it yet: A basic makefile has really simple syntax:

# comments start with #
thing : required source files # separated by spaces
build command with the files
# ^ this is a TAB.


## 3 Feature Equality

The first step is simple: How can I replicate with autotools what I did with the plain Makefile?

For that I create the files configure.ac and Makefile.am. The basic Makefile.am is simply my Makefile without any changes.

The configure.ac sets the project name, inits automake and tells autoreconf to generate a Makefile.

dnl run autoreconf -i to generate a configure script.
dnl Then run ./configure to generate a Makefile.
dnl Finally run make to generate the project.

AC_INIT([Doktorarbeit Inverse GHG], [0.1], [arne.babenhauserheide@kit.edu])
dnl we use the build type foreign here instead of gnu because I do not have a NEWS file and similar, yet.
AM_INIT_AUTOMAKE([foreign])
AC_CONFIG_FILES([Makefile])
AC_OUTPUT


Now, if I run autoreconf -i it generates a Makefile for me. Nothing fancy here: The Makefile just does what my old Makefile did.

But it is much bigger, offers real --help output and can generate a distribution - which does not work yet, because the source files are missing from it. But make distcheck clearly tells me so.

## 4 make dist: distributing the project

Since make dist does not work yet, let’s change that.

… easier said than done. It took me the better part of a day to figure out how to make it work. The problems:

• I have to explicitly give automake the list of sources so it can copy them to the distributed package.
• distcheck uses a separate build dir. Yes, this is the clean way, but it needs some hacking to get everything to work.
• I use pyxplot for generating some plots. Pyxplot does not have a way (that I know of) to search for datafiles in a different folder, so I have to copy the files to the build dir and remove them after the build - but only if I use a separate build dir.
• pdflatex can’t find included images. I have to adapt the TEXINPUT environment variable to give it the srcdir as additional search path.
• Some of my commands litter the build directory with temporary or intermediate files. I have to clean them up.

So, after much haggling with autotools, I have a working make distcheck:

pdf_DATA = sink.pdf doktorarbeit.pdf

sink = sink.tex
pkgdata_DATA = images/comp-t3-s07-tem-boas.png images/comp-t3-s07-tem-bona.png
dist_pkgdata_DATA = images/bona-marble.png images/boas-marble.png

plotdir = .
dist_plot_DATA = nee-comp.pyx nee-comp.txt

doktorarbeit = doktorarbeit.org

EXTRA_DIST = ${sink} ${dist_pkgdata_DATA} ${doktorarbeit}

MOSTLYCLEANFILES = \#* *~ *.bak # kill editor backups
CLEANFILES = ${pdf_DATA}
DISTCLEANFILES = ${pkgdata_DATA}

sink.pdf : ${sink} ${pkgdata_DATA} ${dist_pkgdata_DATA}
	TEXINPUTS=${TEXINPUTS}:$(srcdir)/:$(srcdir)/images// pdflatex $<
	rm -f *_flymake* flymake* *.log *.out *.toc *.aux *.snm *.nav *.vrb # kill litter

${pkgdata_DATA} : ${dist_plot_DATA}
	$(foreach i,$^,if test "$(i)" != "$(notdir $(i))"; then cp -u "$(i)" "$(notdir $(i))"; fi;)
	${MKDIR_P} images
	pyxplot $<
	$(foreach i,$^,if test "$(i)" != "$(notdir $(i))"; then rm -f "$(notdir $(i))"; fi;)

doktorarbeit.pdf : ${doktorarbeit}
	if test "$<" != "$(notdir $<)"; then cp -u "$<" "$(notdir $<)"; fi
	emacs --batch --visit "$(notdir $<)" --funcall org-export-as-pdf
	if test "$<" != "$(notdir $<)"; then rm -f "$(notdir $<)"; rm -f $(basename $(notdir $<)).tex $(basename $(notdir $<)).tex~; else rm -f $(basename $<).tex $(basename $<).tex~; fi

You might recognize that this is not the simple Makefile anymore. It is now a setup which defines files for distribution and has custom rules for preparing script runs and for cleanup. But I can now make a fully working distribution, so when I want to publish my PhD thesis, I can simply add the generated release tarball. Since I work in a Mercurial repository, I would more likely just include the repository itself, but there might be reasons for leaving out the history - and be it only that the history might grow quite big.

An advantage is that in the process of preparing the dist, my automake file got cleanly separated into a section defining files and dependencies and one defining build rules. I also now understand where newer build tools like scons got their inspiration for the abstractions they use.

I should note, however, that if I had been building a software project in one of the languages supported by automake (C, C++, Python and quite a few others), I would not have needed to specify the build rules myself. And being able to freely mix dependency declarations in automake style with Makefile rules gives a lot of flexibility which I missed in scons.

## 5 Finding programs

Now I can build and distribute my project, but I cannot yet make sure that the programs I need for building actually exist. And that is finally something which can really help my build, because it gives clear error messages when something is missing, and it allows users to specify which of these programs to use via the configure script. For example I could now build 5 different versions of Emacs and try the build with each of them.
Also I added cross-compilation support, though that is a bit over the top for simple PDF creation :)

First off I edited my configure.ac to check for the tools:

dnl run autoreconf -i to generate a configure script.
dnl Then run ./configure to generate a Makefile.
dnl Finally run make to generate the project.

AC_INIT([Doktorarbeit Inverse GHG], [0.1], [arne.babenhauserheide@kit.edu])

# Check for programs I need for my build
AC_CANONICAL_TARGET
AC_ARG_VAR([emacs], [How to call Emacs.])
AC_CHECK_TARGET_TOOL([emacs], [emacs], [no])
AC_ARG_VAR([pyxplot], [How to call the Pyxplot plotting tool.])
AC_CHECK_TARGET_TOOL([pyxplot], [pyxplot], [no])
AC_ARG_VAR([pdflatex], [How to call pdflatex.])
AC_CHECK_TARGET_TOOL([pdflatex], [pdflatex], [no])
AS_IF([test "x$pdflatex" = "xno"], [AC_MSG_ERROR([cannot find pdflatex.])])
AS_IF([test "x$emacs" = "xno"], [AC_MSG_ERROR([cannot find Emacs.])])
AS_IF([test "x$pyxplot" = "xno"], [AC_MSG_ERROR([cannot find pyxplot.])])
# Run automake
AM_INIT_AUTOMAKE([foreign])
AM_MAINTAINER_MODE([enable])
AC_CONFIG_FILES([Makefile])
AC_OUTPUT


And then I used the created variables in the Makefile.am - note the @-characters around the program names:

pdf_DATA = sink.pdf doktorarbeit.pdf

sink = sink.tex
pkgdata_DATA = images/comp-t3-s07-tem-boas.png images/comp-t3-s07-tem-bona.png
dist_pkgdata_DATA = images/bona-marble.png images/boas-marble.png

plotdir = .
dist_plot_DATA = nee-comp.pyx nee-comp.txt

doktorarbeit = doktorarbeit.org

EXTRA_DIST = ${sink} ${dist_pkgdata_DATA} ${doktorarbeit}

MOSTLYCLEANFILES = \#* *~ *.bak # kill editor backups
CLEANFILES = ${pdf_DATA}
DISTCLEANFILES = ${pkgdata_DATA}

sink.pdf : ${sink} ${pkgdata_DATA} ${dist_pkgdata_DATA}
	TEXINPUTS=${TEXINPUTS}:$(srcdir)/:$(srcdir)/images// @pdflatex@ $<
	rm -f *_flymake* flymake* *.log *.out *.toc *.aux *.snm *.nav *.vrb # kill litter

${pkgdata_DATA} : ${dist_plot_DATA}
	$(foreach i,$^,if test "$(i)" != "$(notdir $(i))"; then cp -u "$(i)" "$(notdir $(i))"; fi;)
	${MKDIR_P} images
	@pyxplot@ $<
	$(foreach i,$^,if test "$(i)" != "$(notdir $(i))"; then rm -f "$(notdir $(i))"; fi;)

doktorarbeit.pdf : ${doktorarbeit}
	if test "$<" != "$(notdir $<)"; then cp -u "$<" "$(notdir $<)"; fi
	@emacs@ --batch --visit "$(notdir $<)" --funcall org-export-as-pdf
pyxplot = 'pyxplot $SOURCE'
# pdflatex is quite dirty. I directly clean up after it with rm.
pdflatex = 'pdflatex $SOURCE -o $TARGET; rm -f *_flymake* flymake* *.log *.out *.toc *.aux *.snm *.nav *.vrb'
# build the PhD thesis from emacs org-mode.
Command("doktorarbeit.pdf", "doktorarbeit.org", orgexportpdf)
# create plots
Command(["images/comp-t3-s07-tem-boas.png", "images/comp-t3-s07-tem-bona.png"],
        ["nee-comp.pyx", "nee-comp.txt"], pyxplot)
# build my sink.pdf
Command("sink.pdf",
        ["sink.tex", "images/comp-t3-s07-tem-boas.png", "images/comp-t3-s07-tem-bona.png",
         "images/bona-marble.png", "images/boas-marble.png"], pdflatex)
# My editors leave tempfiles around. I want them gone after a build clean. This is not yet supported!
tempfiles = Glob('*~') + Glob('#*#') + Glob('*.bak')
# using this here would run the cleaning on every run.
#Command("clean", [], Delete(tempfiles))

If you want to integrate building with scons into a Makefile, the following lines allow you to run scons with make sconsrun. You might have to also mark sconsrun as .PHONY.

sconsrun : scons
	python scons/bootstrap.py -Q

scons :
	hg clone https://bitbucket.org/ArneBab/scons

Here you can see part of the beauty of autotools: you can just add this to your Makefile.am instead of the Makefile and it will work inside the full autotools project (though without the dist-integration). So autotools is a real superset of simple Makefiles.

## 8 Notes

If the org-mode export keeps pestering you about selecting a TeX-master every time you build the PDF, add the following to your org-mode file:

#+BEGIN_LaTeX
%%% Local Variables:
%%% TeX-master: t
%%% End:
#+END_LaTeX

# How to make companies act ethically

→ comment on Slashdot concerning Unexpected methods to promote freedom?

Was it really Apple who ended DRM? Would they have done so without the protests and evangelizing against DRM? Without protesters in front of Apple Stores? And without the many people telling their friends to just not accept DRM?
That “preaching” created a situation where Apple could reap monetary gain from doing the right thing. You see how they act when the stakes are different.

What you can do to make companies act ethically is to create a situation where they can make more money by working ethically than by ripping you off. The ways to do that are:

1. Laws (breaking them costs money when you get caught),
2. Taxes on doing the wrong thing (i.e. pollution),
3. Offering your work in ways which make it easier for people to make money ethically than unethically (that’s what copyleft licensing does),
4. Trying to convince people to do 3,
5. Trying to convince people to shun products which are created unethically (that’s what you call preaching),
6. Only paying for products which were produced ethically.

RMS does 3, 4, 5 and 6, so he’s pretty much into gaming the market - and “preaching” is only one tool in his box. Though what he does is more convincing than preaching: He gives us reasons why unfree software is bad - and the mental tools to resist the preaching from the other side (for example via analyses of speech-tricks, like calling state-granted monopolies “property”).

# identi.ca Group: Out of Group (!oog)

## What !oog is

The Out of Group group is a way to request taking an overboarding discussion out of group (so you don't spam all the people who are in the group where the discussion started, but who simply want news).

## Motto

Please discuss out of group. You can wrap up the discussion afterwards (link to the context) and add a group tag then.

## How To

To request taking a discussion out of group, simply join !oog, add !oog to your message and then leave the group again (except if you want to see other !oog requests). For example you can use the following to request moving !oog:

Please let us continue the discussion !oog and wrap it up afterwards. It disturbs others in here. !group1 !group2

## Background

This is a reaction to a discussion about the use of group-tags in discussions.
## Archived discussion

Available under the Creative Commons Attribution 3.0 license.

• rysiek @teddks: stop using group tags, please. everybody on !ubuntu and !linux have heard enough, really.
• teddks @rysiek I respond to in-group messages in-group. If you don't want me to use them, don't use them to me. !ubuntu !linux
• arnebab @rysiek @teddks please leave the group tags out, both of you.
• teddks @arnebab My policy for group-posting rebuttals was posted a bit ago.
• arnebab @teddks please discuss out of the group. You can wrap up the discussion afterwards and add a group tag then. You're one post from a block.
• arnebab @teddks By not discussing in the group you make the others' in-group posts look silly. Otherwise you just look silly yourself.
• teddks @arnebab Wait, what? I only discuss in-group if the previous post was in-group.
• arnebab @teddks That principle doesn't scale. If we all used it groups would be useless. A wrapup post can get people to read all http://is.gd/6CoYI
• teddks @arnebab I might start doing that for oog discussions, but I'm not going to deny myself the same forum my opponents have.
• arnebab @teddks they are not opponents but discussion partners. And they look silly if they stay ingroup while you post oog. Just ask them to go oog
• arnebab @teddks and wrap it up later. If they insist on staying ingroup, just post one ingroup request to come oog and let peer pressure do the rest
• arnebab @teddks they broadcast to people who aren't interested and will block them.
• arnebab @teddks sorry for the phraselike answer. 140 chars aren't ideal for discussing more complex topics...
• teddks @arnebab I understand. I kind of regret how identi.ca has taken the place of IRC for a lot of things.
• teddks @arnebab That's irrelevant; in-group they get to broadcast their arguments and their views. I'm not going to deny myself that.
• arnebab @teddks That means I have to block you when you post your next ingroup broadcast. You lose all readers that way.
• teddks @arnebab Now you are inviting spam and personal attacks against me in !ubuntu. Is that pursuant to the Code of Conduct?
• arnebab @teddks You do know that people can see the context with one click, do you?
• teddks @arnebab I don't see what you're implying - that I should depend on clickthroughs to have my arguments be heard?
• arnebab @teddks Since you kept spamming the !ubuntu and !linux group and explicitly said you won't stop ( http://is.gd/6Cucx ) I blocked you.
• teddks @arnebab I can't respond fully now, but I will later. I'm sorry that the #Ubuntu group's de facto policy is now one of censorship.

Note: arnebab was not a member of the ubuntu group at that time. The block was/is a purely personal one: Nothing teddks writes will appear on arnebab's timeline until he unblocks teddks.

# Internet, community cloud foo and control of my own data

## Why?

What I miss in the internet is the notion of being able to control what my apps access for data. Why can’t a chat application just connect to a neighborhood- or community-server, and why can’t the activity-stream come from the people I know — and query only their systems, like jabber does?

Almost all geolocation services should be implementable over direct friend-to-friend connections like jabber, and I don’t really see why my local identi.ca program can’t also get the news from my local jabber contacts. Or why I can’t set a local info-provider as geolocation source and have a “phone-book” of info-providers in each town.

And when it can do that, why can’t I have a general info-server which serves as synchronization and aggregation service for any of my devices, so all my programs on any device know which sources to use? And why can’t I tell that server to allow my friends to access a subset of my data — selected by me?

Sadly I assume that the answer is “power”. Google and Apple don’t want to lose their control over synchronization and sharing.
Otherwise most of the control and centralization (= moneymaking monopoly) of the internet would fade away.

## What?

For example I’d like to be able to select whose information I get, and I’d like to be able to also get the information my friends and their friends get. Without anyone outside knowing that I access that data (because I ask them directly). And ideally also without me knowing from which of their friends the data originates, but still being able to block those individually.

Then I could allow certain product information providers (= good advertisers) inside my network, so I get news about stuff I might like to spend money on. And automatically get information about the info-providers from my friends — or my community. And all that without direct dependency on a single company or system.

It would make it infeasible to monopolize the services without making everyone trust you — and having to make sure most people trust you creates a reverse-dependency which could help to keep the information-providers honest.

## How?

I think one key to that is to make such services less like full storage and more like update-collecting and synchronization services. There’s no reason why a synchronization server should keep any data I already pulled to all my devices.

This would be similar to using a Mercurial push-cache of sorts: When I push data to a service, it just stores a bundle against the revision of the data on my least up-to-date device. All my devices can access that bundle, and when all are up to at least a certain state, the now useless data gets stripped out and only the new data remains.

Not yet pulled information could be stored as snapshots, until the first of my devices pulls it. Then it could get replaced by synchronization data — a compressed update-bundle. That would also make sure that incoming data has to be integrated and parsed only once.

Maybe Akonadi (from KDE) can someday accomplish something like that.
PS: Originally this started as a comment to The state of the internet operating system by O’Reilly.

# Neither Humble nor Indie Bundle

Comment to New Humble Bundle Is Windows Only, DRM Games.

The new Humble Indie Bundle is no longer free, indie, cross-platform or user-respecting.

When the first bundle had a huge boost in last-minute sales after the devs offered to free the source of 4 of the 5 games, I had hoped they would keep that. I was one of those who paid when they offered to free the games, and I’m pretty sure that offer gained them a huge boost in people who knew about the Humble Indie Bundle.

But when the second bundle did not offer freeing the source, I did not pay. Unfree games aren’t worth much to me, and I feared they would go further down that track.

Now Steam comes to GNU/Linux, so being cross-platform isn’t unique to the Humble Indie Bundle anymore. And they dropped cross-platform support and added DRM. They replaced fans with short-term cash-cows who will happily switch to another project without second thoughts. Somehow I saw that coming…

Well, they sell their brand while it still holds, but by doing that they burn the ones who brought them where they are today. Never put effort into a project where you have to trust the creator not to misuse it. Free copyleft licenses are a safeguard for contributors - not only the coders, but also for those who promote the project.¹

1. That’s one of the reasons why I put the 1w6 roleplaying game completely under the GPL² and why we are developing most of the stuff we do in a decentralized version tracking system. It makes it so easy for people to take over in case I should betray them that the benefit I could get from betrayal is small enough that I hope I can withstand it in the long term.

2. 1w6 was freed completely in February 2009 by putting it under GPLv3. Before that it used a custom license, which was free but incompatible with other free works.
# Ogg Theora and h.264 - which video codec as standard for internet-video?

Links:

- Video encoder comparison - a much more thorough comparison than mine

We had a pretty long discussion on identi.ca about Ogg Theora and h.264, and since we lacked a simple comparison method, I hacked up a quick script to test them. It uses frames from Big Buck Bunny and outputs the files bbb.ogg and bbb.264 (license: cc by).

The ogg file looks like this:

The h.264 file looks like this: download

### Results

What you can see by comparing both is that h.264 wins in terms of raw image quality at the same bitrate (single pass).

So why am I still strongly in favor of Ogg Theora? The reason is simple: Due to the licensing costs of h.264 (a few million per year, due from 2015 onwards), making h.264 the standard for internet video would have the effect that only big companies could make a video-enabled browser - or we would get a kind of video tax for free software: if you want to view internet video with free software, you have to pay for the right to use the x264 library (else the developers couldn't cough up the money to pay for the patent license).

And no one but the main developers and huge corporations could distribute the x264 library, because they’d have to pay license fees for that. And no one could hack on the browser or library and distribute the changed version, so the whole idea of free software would be led ad absurdum: it wouldn't matter that all the code is under a free license, since only those with an h.264 patent license could change it.

So this post boils down to a simple message: “Support !theora against h.264 and #flash [as video codec for the web]. Otherwise only big companies will be able to write video browsers - or we get a h.264 tax on !fs”

Theora's raw quality may still be worse, but the license costs and their implications provide very clear reasons for supporting Theora - which in my view are far more important than raw technical differences.
### The test-script

for k in {0..1}
do
    for i in {0..9}
    do
        for j in {0..9}
        do
            wget http://media.xiph.org/BBB/BBB-360-png/big_buck_bunny_00$k$i$j.png
        done
    done
done

 mplayer -vo yuv4mpeg -ao null -nosound mf://*png -mf fps=50 

 theora_encoder_example -z 0 --soft-target -V 400 -o bbb.ogg stream.yuv 

 mencoder stream.yuv -ovc x264 -of rawvideo -o bbb.264 -x264encopts bitrate=400 -aspect 16:9 -nosound -vf scale=640:360,harddup 

# p2p-networks help law enforcement catch hard criminals

Comment to: Local man faces court on child pornography charges by heraldstandard.com

As I see it, the only way the authorities could track him was through his use of p2p-networks.

At the moment, technology makes it relatively easy for the police to track hard criminals in p2p-networks, but it also allows people to commit small infringements rather safely (just like people don't stop at red traffic lights when there is no car in sight).

So I consider the current state quite ideal.

Sadly there's an organisation which drives p2p-networks underground, and which will eventually either cease that action or achieve the "fame" of having been the organisation which was responsible in the end for forcing p2p-networks to evolve into completely anonymous and untraceable networks, where hard crimes can't be tracked anymore.

So this case shows once again that "piracy" shouldn't be attacked, but should instead be allowed and even fostered, because p2p-networks increase social welfare (access to media is improved, while there is no significant damage to sales) and in many cases have even helped law enforcement catch criminals who really do damage (and in this case: did very much damage).

Information about the impact of p2p-networks, based on a study from the University of Chicago:
- http://www.journals.uchicago.edu/JPE/journal/issues/v115n1/31618/31618.h...
- http://www.journals.uchicago.edu/cgi-bin/resolve?JPE31618PDF

# Patent law overrides copyright breaks ownership

Concise and clear.

In patent law, copyright and property there are two pillars: protection and control.

## Protection

• Property: No person shall take that from me.
• Copyright: No person shall have the same without my permission. A monopoly.
• Patent Law: No person shall create something similar without my permission. An even stronger monopoly.

## Control

• Property: I decide what happens with this.
• Copyright: I decide what happens to everything which is the same. Takes another one's property. → a monopoly¹.
• Patent Law: I decide what happens to every similar thing. Takes the copyright and property of others. → An even stronger monopoly¹.

In short: Patent law overrides copyright breaks ownership.

¹: Others may have copyrights and property rights which they can only exercise with my permission. So effectively all their rights belong to me. If you want a longer argument on this, please read Intellectual Property Is Theft.

(translation of Patentrecht bricht Urheberrecht bricht Eigentum)

# Phoronix conclusions distort their results, shown with the example of GCC vs. LLVM/Clang On AMD's FX-8350 Vishera

Phoronix recently did a benchmark of GCC vs. LLVM on AMD hardware. Sadly their conclusion did not fit the data they showed. Actually it misrepresented the data so strongly, that I decided to speak up here instead of having my comments disappear in their forums.

Taking out the OpenMP benchmarks (where GCC naturally won, because LLVM only processes those tests single-threaded) and the build times (which are irrelevant to the speed of the produced binaries), their benchmark had the following result:

LLVM is slower than GCC by:

• 10.2% (HMMer)
• 12.7% (MAFFT)
• 6.8% (BLAKE2)
• 9.1% (HIMENO)
• 42.2% (C-Ray)
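
Averaging the five slowdowns above gives a mean of about 16% - a quick back-of-the-envelope check (my own calculation, not from the Phoronix article):

```python
# Mean slowdown of LLVM vs. GCC across the five non-OpenMP benchmarks,
# using the percentages from the list above.
slowdowns = {
    "HMMer": 10.2,
    "MAFFT": 12.7,
    "BLAKE2": 6.8,
    "HIMENO": 9.1,
    "C-Ray": 42.2,
}

mean = sum(slowdowns.values()) / len(slowdowns)
print(round(mean, 1))  # → 16.2
```

Even leaving out the C-Ray outlier, the remaining four benchmarks still average close to 10% in GCC's favor.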

With these results (which were clearly visible in their result summary on OpenBenchmarking), Michael Larabel from Phoronix concluded:

» The performance of LLVM/Clang 3.3 for most tests is at least comparable to GCC «

Nobu from their Forums supplied a conclusion which represents the data much better:

» GCC is much faster in anything which uses OpenMP, and moderately faster or equal in anything (except compile times) which doesn't [use OpenMP] «

But Michael from Phoronix did not stop at just ignoring the performance difference between GCC and LLVM. He went on to claim that

In a few benchmarks LLVM/Clang is faster, particularly when it comes to build times.

And this is blatant reality-distortion which I am very tempted to ascribe to favoritism. LLVM is not “particularly” faster when it comes to build times.

LLVM on AMD FX-8350 Vishera is faster ONLY when it comes to build times!

This was not the first time that I read data-distorting conclusions on Phoronix - and my complaints about that in their forum did not change their behavior. So I hope that my post here can help make them aware that deliberately distorting test results is unacceptable.

For my work, compiler performance is actually quite important, because I use programs which run for days or weeks, so 10% runtime reduction can mean saving several days - not counting the cost of using up cluster time.
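
To make that concrete (the three-week runtime here is an illustrative number of mine, not from the benchmark):

```python
# Illustrative only: wall-clock time saved by a 10% speedup on a long job.
runtime_days = 21   # assumed example: a three-week cluster run
speedup = 0.10      # the ~10% slowdown range seen in the benchmarks above

saved_days = runtime_days * speedup
print(round(saved_days, 1))  # → 2.1
```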

To fix their blunders, they would have to:

• Avoid benchmarks which only one compiler supports properly (OpenMP).
• Mark the compile-time tests explicitly, so they clearly stand out from the rest, because they measure a completely different parameter than the other tests: compiler runtime vs. performance of the compiled binaries.
• Write conclusions which actually fit their results.

Their current approach gives a distinct disadvantage to GCC (even for the OpenMP tests, because they convey the notion that if LLVM only had OpenMP, it would be better in everything - which as this test shows is simply false), so the compiler-tests from Phoronix work as covert propaganda against GCC, even in tests where GCC flat-out wins. And I already don’t like open propaganda, but when the propaganda gets masked as objective testing, I actually get angry.

I hope my post here can help move them towards doing proper testing again.

PS: I write so strongly here, because I actually like the tests from Phoronix a lot. I think we need rather more than less testing and their testsuite actually seems to do a good job - when given the right parameters - so seeing Phoronix distorting the tests to a point where they become almost useless (except as political tool against GCC) is a huge disappointment to me.

# pyRad - a wheel type command interface for KDE

pyRad is a wheel type command interface for KDE1, designed to appear below your mouse pointer at a gesture.

## Install

### in any distro

• Get Python.
• call easy_install pyRadKDE in any shell.
• Test it by calling pyrad.py.
• This should automatically pull in pyKDE4. If it doesn’t, you need to install that separately.

• For a "live" version, just clone the pyrad Mercurial repo and let KDE run "path/to/repo/pyrad.py" at startup. You can stop a running pyrad via pyrad.py --quit. pyrad.py --help gives usage instructions.

### In Gentoo

• emerge -a kde-misc/pyrad

### In unfree systems (like MacOSX and Windows)

• I have no clue since I don’t use them. You’ll need to find out yourself or install a free system. Examples are Kubuntu for beginners and Gentoo for convenient tinkering. Both run GNU/Linux.

## Setup

• Run /usr/bin/pyrad.py. Then add it as script to your autostart (systemsettings→advanced→autostart). You can now use Alt-F6 and Meta-F6 to call it.

### Mouse gesture (optional)

• Add the mouse gesture in systemsettings (systemsettings→shortcuts) to call D-Bus: Program: org.kde.pyRad ; Object: /MainApplication ; Function: newInstance (you might have to enable gestures in the settings, too - in the shortcuts-window you should find a settings button).

• Alternately set the gesture to call the command dbus-send --type=method_call --dest=org.kde.pyRad /MainApplication org.kde.KUniqueApplication.newInstance.

### Customize the wheel

Customize the menu by editing the file "$HOME/.pyradrc" or middle-clicking (add) and right-clicking (edit) items.

## Usage and screenshots

To call pyRad and see the command wheel, you simply use the gesture or key you assigned. Then you can activate an action with a single left click. Actions can be grouped into folders. To open a folder, you also simply left-click it. Also you can press the keyboard key shown at the beginning of the tooltip to activate an action (hover the mouse over an icon to see the tooltip).

To make the wheel disappear or leave a folder, click the center or hit the key 0. To just make it disappear, hit escape.

For editing an action, just right-click it and you’ll see the edit dialog. Each item has an icon (either an icon name from KDE or the path to an icon) and an action. The action is simply the command you would call in the shell (only simple commands, though, no real shell scripting or globs).

To add a new action, simply middle-click the action before it. The wheel goes clockwise, with the first item at the bottom. To add a new first item, middle-click the center. To add a new folder (or turn an item into a folder), simply click on the folder button, say OK and then click it to add actions in there.

See it in action:

## download and sources

pyRad is available from

PS: The name is a play on ‘python’, ‘Rad’ (German for wheel) and pirate :-)

PPS: KDE, K Desktop Environment and the KDE Logo are trademarks of KDE e.V.

PPPS: License is GPL+ as with almost everything on this site.

# pyRad is now in Gentoo portage!

*happy* My wheel type command interface pyRad just got included in the official Gentoo portage-tree! So now you can install it in Gentoo with a simple emerge kde-misc/pyrad.

Many thanks go to the maintainer Andreas K.
Hüttel (dilfridge), to jokey and Tommy[D] from the Gentoo sunrise project (wiki) for providing their user-overlay and helping users with creating ebuilds, as well as to Arfrever, neurogeek and floppym from the Gentoo Python herd for helping me clean up the ebuild and convert it to EAPI 3!

# Python for beginning programmers

(written on ohloh for Python)

Since we already have two good reviews from experienced programmers, I'll focus on the area I know about: Python as a first language.

My experience:

• I began to get into coding only a short time ago. I already knew about processes in programs, but not how to get them into code.
• I wanted to learn C/C++ and failed at the general structure. After a while I could do it, but it didn't feel right.
• I tried my luck with Java and didn't quite get going.
• Then I tried Python, and got in at once.

Advantages of Python:

• The structure of programs can be understood easily.
• The Python interpreter lets you experiment very quickly.
• You can realize complex programs, but Python also allows for quick and simple scripting.
• Code written by others is extremely readable.
• And coding just flows - almost like natural speaking/thinking.

As a bonus, there is the great open book How to Think Like a Computer Scientist, which teaches Python and is being used for teaching Python and programming at universities.

So I can wholeheartedly recommend Python to beginners in programming, and as the other reviews on Ohloh show, it is also a great language for experienced programmers and seems to be a good language to accompany you in your whole coding life.

PS: Yes, I know about the double meaning of "first language" :)

# Reducing the Python startup time

The Python startup time always nagged me (17-30ms), and I just searched again for a way to reduce it when I found this: The Python-Launcher caches GTK imports and forks new processes to reduce the startup time of Python GUI programs.
Python-launcher does not solve my problem directly, but it points in an interesting direction: If you create a small daemon which you can contact via the shell to fork a new instance, you might be able to get rid of your startup time.

To get an example of the possibilities, download the python-launcher and socat and do the following:

```
PYTHONPATH="../lib.linux-x86_64-2.7/" python python-launcher-daemon &
echo pass > 1
for i in {1..100}; do
    echo 1 | socat STDIN UNIX-CONNECT:/tmp/python-launcher-daemon.socket &
done
```

Todo: Adapt it to a given program and remove the GTK stuff. Note the & at the end: Closing the socket connection seems to be slow, so I just don’t wait for socat to finish. Breaks somewhere over 200 simultaneous connections. Option: Use a datagram socket instead.

The essential trick is to just create a server which opens a socket. Then it reads all the data from the socket. Once it has the data, it forks like the following:

```
pid = os.fork()
if pid:
    return

signal.signal(signal.SIGPIPE, signal.SIG_DFL)
signal.signal(signal.SIGCHLD, signal.SIG_DFL)

glob = dict(__name__="__main__")
print 'launching', program
execfile(program, glob, glob)
raise SystemExit
```

Running a program that way 100 times took just 0.23 seconds for me, so the Python startup time of 17ms got reduced to 2.3ms.

You might have to switch from forking to just executing the code if you want to be even faster and the code snippets are small. For example, when running the same test without the fork and the signals, 100 executions of the same code took just 0.09s, cutting down the startup time to an impressive 0.9ms - with the cost of no longer running in parallel.

(That’s what I also do with emacsclient… My emacs takes ~30s to start (due to excessive use of additional libraries I added), but emacsclient -c shows up almost instantly.)
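The core of that fork trick, stripped of the socket and GTK handling, can be re-sketched in Python 3 like this. This is a minimal sketch for illustration: `launch` and the throwaway demo script are my own names, and it only works where `os.fork` exists (i.e. not on Windows):

```python
import os
import tempfile
import textwrap

def launch(program):
    """Fork; the child runs the given Python file as __main__ and exits,
    while the parent returns immediately with the child's pid."""
    pid = os.fork()
    if pid:
        return pid  # parent: continue at once
    # child: run the script as if it were the main program, then exit
    glob = dict(__name__="__main__")
    with open(program) as f:
        code = compile(f.read(), program, "exec")
    exec(code, glob)
    os._exit(0)  # never fall back into the parent's code path

# demo: write a tiny script, launch it, wait for it to finish
out = tempfile.mktemp()
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(textwrap.dedent("""
        with open(%r, "w") as g:
            g.write("done")
    """ % out))
    script = f.name

pid = launch(script)
os.waitpid(pid, 0)
with open(out) as g:
    print(g.read())  # → done
```

In the real daemon the script would arrive over the Unix socket instead of being written to a temporary file, and the expensive imports would already be loaded before the fork, which is where the time saving comes from.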
I tested the speed by just sending a file with the following snippet to the server:

```
import time
with open("2", "a") as f:
    f.write(str(time.time()) + "\n")
```

Note: If your script only needs the included Python libraries (batteries) and no custom-installed libs, you can also reduce the startup time by avoiding site initialization:

```
python -S [script]
```

Without -S, python -c '' takes 0.018s for me. With -S I am down to 0.004s for time python -S -c ''. Note that you might miss some installed packages that way. This is slower than the daemon method by up to a factor of 4 (4ms instead of 0.9ms), but still faster than the default way. Note that cold disk buffers can make the difference much bigger on the first run, which is not relevant in this case but very much relevant in general for the impression of startup speed.

PS: I attached the python-launcher 0.1.0 in case its website goes down. License: GPL and MIT; included.

This message was originally written at stackoverflow.

# Screencast: Tabbing of everything in KDE

I just discovered tabbing of everything in KDE: (download)

Created with recordmydesktop, cut with kdenlive, encoded to ogg theora with ffmpeg2theora (encoding command). Music: Beat into Submission on Public Domain by Tryad.

To embed the video on your own site you can simply use:

```
<video src="http://draketo.de/files/screencast-tabbing-everywhere-kde.ogv" controls=controls>
</video>
```

If you do so, please provide a backlink here. License: cc by-sa, because that’s the license of the song. If you omit the audio, you can also use one of my usual free licenses (or all of them, including the GPL).

Here’s the raw recording (= video source).

¹: Feel free to upload the video to youtube or similar. I license my stuff under free licenses to make it easy for everyone to use, change and spread it.

²: Others have shown this before, but I don’t mind. I just love the feature, so I want to show it :)

³: The command wheel I use for calling programs is pyRad.
# Shackle-Feats: The poisoned Apple

This is a mail I sent as a listener comment to Free as in Freedom.

Hi Bradley, hi Karen,

I am currently listening to your Steve Jobs show (yes, late, but time is scarce these days). And I side with Karen (though I use KDE): Steve Jobs managed to make a user interface which feels very natural. And that is no problem in itself. Apple solved a problem: user interfaces are hard to use for people who don’t have computer experience and who don’t have time to learn to use computers right. But they then used that solution to lure people into traps they set up to get our money and our freedom.

As an analogy: A friend of mine told me that Photoshop gives her freedom, because she can do things with it which she can’t do with anything else. And she’s right on that: she gets a kind of freedom. But she has to give up other freedoms for it, for example the freedom to do freelancing work without paying 3000€ up front.

To make the problem with that kind of freedom visible, let’s use one more analogy: When I get a flying car with which I can visit the Himalaya without having to get a driver’s license, then I just got the freedom to actually visit the Himalaya. But sadly that car comes with a rule that I am not allowed to take friends with me, and it does not allow me to drive into cities ruled by left-wing politicians. It costs so much that I can’t afford another car1, so now if I want to visit the Himalaya, I can’t take friends with me even when I just want to drive to the next shop, and I can’t visit left-wing friends.

That car would give me a kind of freedom, but it would take away other freedoms I had before I used it. If all people used it, the effects would be horrible, and not just for left-wingers and car owners: you would not be able to get a ride from a neighbor when you needed to get to the doctor fast. Now imagine what would happen if people found ways to make money with that flying car.
They would create a society where you have to give up freedom if you want to get one of the good jobs.

So creating a new kind of freedom and coupling it with heavy shackles does not give you more freedom. It creates a situation where people have a harder time living their life when they want to keep their basic freedom, because those shackle-feats become mandatory. I need to remember the name shackle-feats :)

Apple kind of invented the shackle-feat “use shiny computers without understanding them”. They managed to make shackles almost mandatory for parts of society by creating a pressure on people that they have to be able to do the feat, so they have to accept the shackles. Now we have to recreate that feat without the shackles so people are able to keep up without losing their freedom. We have to do additional work, because society is being shaped by those who made the shackles.

Best wishes,
Arne Babenhauserheide

PS: Steve Jobs managed to create really nice interfaces. Sadly he used his abilities to shackle people. He once was a hero to me. Even today there is stuff he did that I admire. But he decided to use his abilities for shackling people.

1. Or it is so different from other cars that using it for some time makes it necessary for me to relearn other stuff, so using any other car requires a high relearning effort. And for most people, time is as scarce as money.

# The ease of losing the spirit of your project by giving in to short-term convenience

Yesterday I said to my father: »Why does your whole cooperative have to meet for some minor legalese update which does not have an actual effect? Could you not just put into your statutes that the elected leaders can take decisions which don’t affect the spirit of the statutes?«

He answered me: »That’s how dictatorships are started.« With an Ermächtigungsbescheid (an enabling decree).

I gulped a few times while I realized how easy it is to fall into the pitfalls of convenience - and lose the project in the process.
An answer to tanto in Sone (Freenet - official site)

# Simple positive trust scheme with thresholds

I don’t see a reason for negative reputation schemes — voting down is in my view a flawed concept. It just allows for community censorship, which I see as incompatible with the goals of Freenet. (The rest of this article is written for Freetalk inside Freenet, and also posted there with my non-anonymous ID.)

Would it be possible to change that to use only positive votes and a threshold?

• If I like what some people write, I give them positive votes.
• If I get too much spam, I increase the threshold for all people.
• Effective positive votes get added. It suffices that some people I trust also trust someone else and I’ll see the messages.
• Effective trust is my trust (0..1) · the trust of the next in the chain (0..1) · …

Usecase:

• Zwister trusts Alice and Bob.
• Alice trusts Lilith.
• Bob hates Lilith.

In the current scheme (as I understand it), zwister wouldn’t see posts from Lilith. In a pure positive scheme, zwister would see the posts. If zwister wants to avoid seeing the posts from Lilith, he has to untrust Alice or ask Alice to untrust Lilith. Add to that a personal (and not propagating) blocking option which allows me to “never see anything from Lilith again”. Bob should not be able to interfere with me seeing the messages from Lilith when Alice trusts Lilith. If zwister’s trust for Alice (0..1) multiplied with Alice’s trust for Lilith (0..1) is lower than zwister’s threshold, zwister doesn’t see the messages.

PS: Somehow adapted from Credence, which would have brought community spam control to Gnutella if LimeWire had adopted it.

PPS: An adaptation for news voting: You give positive votes on news which show up. Negative votes assign a private threshold to the author of the news, so you then only see news from that author which enough people vote for.
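To make the scheme concrete, here is a minimal sketch of propagated positive trust with a threshold. It is my own illustration, not Freetalk code, and it takes the strongest single chain, while the post suggests adding up effective votes from several chains, which would be a small change:

```python
def effective_trust(trust, source, target, seen=None):
    """Strongest product of positive trust values (each 0..1) along
    any chain from source to target. trust maps who -> {whom: value}."""
    if seen is None:
        seen = {source}
    best = 0.0
    for friend, value in trust.get(source, {}).items():
        if friend == target:
            best = max(best, value)
        elif friend not in seen:
            # follow the chain, multiplying trust along the way
            best = max(best, value * effective_trust(trust, friend, target,
                                                     seen | {friend}))
    return best

def visible(trust, reader, author, threshold):
    """A message is shown when propagated trust reaches the reader's threshold."""
    return effective_trust(trust, reader, author) >= threshold

# the usecase from the text: zwister trusts Alice and Bob, Alice trusts Lilith;
# Bob's dislike simply does not count in a pure positive scheme
trust = {"zwister": {"alice": 0.8, "bob": 0.9},
         "alice": {"lilith": 0.5}}
print(effective_trust(trust, "zwister", "lilith"))  # 0.8 * 0.5 = 0.4
print(visible(trust, "zwister", "lilith", 0.3))     # True
print(visible(trust, "zwister", "lilith", 0.5))     # False: raising the threshold blocks
```

Note how raising the threshold is the only defense against spam here, and how the personal blocking option from the text would simply be a per-author override on top of this.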
# Simple steps to attach the GNU General Public License (GPL) to your project

Here are the simple steps to attach a GPL license to your source files (written after requests by DiggClone and Bandnet):

For your own project, just add the following text notice to the header/first section of each of your source files, commented out in whatever way your language uses:

```
/*
 * Your Project Name - -your slogan-
 * Copyright (C) 2007 - 2007 Your Name
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 */
```

The "2007 - 2007" needs to be adjusted to "year when you gave it the license in the first place" - "current year". Then put the file gpl.txt into the source folder or a docs folder: http://www.gnu.org/licenses/gpl.txt

If you are developing together with other people, you need their permission to put the project under the GPL.
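Adding the notice to every file by hand gets tedious in larger projects. Here is a hypothetical little helper for that; `add_notice` and the shortened `NOTICE` text are my own illustration, so adjust the notice text and the file glob to your project:

```python
from pathlib import Path
import tempfile

# shortened example notice; use the full text from the article
NOTICE = """\
# Your Project Name - your slogan
# Copyright (C) 2007 - 2007 Your Name
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
"""

def add_notice(path):
    """Prepend NOTICE to the file unless it already mentions the GPL."""
    text = path.read_text()
    if "GNU General Public License" in text:
        return False
    path.write_text(NOTICE + text)
    return True

# demo on a throwaway file
example = Path(tempfile.mkdtemp()) / "example.py"
example.write_text("print('hello')\n")
print(add_notice(example))  # True: notice added
print(add_notice(example))  # False: it already carries the notice
# to license a whole project: for p in Path("src").rglob("*.py"): add_notice(p)
```

The check for "GNU General Public License" keeps the helper from stacking a second notice on an already licensed file when you run it twice.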
------

Just for additional info, I found this license comparison paper by Sun: http://mediacast.sun.com/share/webmink/SunLicensingWhitePaper042006.pdf

And comments to it: http://blogs.sun.com/webmink/entry/open_source_licensing_paper#comments

It does look nice, but it misses one point: GPL is trust: contributors can trust that their contributions will keep helping the community, and that the software they contribute to will keep being accessible for the community. (That's why I decided some years ago to only support GPL projects. My contributions to one semi-closed project got lost, because the project wasn't free and the developer just decided not to offer them anymore, and I could only watch hundreds of hours of work disappear, and that hurt.)

Best wishes,
Arne

PS: If anything's missing, please write a comment!

# Some Python Programs of mine

Heavily outdated page. See bitbucket.org/ArneBab for many more projects…

Hi, I created some projects with pyglet and some tools to facilitate 2D game development (for me), and I thought you might be interested.

• babglet: basic usage of pyglet for 2D games with optional collision detection and avoidance.
• blob_swarm: a swarm of blobs with emergent swarm behaviour through only pair relations.
• blob_battle: a duel-style battle between two blobs (basic graphics, control and movement done).
• fuzzy_collisions: two groups of blobs. One can be controlled. When two blobs collide, they move away a (random) bit to avoid the collision.

They are available from the rpg-1d6 project on SourceForge:
-> https://sf.net/projects/rpg-1d6/

The download can be found at the sf.net download page:
-> https://sourceforge.net/project/showfiles.php?group_id=199744

# Steve Jobs, Get Your Head out Of the Sand! - Broken Apple Heart

Dear Steve,

Do you understand that imposing Digital Restrictions Management (DRM) is unethical?
That attempting to control our computers and electronic devices to monitor what we do with digital files is wrong and a danger to society? The problem for DRM proponents is that DRM doesn't work as advertised - and you are helping perpetuate a lie. We know you know this; you've said as much about music and DRM yourself. So why do you persist in touting DRM for video?

What DRM does do is trample my rights and create a situation where, if I were to circumvent a DRM scheme to be in control of my computer, it would be a criminal act - thanks to legislation like the Digital Millennium Copyright Act (DMCA). So what does DRM do? It monitors what I do. Often, it reports on my activities to a central authority. It locks me to one vendor of software. It limits what I do with the stuff I own. Yet Apple takes advantage of DRM to gain exactly this kind of control over its customers, doesn't it?

We don't want DRM! We do want our music and video in formats free from proprietary restrictions. And we want the devices we buy to be under our control. Do you still have Apple's head stuck in the sand? I'm writing to suggest you take it out. - http://defectivebydesign.com

Personal comment: I've been a Mac user my whole life. I left you with a broken heart when you used the TPM chip to lock down _developer_ Macs. Now I'm a GNU/Linux user (KDE), and even though I sometimes think back to Macs - to Shufflepuck (my first addiction), to professional video editing (with my old 66Mhz Mac), to the 6 months when I tried every beta of MacOSX even though my 266Mhz G3 was far too slow to render it at speed, and to the ease of music production on my Flat-Panel iMac - I won't come back to have my freedom taken.

You're creating great computers. Why do you still have to make them a tool for digital slavery, even though you now acknowledged yourself that this slavery is bad? The one who acts badly but doesn't know it is a fool. The one who acts badly and knows it is a criminal, regardless of the laws.
Disappointed wishes with but a glimpse of hope,
Arne Babenhauserheide - Broken Apple Heart (german)

# Tail Call Optimization (TCO), dependency, broken debug builds in C and C++ — and gcc 4.8

TCO: Reducing the algorithmic complexity of recursion.
Debug build: add overhead to a program to trace errors.

UPDATE: GCC 4.8 gives us -Og -foptimize-sibling-calls, and I had a few quite embarrassing errors in my C - thanks to AKF for the catch!

## 1 Intro

Tail Call Optimization (TCO) makes this

```
def foo(n):
    print(n)
    return foo(n+1)
foo(1)
```

behave like this

```
def foo(n):
    print(n)
    return n+1

n = 1
while True:
    n = foo(n)
```

I recently told a colleague how neat tail call optimization in Scheme is (along with macros, but that is a topic for another day…). Then I decided to actually test it (being mainly not a Schemer but a Pythonista - though very impressed by the possibilities of Scheme). So I implemented a very simple recursive function which I could watch to check the tail call behaviour. I tested Scheme (via guile), Python (obviously) and C++ (which proved to provide a surprise).

## 2 The tests

### 2.1 Scheme

```
(define (foo n)
  (display n)
  (newline)
  (foo (1+ n)))
(foo 1)
```

### 2.2 Python

```
def foo(n):
    print n
    return foo(n+1)
foo(1)
```

### 2.3 C++

The C++ code needed a bit more work (thanks to AKF for making it less ugly/horrible!):

```
#include <stdio.h>

int recurse(int n)
{
  printf("%i\n", n);
  return recurse(n+1);
}

int main()
{
  return recurse(1);
}
```

Additionally to the code I added 4 different ways to build it: standard optimization (-O2), debug (-g), optimized debug (-g -O2), and only slightly optimized (-O1).

```
all : C2 Cg Cg2 C1

# optimized
C2 : tailcallc.c
	g++ -O2 tailcallc.c -o C2

# debug build
Cg : tailcallc.c
	g++ -g tailcallc.c -o Cg

# optimized debug build
Cg2 : tailcallc.c
	g++ -g -O2 tailcallc.c -o Cg2

# only slightly optimized
C1 : tailcallc.c
	g++ -O1 tailcallc.c -o C1
```

## 3 The results

So now, let’s actually check the results.
Since I’m interested in tail call optimization, I check the memory consumption of each run. If we have proper tail call optimization, the required memory will stay the same over time; if not, the function stack will get bigger and bigger till the program crashes.

### 3.1 Scheme

Scheme gives the obvious result. It starts counting numbers and keeps doing so. After 10 seconds it’s at 1.6 million, consuming 1.7 MiB of memory - and never changing the memory consumption.

### 3.2 Python

Python is no surprise either: it counts to 999 and then dies with the following traceback:

```
Traceback (most recent call last):
  File "tailcallpython.py", line 6, in <module>
    foo(1)
  File "tailcallpython.py", line 4, in foo
    return foo(n+1)
  … repeat about 997 times …
RuntimeError: maximum recursion depth exceeded
```

Python has an arbitrary limit on recursion which keeps people from using tail calls in algorithms.

### 3.3 C/C++

C/C++ is a bit trickier. First let’s see the results for the optimized run:

#### 3.3.1 Optimized

```
g++ -O2 C.c -o C2
./C2
```

Interestingly that runs just like the Scheme one: After 10s it’s at 800,000 and consumes just 144KiB of memory. And that memory consumption stays stable.

#### 3.3.2 Debug

So, cool! C/C++ has tail call optimization. Let’s write lots of recursive, tail-call-using code! Or so I thought. Then I did the debug run.

```
g++ -g C.c -o Cg
./Cg
```

It starts counting just like the optimized version. Then, after about 5 seconds and counting to about 260,000, it dies with a segmentation fault.
And here’s a capture of its memory consumption while it was still running (thanks to KDE’s process monitor):

```
Private
 7228 KB   [stack]
   56 KB   [heap]
   40 KB   /usr/lib64/gcc/x86_64-pc-linux-gnu/4.7.2/libstdc++.so.6.0.17
   24 KB   /lib64/libc-2.15.so
   12 KB   /home/arne/.emacs.d/private/journal/Cg

Shared
  352 KB   /usr/lib64/gcc/x86_64-pc-linux-gnu/4.7.2/libstdc++.so.6.0.17
  252 KB   /lib64/libc-2.15.so
  108 KB   /lib64/ld-2.15.so
   60 KB   /lib64/libm-2.15.so
   16 KB   /usr/lib64/gcc/x86_64-pc-linux-gnu/4.7.2/libgcc_s.so.1
```

That’s 7 MiB after less than 5 seconds of runtime - all of it in the stack, since that has to remember all the recursive function calls when there is no tail call optimization. So we now have a program which runs just fine when optimized but dies almost instantly when run in debug mode. But at least we have nice gdb traces for the start:

```
recurse (n=43) at C.c:5
5	  printf("%i\n", n);
43
6	  return recurse(n+1);
```

### 3.4 Optimized debug build

So, is all lost? Luckily not: We can actually specify optimization along with debugging information.

```
g++ -g -O2 C.c -o Cg2
./Cg2
```

When doing so, the optimized debug build chugs along just like the optimized build without debugging information. At least that’s true for GCC. But our debug trace now looks like this:

```
5	  printf("%i\n", n);
printf (__fmt=0x40069c "%i\n") at /usr/include/bits/stdio2.h:105
105	  return __printf_chk (__USE_FORTIFY_LEVEL - 1, __fmt, __va_arg_pack ());
5
6	  return recurse(n+1);
```

That’s not so nice, but at least we can debug with tail call optimization. We can also improve on this (thanks to AKF for that hint!): We just need to enable tail call optimization separately:

```
g++ -g -O1 -foptimize-sibling-calls C.c -o Cgtco
./Cgtco
```

But this still gives ugly backtraces (if I leave out -O1, it does not do TCO). So let’s turn to GCC 4.8 and use -Og:

```
g++ -g -Og -foptimize-sibling-calls C.c -o Cgtco
./Cgtco
```

And we have nice backtraces!
```
recurse (n=n@entry=1) at C.c:4
4	{
5	  printf("%i\n", n);
1
6	  return recurse(n+1);
5	  printf("%i\n", n);
2
6	  return recurse(n+1);
```

### 3.5 Only slightly optimized

Can we invert the question? Is all well now? Actually not… If we activate only minor optimization, we get the same unoptimized behaviour again.

```
g++ -O1 C.c -o C1
./C1
```

It counts to about 260,000 and then dies from a stack overflow. And that is pretty bad™, because it means that a programmer cannot trust his code to work when he does not know all the optimization strategies which will be used with his code. And he has no way to declare in his code that it requires TCO to work.

## 4 Summary

Tail Call Optimization (TCO) turns an operation with a memory requirement of O(N)1 into one with a memory requirement of O(1). It is a nice tool to reduce the complexity of code, but it is only safe in languages which explicitly require tail call optimization - like Scheme.

And from this we can draw a conclusion for compilers: C/C++ compilers should always apply those optimizations which change the algorithmic cost of programs, including in debug builds, because otherwise C/C++ programmers should never use that feature, since relying on it can make it impossible to use certain optimization settings in any code which includes their code.

And as a finishing note, I’d like to quote (very loosely) what my colleague told me from some of his real-life debugging experience: “We run our project on an AIX IBM supercomputer. We had spotted a problem in optimized runs, so we activated the debugger to trace the bug. But when we activated debug flags, a host of new problems appeared which were not present in optimized runs. We tried to isolate the problems, but they only appeared if we ran the full project.
When we told the IBM coders about that, they asked us to provide a simple testcase… The problems likely happened due to some crazy optimizations - in our code or in the compiler.”

So the problem of undebuggable code due to a dependency of the program on optimization settings is not limited to tail call optimization. But TCO is a really nice way to show it :)

Let’s use that to make the statement above more general: C/C++ compilers should always perform those optimizations which change the algorithmic cost of programs. Or, from the pessimistic side: You should only rely on language features which are also available in debug mode - and you should never develop your program with optimization turned on.

And by that measure, C/C++ does not have Tail Call Optimization - at least until all mainstream compilers include TCO in their default options. Which is a pretty bleak result after the excitement I felt when I realized that optimizations can actually give C/C++ code the behavior of Tail Call Optimization.

Note, though, that GCC 4.8 added the -Og option, which improves debugging a lot (Phoronix wrote about plans for that last September). It still does not include -foptimize-sibling-calls in -Og, but that might be only a matter of time… I hope it is.

## Footnotes:

1: O(1) and O(N) describe the algorithmic cost of an algorithm. If it is O(N), then the cost rises linearly with the size of the problem (N is the size, for example printing 20,000 consecutive numbers). If it is O(1), the cost is stable regardless of the size of the problem.

# The danger of promoting dead closed clients

I had a strange feeling about people advertising the dead and closed-source Gnutella client BearShare, but I only found one of the reasons for that gut feeling today.

Assumptions I use: We want Gnutella to continue to evolve and grow better. To have Gnutella evolve, the developers of actively developed clients need feedback (and be it only encouragement).
If people now use a dead client which won't evolve anymore, they don't provide essential feedback to actively developed clients, and it might even happen that some developers waste time on trying to hack the dead client to make something work (again), instead of contributing to an active open client.

So every user who uses a dead closed client instead of an active open (and free) client hinders the evolution of Gnutella. That's not the fault of the user, and it's not per se damaging to the current state of the network (as long as the user shares, he contributes to the available files), but in the long term it hinders Gnutella from becoming better. And with that in mind, promoting a closed dead client directly damages Gnutella.

I know I'm human and as such prone to errors, so if you see anything I overlooked, please tell me about it.

# The dynamics of free culture and the danger of noncommercial clauses

NC-covered works trick people into investing in a dead end.

Free licensing lowers the barrier of entry to creating cultural works, which unlocks a dynamic where people can realize their ideas much more easily - and where culture can actually live, creating memes, adjusting them to new situations and using new approaches with old topics. But for that to really take off, people have to be able to make a living from their creations - which build on other works. Then we have people who make a living by reshaping culture again and again - instead of the current culture where only a few (rich or funded by rich ones) can afford to reuse old works and all others have to start from scratch again and again.

Sharealike licensing gives those who allow others to reuse their works an edge over those who do not: They can access many resources early in their career which allow them to produce high-quality stuff without needing to pay huge amounts up front. And they hone their skills in working with free stuff.
So when they become good enough that they can work in art for a living, they are deeply invested in free culture, so they have very good reasons for also licensing their new works under free licenses.

As a real-life example of the dynamic of free licensing: I’ve been working on a free tabletop roleplaying system in my free time for the last 10 years. For 3 or 4 years now it has been licensed under the GPL, so we could use images from Battle for Wesnoth in our books. And 2 years ago, I worked together with another roleplayer to create minimal roleplaying supplements on just one flyer - where only half the images were from Battle for Wesnoth, because a great artist decided to contribute (all hail Trudy!). All this would have been possible with NC licensing, too.

But about 2 months ago a roleplayer from a forum I discuss at unveiled his plans to create a German free RPG day, and I realized that our minimal RPG would be a great fit for that - but that I could not afford to print it in high enough numbers and good enough quality to reach many people. So I worked on the design and text to polish them, and when I was happy I started a 4-day fundraiser to finance printing the RPGs. Within just those 4 days I got over 200€ in donations, which allowed me to print 2000 RPGs in great quality along with supplements and additional character cards which made every single RPG instantly playable - instead of 1500 RPGs with only one card, so people would have needed 3 RPGs to actually play. And this would have been plain illegal with NC material.

It is not yet “making a living with free art”, but it is a first step out of purely hobby creation into a stronger dynamic. One which allows us to bring 2000 physical RPGs to people without going broke - and more importantly: one which started small and can grow organically.

An RPG might not be the best example here, because tabletop RPGs are notoriously bad at generating money. But it is the example I experienced myself.
As an example which might be closer to you: Imagine that you created a movie with free music and other material from freely licensed works. Imagine that half of the visuals you use could have already been created - maybe for some other movie. By using free stuff, you could save half the effort of creating the movie.

But if that other stuff had been NC, you would not be allowed to start a fundraiser for getting it to Blu-ray quality - at least not without replacing all NC parts, which would have added a high cost to being able to increase your outreach. Likely it would have been a blocking cost. It would have been easier to just create a new project than to polish the one you have to reach more people. And polish is what allowed me to move the RPG from being a hardly readable PDF to a work I can look at with pride.

To wrap it up: Free culture - just like free software - allows people to take little steps into creating culture and to move organically from being a hobby artist towards making a living from their work - and spreading their work to many more people. And NC-covered works trick people into investing in a dead end, because they can never move beyond being a hobbyist without huge investments which bring no other benefit than recreating what they could use directly when they did not try to make a living. It’s like learning to use Photoshop and then realizing that you aren’t allowed to earn a little extra by improving wedding images without shelling out 3000€ for a Creative Suite license. And that means that you can’t move in small steps from a boring day job to a professional creative life.
(written in reply to a question from Keith, one of the makers of Software Wars, a movie about free software which is trying to fund a high-quality Blu-ray release at the moment)

# The effect of the optional restrictions of the GPLv3

I just thought a bit about the restrictions the GPLv3 allows, and I think I just understood their purpose and effect for the first time (correct me if I'm wrong :) ).

## What are the restrictions?

The GPLv3 allows developers (= copyright holders) to add selected restrictions, like forbidding the use of a certain brand name or similar. The catch with them is that any subsequent developer who adds anything is free to simply strip off the restrictions.

## What is their effect?

Now I wondered for a long time what that really gains us. Today I then realized that subsequent developers are only free to strip off the restrictions as long as that doesn't violate any license of some part of the program.

That means the GPLv3 restrictions simply have the effect of adding compatibility with other licenses, while keeping the option to strip off any restriction when you replace the part under the other license with a more liberally licensed part.

So this doesn't place any additional burden on packagers, because they already have to check those other licenses for their restrictions. Now the GPLv3 description of the whole package clearly states which additional restrictions are imposed by the parts which are under different but compatible licenses. While those parts were under separate licenses before (and had to be checked), they can now be improved with GPLv3 code with additional restrictions. And as soon as the GPLv3 code can stand on its own feet, the more restrictively licensed part can be replaced with GPLv3 code, and the restrictions can be removed again, making the work of the packagers easier.
Better still, the GPLv3 clearly shows the sum of all restrictions of the individual (differently but compatibly licensed) parts, so packagers only need to check the GPLv3 license information to see all restrictions in a standardized format (GPLv3 additional restrictions).

## Example

Let's assume I find this great piece of software which says "do what you want, but don't touch my brand", and I want to build my GPLv3 program on it. Let's call that piece of software "foo". So I just begin coding and use the GPLv3 for my parts (a simple copyright message in my code files). For the whole package I add a license information file ("license.txt" or "COPYING" or similar) which gives the information:

```
This program is licensed under the GPLv3 with the additional
restriction that the brand 'foo' may not be used for derived products.
The additional restriction is imposed by the package foo.
(plus license mumbo jumbo you can find at and copy from
http://gnu.org/licenses/gpl.html)
```

Now someone else takes my program and improves it. But he also uses the package "blah", which also says that its brand must not be violated. Now the combined license would be:

```
This program is licensed under the GPLv3 with the additional
restriction that the brand 'foo' and the brand 'blah' may not be used
for derived products. The additional restriction for brand 'foo' is
imposed by the package foo. The additional restriction for brand
'blah' is imposed by the package blah. (plus license mumbo jumbo you
can find at and copy from http://gnu.org/licenses/gpl.html)
```

So now a group of free software activists takes offense at the restrictions. They don't want anyone to be restricted by copyright from using a brand. One reason could be that the brand protection was voided by some trademark action. Now they can't just say "that brand isn't protected anymore", since the protection was reinforced by copyright law.
But they can simply replace the parts under the more restrictive licenses with honest GPLv3 licensed parts - either by writing them or by finding a drop-in replacement. Let's suppose they write the packages "bar" and "baz" which implement the functionality of "foo" and "blah". They now no longer use any parts under licenses which require additional restrictions, so they are free to remove those restrictions. As they release their package, the license information might read as follows:

"This program is licensed under the GPLv3. (plus license mumbo jumbo you can find at and copy from http://gnu.org/licenses/gpl.html)"

If they are nice (and we assume they are), their changelog will also contain a line saying something like "replaced 'foo' and 'blah', which allowed us to remove the additional license restrictions to avoid using the brands 'foo' or 'blah'."

## Final remark

As you see (if I understand it correctly), the additional restrictions can be a great tool for freeing software from restrictions, because they allow you to combine GPLv3 code with somewhat restrictively licensed code and get rid of the restrictions later by replacing the more restrictively licensed parts. And since the only allowed additional restrictions are those which don't harm the four freedoms of free software, you can still make sure that you use ethically sound software by simply checking whether it is GPL licensed.

So kudos to the designers of the GPLv3. They did such a great job that it took me two years to realize one of the many powerful tools they gave us with the GPLv3 - and I took part in the public discussion of the GPLv3 since draft 1 (but I never watched a GPLv3 speech...).
Also, since the GPLv3 allows combination with AGPLv3 software (which adds the restriction that the source code must also be supplied when the software is used over a network), it gives us a clear path into a future where people might use more and more software "as a service", where it doesn't get executed on their local machine and the normal GPL alone isn't enough to protect our freedom.

# The generation of cultural freedom

I am part of a generation who experienced true cultural freedom - and who experienced that freedom being destroyed. We had access to the largest public library which ever existed and saw it burned down out of lust for control. Not even for greed or gain, because enough studies showed that we did no damage and that we actually paid more for cultural goods than those who did not enjoy that freedom. They fought for control over us. And the loss of cultural freedom is only the precursor to the loss of personal freedom, as those many new censorship laws show.

I decided to become active to stop that, but there was always one thing I wondered about: Who are the persons who want internet censorship, PIPA, ACTA, etc.? Not the companies, nor the governments. The top proponents in the background. Those who make sure that basically the same laws are proposed over and over again by different strawmen and in different places.

(my question to the RightsCon Rio - send yours)

# The “Apple helps free software” myth

→ Comment to “apple supports a number of opensource projects. Webkit and CUPS come to mind”.

Apple supports a number of copyleft projects because they have to. They chose to profit from the work other people released as copyleft, and so they are obliged to release their improvements.
## Webkit

Webkit is an especially good example of this: Apple took the khtml code from KDE, worked with it for half a year and only released binaries (which is a breach of the license of khtml), until they finally released their code in one big code-drop which the khtml folks had no chance of integrating cleanly. That way Apple broke away from the community and created their own fork in a way which made sure that the KDE folks could not profit from Apple's work without throwing out their own structure. They still had to adhere to the license, though, which enabled others to use Webkit - and essentially created a revolution in web browser development, because Apple added all the polish needed for a modern browser. If you look at the way they treated the khtml developers, though, do you really think they would have released any code on that critical part of their OS if they had not been forced to do so by the strong copyleft used by KDE?

## Cups

CUPS, the other example of Apple-maintained free software, … is GPL licensed, too. No surprise there: Why else should Apple give their work to others, if not because the license forces them to? And even there they try to get out by adding a GPL exception to the parts they write, which allows using those parts without giving out source code. But “This exception is only available for Apple OS-Developed Software and does not apply to software that is distributed for use on other operating systems”. How much do you think they will still maintain once they have managed to get that header into all files - and no longer fear a free fork? (also note that shortly after Apple started maintaining CUPS, it broke on my GNU/Linux system - „Ein Schelm, wer Böses dabei denkt“ - “a rogue, who thinks ill of it”, as we say in Germany)

## Darwin

Just look at what they did with Darwin. They took all the code from FreeBSD.
Then they kept the uninteresting part free as long as needed to earn a good name and get people to work in their spare time on porting it to Intel architectures - work which greatly benefitted Apple, because it let them move away from PPC and no longer depend on IBM. The interesting part, however, the graphical interface, was completely locked up from the beginning. See why OpenDarwin stopped:

“Availability of sources, interaction with Apple representatives, difficulty building and tracking sources, and a lack of interest from the community“ — OpenDarwin Shutting Down

4 of 5 reasons for stopping the free alternative come directly from Apple…

## Epilogue

Should I complain about that? Actually no. After all, they are allowed to do it by the license. They just do what they can to maximize their monetary gain. And actually I prefer seeing a big company use copyleft programs to improve its products, because that means that others will be able to achieve at least that part with free software.

If I should complain about anybody, then about all the people who praise Apple for doing what they are forced to do to get the work of others for free - and about shortsighted developers who use non-copyleft licenses, which allow folks like Apple to save lots of money while locking out others and creating “the computer as a jail made cool”, as Richard M. Stallman put it quite nicely — I call that shackle-feats.

# turn files with wikipedia syntax to html (simple python script using mediawiki api)

I needed to convert a huge batch of mediawiki files to html (I had a 2010-03 copy of the now dead limewire wiki lying around). With a tip from RoanKattouw in #mediawiki@freenode.net I created a simple python script to convert arbitrary files from mediawiki syntax to html.

Usage:

• Download the script and install the dependencies (yaml and python 3).
• ./parse_wikipedia_files_to_html.py <files>

This script is not written for speed (do you know how slow a web request is, compared to even horribly inefficient code? …): the only optimization is for programming convenience — the advantage of that is that it’s just 47 lines of code :) It also isn’t perfect: it breaks at some pages (and informs you about that). It requires yaml and Python 3.x.

#!/usr/bin/env python3
"""Simply turn all input files to html.

No errorchecking, so keep backups. It uses the mediawiki webapi, so you need to be online.

Copyright: 2010 © Arne Babenhauserheide
License: You can use this under the GPLv3 or later, if you add the appropriate license files → http://gnu.org/licenses/gpl.html
"""
from urllib.request import urlopen
from urllib.parse import quote
from urllib.error import HTTPError, URLError
from time import sleep
from random import random
from yaml import load
from sys import argv

mediawiki_files = argv[1:]

def wikitext_to_html(text):
    """parse text in mediawiki markup to html."""
    url = "http://en.wikipedia.org/w/api.php?action=parse&format=yaml&text=" + quote(text, safe="") + " "
    f = urlopen(url)
    y = f.read()
    f.close()
    text = load(y)["parse"]["text"]["*"]
    return text

for mf in mediawiki_files:
    with open(mf) as f:
        text = f.read()
    HTML_HEADER = "<html><head><title>" + mf + "</title></head><body>"
    HTML_FOOTER = "</body></html>"
    try:
        text = wikitext_to_html(text)
        with open(mf, "w") as f:
            f.write(HTML_HEADER)
            f.write(text)
            f.write(HTML_FOOTER)
    except HTTPError:
        print("Error converting file", mf)
    except URLError:
        print("Server doesn’t like us :(", mf)
        sleep(10*random())  # add a random wait, so the api server doesn’t kick us
    sleep(3*random())

# Weltenwald-theme under AGPL (Drupal)

After the last round of polishing, I decided to publish my theme under the AGPLv3. Reason: If you use AGPL code and people access it over a network, you have to offer them the code.
Which I hereby do ;) That’s the only way to make sure that website code stays free.

It’s still for Drupal 5, because I didn’t get around to porting it, and it has some ugly hacks, but it should be fully functional. Just untar it in any Drupal 5 install.

tar xjf weltenwald-theme-2010-08-05_r1.tar.bz2

Maybe I’ll get around to properly packaging it in the future… Until then, feel free to do so yourself :) And should I change the theme without posting a new layout here, just drop me a line and I’ll upload a new version — as required by the AGPL. And should you have some problem, or if something should be missing, please drop me a line, too.

No screenshot, because a live version beats a screenshot any day ;) (in case it isn’t clear: Weltenwald is the theme I use on this site)

# Why free speech does not equal the right of being heard

→ written in a discussion with Sascha1 in Freenet using Sone.

If free speech included being allowed to force all people to listen, then it would also include my right to force you to listen to everything I say. Think of this on the scale of 6 billion people all using freenet. Every one of them could force you to listen to him/her/it. Whom would you ignore?

In WoT, getting some people2 to see your message is possible, but it has a price: solving captchas. The same is true for real life demonstrations: If you want to be seen, you have to get up and actually invest something - be it time, effort or risk to your reputation.

In real life we have channels through which we sell our attention. They are called advertisements and advertisement-financed services, and access to our attention is tightly controlled by a few gatekeepers who make lots of money by keeping a hold on our attention. In freenet, all you have to do is solve some captchas - a rule which is the same for everyone.

1. this is only true for those people who decide to publish captchas.
If they disable that feature, you can only get their attention by first getting trusted by people whom they trust.

# Why Gnutella scales quite well

You might have read in some (almost ancient) papers that a network like Gnutella can't scale. So I want to show you why the current version of Gnutella does scale, and does it well.

In earlier versions, up to v0.4, Gnutella was a pure broadcast network. That means that every search request reached every participant, so in an optimal network the number of search requests hitting each node was exactly equal to the number of requests made by all nodes in the network. And you can easily see why that can't scale.

But that was only true for Gnutella 0.4. In the current incarnation (Gnutella 0.6), Gnutella is no longer a pure broadcast network. Instead, only the smallest percentage of the traffic is done via broadcast. If you want to read about the methods used to realize this, please have a look at the GnuFU guide (english, german). Here I want to limit it to the statement that the first two hops of a search request are governed by Dynamic Querying, which stops the request as soon as it has enough sources (this stops a search as soon as it gets about 250 results), and that the last two hops are governed by the Query Routing Protocol, which ensures that a search request reaches only those hosts which can actually have the file (which is only about 5% of the nodes).

So in today's reality, Gnutella is a quite structured and very flexible network. To scale it, Ultrapeers can increase their number of connections from the current 32 upwards, which makes Dynamic Querying (DQ) and the Query Routing Protocol (QRP) even more effective. In the case of DQ, most queries for popular files will still provide enough results after the same number of clients have been contacted, so increasing the number of connections won't change the network traffic caused by the first two steps at all.
In the case of QRP, queries will still only reach the hosts which can have the file, and if Ultrapeers are connected to more nodes at the same time (by increasing the number of connections), each connection will provide more results, so DQ will stop even earlier than with fewer connections per Ultrapeer.

So Gnutella is now far from a broadcast model, and increasing the size of the Gnutella network can even increase its efficiency for popular files. For rare files, QRP kicks in with full force, and even though DQ will likely check all other nodes for content, QRP makes sure that only those nodes are reached which can have the content - which might be only 0.1% of the net or even far less. Here, increasing the number of nodes per Ultrapeer means that nodes with rare files are in effect closer to you than before, so Gnutella also gets more efficient when you increase the network size, when rare file searches are your major concern.

So you can see that Gnutella has become a network which scales extremely well for keyword searches, and due to that it can also very efficiently be used to search for metadata and similar concepts. The only thing which Gnutella can't do well are searches for strings which aren't separate words (for example file-hashes), because those kill QRP, so they will likely not reach (m)any hosts. For these types of searches, the Gnutella developers are working on a DHT (Distributed Hash Table), which will only be used if the string can't be split into separate words, and that DHT will most likely be Kademlia, which is also proven to work quite well.

And with that, the only problem which remains in need of fixing is spam, because that inhibits DQ when you do a rare search. But I am sure that the devs will also find a way to stop spamming, and even with spam, Gnutella is quite effective and consumes very little bandwidth when you are acting as a leaf, and only moderate bandwidth when you are acting as an Ultrapeer.
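To make the difference concrete, here is a rough back-of-the-envelope sketch in Python. The ~5% QRP fraction and the ~250-result DQ cutoff come from the description above; the per-node hit rate is a purely hypothetical illustration value, not a measured one:

```python
def messages_broadcast(nodes):
    # Gnutella 0.4: every search request hits every participant.
    return nodes

def messages_qrp(nodes, match_fraction=0.05):
    # Last hops with QRP: only hosts whose routing tables say they
    # may have the file are contacted (about 5% of the nodes).
    return round(nodes * match_fraction)

def messages_dq(nodes, results_wanted=250, hit_rate=0.01):
    # Dynamic Querying: stop contacting clients once about 250
    # results have arrived. hit_rate (results per contacted node)
    # is a hypothetical illustration value.
    return min(nodes, round(results_wanted / hit_rate))
```

With 1,000,000 nodes, a broadcast search would cost 1,000,000 messages, QRP routing about 50,000, and DQ for a popular file would stop after about 25,000 contacts - a number which stays constant however large the network grows.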
Some figures as a finishing touch:

• Leaf network traffic: About 1 kB/s if you add outgoing and incoming traffic, which is about a seventh of the speed of a 56k modem.
• Ultrapeer traffic: About 7 kB/s, outgoing and incoming added together, which is about one full ISDN line, or less than 1/8th of a DSL line's outgoing speed.

Have fun with Gnutella! - ArneBab 08:14, 15. Nov 2006 (CET)

PS: This guide ignores that requests must travel through intermediate nodes. But since those nodes make up only about 3% of the network and only 3% of those nodes will be reached by a (QRP-routed) rare file request, it seems safe to ignore these 0.1% of the network in the calculations for the sake of making them easier to follow mentally (QRP takes care of that).

# wisp: Whitespace to Lisp: An indentation to brackets preprocessor to get more readable Lisp

## 1 Intro

I love the syntax of Python, but crave the simplicity and power of Lisp.

display "Hello World!" ↦ (display "Hello World!")

define : hello-world ↦ (define (hello-world)
  display "Hello World!" ↦ (display "Hello World!"))

• Wisp turns indentation into lisp expressions.
• Get it from its Mercurial repository: hg clone http://bitbucket.org/ArneBab/wisp
• See more Examples.

## 2 What is wisp?

Wisp is a simple preprocessor which turns indentation sensitive syntax into Lisp syntax. The basic goal is to create the simplest possible indentation based syntax which is able to express all possibilities of Lisp. Basically it works by inferring the brackets of lisp by reading the indentation of lines.

It is related to SRFI-49 and the readable Lisp S-expressions Project (and actually inspired by the latter), but it tries to Keep it Simple and Stupid. Instead of a full alternate reader like readable, it is a simple preprocessor which can be called by any lisp implementation to add support for indentation sensitive syntax.
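The core trick - inferring brackets from indentation - can be sketched in a few lines of Python. This is a toy illustration only, not the real wisp.py: it covers just the basic nesting rule and ignores the dot, colon and underscore rules described below:

```python
def wisp_to_sexp(source):
    """Toy indentation-to-brackets preprocessor.

    Each line becomes a function call; a more indented line nests
    inside the previous one; less or equal indentation closes the
    open brackets. The real wisp.py additionally handles dots,
    colons, underscores and much more.
    """
    result = ""
    stack = []  # indentation levels of the currently open brackets
    for line in source.splitlines():
        if not line.strip():
            continue
        indent = len(line) - len(line.lstrip())
        # close every bracket opened at this indentation or deeper
        while stack and indent <= stack[-1]:
            result += ")"
            stack.pop()
        if result:
            result += " "
        result += "(" + line.strip()
        stack.append(indent)
    return result + ")" * len(stack)
```

Feeding it the example from above gives the expected output:

```python
src = 'display\n  string-append "Hello " "World!"\ndisplay "Hello Again!"'
wisp_to_sexp(src)
# → (display (string-append "Hello " "World!")) (display "Hello Again!")
```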
Just call ./wisp.py --help to see what you can do with it (./wisp.py - takes its input from stdin, so it can be used with pipes):

./wisp.py --help
Usage: [-o outfile] [file | -]

Options:
  -h, --help show this help message and exit
  -o OUTPUT, --output=OUTPUT

Currently wisp is implemented in Python, because that’s the language which I know best and which inspired my wish to use indentation-sensitive syntax in Lisp. To repeat the initial quote: I love the syntax of Python, but crave the simplicity and power of Lisp. With wisp I hope to make it possible to create lisp code which is easily readable for non-programmers (and me!) and at the same time keeps the simplicity and power of Lisp.

Its main technical improvements over SRFI-49 and Project Readable are using lines prefixed by a dot (". ") to mark the continuations of the parameters of a function after intermediate function calls, and working as a simple preprocessor which can be used with any flavor of Lisp.

## 3 Wisp syntax rules

1. A line without indentation is a function call, just as if it would start with a bracket.

display "Hello World!" ↦ (display "Hello World!")

2. A line which is more indented than the previous line is a sibling to that line: It opens a new bracket.

display ↦ (display
  string-append "Hello " "World!" ↦ (string-append "Hello " "World!"))

3. A line which is not more indented than previous line(s) closes the brackets of all previous lines which have higher or equal indentation. You should only reduce the indentation to indentation levels which were already used by parent lines, else the behaviour is undefined.

display ↦ (display
  string-append "Hello " "World!" ↦ (string-append "Hello " "World!"))
display "Hello Again!" ↦ (display "Hello Again!")

4. To add any of ' ` or , to a bracket, just prefix the line with any combination of "' ", "` " or ", " (symbol followed by one space).

' "Hello World!" ↦ '("Hello World!")

5. A line whose first non-whitespace characters are a dot followed by a space (". ") does not open a new bracket: it is treated as simple continuation of the first less indented previous line. In the first line this means that this line does not start with a bracket and does not end with a bracket, just as if you had directly written it in lisp without the leading ". ".

string-append "Hello" ↦ (string-append "Hello"
  string-append " " "World" ↦ (string-append " " "World")
  . "!" ↦ "!")

6. A line which contains only whitespace and a colon (":") defines an indentation level at the indentation of the colon. It opens a bracket which gets closed by the next less-indented line. If you need to use a colon by itself, you can escape it as "\:".

let ↦ (let
  : ↦ (
    msg "Hello World!" ↦ (msg "Hello World!"))
  display msg ↦ (display msg))

7. A colon surrounded by whitespace (" : ") starts a bracket which gets closed at the end of the line.

define : hello who ↦ (define (hello who)
  display ↦ (display
    string-append "Hello " who "!" ↦ (string-append "Hello " who "!")))

8. You can replace any number of consecutive initial spaces by underscores, as long as at least one whitespace is left between the underscores and any following character. You can escape initial underscores by prefixing the first one with \ ("\___ a" → "(___ a)"), if you have to use them as function names.

define : hello who ↦ (define (hello who)
_ display ↦ (display
___ string-append "Hello " who "!" ↦ (string-append "Hello " who "!")))

To make that easier to understand, let’s just look at the examples in more detail:

### 3.1 A simple top-level function call

display "Hello World!" ↦ (display "Hello World!")

This one is easy: Just add a bracket before and after the content.

### 3.2 Multiple function calls

display "Hello World!" ↦ (display "Hello World!")
display "Hello Again!" ↦ (display "Hello Again!")

Multiple lines with the same indentation are separate function calls (except if one of them starts with ". ", see Continue arguments, shown in a few lines).

### 3.3 Nested function calls

display ↦ (display
  string-append "Hello " "World!" ↦ (string-append "Hello " "World!"))

If a line is more indented than a previous line, it is a sibling to the previous function: The brackets of the previous function get closed after the (last) sibling line.

### 3.4 Continue function arguments

By using a dot followed by a space as the first non-whitespace character on a line, you can mark it as continuation of the previous less-indented line. Then it is not a function call but continues the list of parameters of the function. I use a very synthetic example here to avoid introducing additional unrelated concepts.

string-append "Hello" ↦ (string-append "Hello"
  string-append " " "World" ↦ (string-append " " "World")
  . "!" ↦ "!")

As you can see, the final "!" is not treated as a function call but as a parameter to the first string-append.

This syntax extends the notion of the dot as identity function. In many lisp implementations1 we already have (= a (. a)).

= a ↦ (= a
  . a ↦ (. a))

With wisp, we extend that equality to (= '(a b c) '((. a b c))).

. a b c ↦ a b c

### 3.5 Double brackets (let-notation)

If you use let, you often need double brackets. Since using pure indentation in empty lines would be really error-prone, we need a way to mark a line as indentation level. To add multiple brackets, we use a colon to mark an intermediate line as additional indentation level.

let ↦ (let
  : ↦ (
    msg "Hello World!" ↦ (msg "Hello World!"))
  display msg ↦ (display msg))

### 3.6 One-line function calls inline

Since we already use the colon as syntax element, we can make it possible to use it everywhere to open a bracket - even within a line containing other code. Since wide unicode characters would make it hard to find the indentation of that colon, such an inline function call always ends at the end of the line.
Practically that means the opened bracket of an inline colon always gets closed at the end of the line.

define : hello who ↦ (define (hello who)
  display : string-append "Hello " who "!" ↦ (display (string-append "Hello " who "!")))

This also allows using inline-let:

let ↦ (let
  : msg "Hello World!" ↦ ((msg "Hello World!"))
  display msg ↦ (display msg))

### 3.7 Visible indentation

To make the indentation visible in non-whitespace-preserving environments like badly written html, you can replace any number of consecutive initial spaces by underscores, as long as at least one whitespace is left between the underscores and any following character. You can escape initial underscores by prefixing the first one with \ ("\___ a" → "(___ a)"), if you have to use them as function names.

define : hello who ↦ (define (hello who)
_ display ↦ (display
___ string-append "Hello " who "!" ↦ (string-append "Hello " who "!")))

## 4 Syntax justification

I do not like adding any unnecessary syntax element to lisp. So I want to show explicitly why the syntax elements are required.

### 4.1 . (the dot)

The dot at the beginning of a line as marker of the continuation of a variable list is a generalization of using the dot as identity function - which is an implementation detail in many lisps. (. a) is just a. So for the single-variable case this would not even need additional parsing: wisp could just parse ". a" to "(. a)" and produce the correct result in most lisps. But forcing programmers to always use separate lines for each parameter would be very inconvenient, so the definition of the dot at the beginning of the line is extended to mean “take every element in this line as parameter to the parent function”.

Essentially this dot-rule means that we mark variables in the code instead of function calls, since in Lisp variables at the beginning of a line are much rarer than in other programming languages.
In lisp, assigning a value to a variable is a function call, while it is a syntax element in many other languages, so what would be a variable at the beginning of a line in other languages is a function call in lisp. (Optimize for the common case, not for the rare case.)

### 4.2 : (the colon)

For double brackets and for some other cases we must have a way to mark indentation levels without any code. I chose the colon, because it is the most common non-alphanumeric character in normal prose which is not already reserved as syntax by lisp when it is surrounded by whitespace, and because it already gets used for marking keyword arguments to functions in Emacs Lisp, so it does not add completely alien characters.

The inline function call via " : " is a limited generalization of using the colon to mark an indentation level: If we add a syntax element, we should use it as widely as possible to justify the added syntax overhead. But if you need to use : as a variable or function name, you can still do so by escaping it with a backslash, so this does not forbid using the character.

### 4.3 _ (the underscore)

In Python the whitespace-hostile html already presents problems with sharing code - for example in email list archives and forums. But in Python the indentation can mostly be inferred by looking at the previous line: If that ends with a colon, the next line must be more indented (there is nothing to clearly mark reduced indentation, though). In wisp we do not have this help, so we need a way to survive in that hostile environment.

The underscore is commonly used to denote a space in URLs, where spaces are inconvenient, but it is rarely used in lisp (where the dash ("-") is mostly used instead), so it seems like a natural choice. You can still use underscores anywhere but at the beginning of the line, and even at the beginning of the line you simply need to escape them by prefixing the first underscore with a backslash (example: "\___").
## 5 Background

A few months ago I found the readable Lisp project, which aims at producing indentation-based lisp, and I was thrilled. I had already done a small experiment with an indentation-to-lisp parser, but I was more than willing to throw out my crappy code for the well-integrated parser they had.

Fast forward half a year. It’s February 2013 and I started reading the readable list again after being out of touch for a few months, because the birth of my daughter left little time for side projects. And I was shocked to see that the readable folks had piled lots of additional syntax elements on their beautiful core model, which for me destroyed the simplicity and beauty of lisp. When language programmers add syntax using \, $ and <>, you can be sure that it is no simple lisp anymore. To me readability does not just mean beautiful code, but rather easy-to-understand code with simple concepts which are used consistently. I prefer having some ugly corner cases to adding more syntax which makes the whole language more complex.

I told them about that and proposed a simpler structure which achieved almost the same as their complex structure. To my horror they proposed adding my proposal to readable, making it even more bloated (in my opinion). We discussed for a long time - the current syntax for inline-colons is a direct result of that discussion on the readable list - then Alan wrote me a nice mail, explaining that readable will keep its direction. He finished with «We hope you continue to work with or on indentation-based syntaxes for Lisp, whether sweet-expressions, your current proposal, or some other future notation you can develop.»

It took me about a month to answer him, but the thought never left my mind (@Alan: See what you did? You anchored the thought of indentation based lisp even deeper in my mind. As if I did not already have too many side-projects… :)).

Then I had finished the first version of a simple whitespace-to-lisp preprocessor.

And today I added support for reading indentation based lisp from standard input which allows actually using it as in-process preprocessor without needing temporary files, so I think it is time for a real release outside my Mercurial repository.

So: Have fun with wisp v0.2 (tarball)!

PS: If you want to run wisp code pseudo-directly, you can use the following script:

#!/bin/sh
~/path/to/wisp.py -o /tmp/wisptmp.scm "$@" && guile -l ~/.guile -s /tmp/wisptmp.scm


PPS: Wisp is linked in the comparisons of SRFI-110.

# Write programs you can still hack when you feel dumb

I just read the post Hyperfocus and balance of Arc Riley from PySoy who talks about trying to get to the Hyperfocus state without endangering his health. Since I have similar needs1, I am developing some strategies for that myself (though not for my health, but because my wife and son can’t be expected to let me work 8h without any interruptions in my free time).

I try to change my programming habits instead of changing myself to fit to the requirements of my habits, though.

The guideline I learned from writing PnP roleplaying games is to keep the number of things you need to know below 7 at any point (well, the actual limit for average humans is 4 objects!). For a function in your code I would translate that as follows:

1. You need to keep in mind the function you work in, and
2. what it should do (purpose and effect), and
3. the resources it uses (arguments or global values/class attributes).

Only 4 things left for the code of your function. (Three if you use class attributes/global values and function arguments; two if you have complex custom data structures with peculiar names or access methods which you have to understand to do anything; one if you also have to remember the commands of an unfamiliar editor or VCS tool. See how fast this approaches zero, even starting with 7 things?)

Add an if-switch, for-loop or similar and you have only 3 things left.

You need those for what the function should actually do, so better put further complexities into subfunctions.

But if you want to be able to hack that code while you feel dumb (compared to those streaks of genius when you can actually hold the whole structure of your program in your head and foresee every effect of a given change before actually doing it), you need to make sure that you don’t have to take all 7 things into account. Tune it down for the times when you feel dumb by starting with 5 things2, and you get:

2 things for the code of your function. Some logic plus calls to other functions - that's those 2 things.

If it is an if-switch, let it be just an if-switch calling other functions. Yes, it may feel much easier to do it directly here, when you are fully embedded in your code and feel great, but it will bite you when you are down. Which is exactly when you won’t want to be bitten by your own code.
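As a minimal sketch (all names here are hypothetical, not from any real project), such a dumb-day-friendly if-switch might look like this - the switch only routes, and each branch lives in its own small function:

```python
def handle_click(event):
    # one small job: react to a click
    return "clicked " + event["target"]

def handle_key(event):
    # one small job: react to a key press
    return "pressed " + event["key"]

def handle_unknown(event):
    # one small job: note what we ignored
    return "ignored " + event["kind"]

def handle_event(event):
    # The switch only routes; the actual work lives in the
    # subfunctions, so each piece stays small enough to keep
    # in mind even on a bad day.
    if event["kind"] == "click":
        return handle_click(event)
    if event["kind"] == "key":
        return handle_key(event)
    return handle_unknown(event)
```

When you later need to change how clicks are handled, you only have to understand handle_click, not the whole dispatch logic around it.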

To find a practical way of achieving this, Django’s concept of loose coupling and tight cohesion (more detailed) helped me most, because it reduces the interdependencies.

The effects of any given change should be contained in the part of the code you work in - and in one type of code.

As a web framework, Django separates the templates, the URI definitions, the program code and the database access from each other. (see how these are already 4 categories, hitting the limit of our mind again?)

For a game on the other hand, you might want to separate story, game logic, presentation (what you see on the screen) and input/user actions. Also, people who write a scenario or level should only have to work in one type of code, neatly confined in a file or a small set of files which reside in the same place.

And for a scientific program, data input, task definition, processing and data output might be separated.

Remember that this separation does not just mean that you put them into different files, but that these parts are only loosely coupled:

They only use lean and clearly defined interfaces and don’t need to know much about each other.
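A tiny sketch of such loose coupling for a scientific script (all names here are made up; the lean interface between the parts is just a plain list of numbers):

```python
def read_data(lines):
    """Data input: parse numbers, knows nothing about processing."""
    return [float(line) for line in lines if line.strip()]

def process(values):
    """Processing: compute a mean, knows nothing about input format."""
    return sum(values) / len(values)

def write_result(result):
    """Data output: format the result, knows nothing about the math."""
    return "mean: %.2f" % result

def main(lines):
    # The parts only touch through lean interfaces (list in, float out),
    # so each one can be understood and changed on its own.
    return write_result(process(read_data(lines)))
```

If the input format changes, only `read_data` changes; if you need a median instead of a mean, only `process` changes.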

This not only makes your program easier to adapt (because the parts you need to change for implementing a given feature are smaller). If you apply it not only to the bigger structure but to every part of the program, its main advantage is that any part of the code can be understood without having to understand other parts.

And you can still understand and hack your code when your child is sick, your wife is overworked, you slept only 3 hours the night before - and can only work for half an hour straight, because it’s evening and you don’t want to be a creep (but this change has to be finished nonetheless).

Note that finding a design which allows that is far more complex than it sounds. If people can read your code and say “oh, that’s easy. I can hack that” (and manage to do so), then you did it right.

Designing a simple structure to solve a complex task is far harder than designing a complex structure to solve that task.

1. Where I got bitten badly by my high-performance coding habits is the keyboard layout evolution program. I did not catch my error when the structure grew too complex (while adding stuff), and now that I do not have as much uninterrupted time as before, I cannot work on it efficiently anymore. I’m glad that this happened with a mostly finished project on whose evolution no one's future depended. Still, it is sad that this will keep me from turning it into a real-time visual layout optimizer. I can still work on its existing functionality (I kept improving it for the most important task: the cost calculation), but adding new functionality is a huge pain.

2. See how I actually don’t get below 5 here? A good TODO list which shows you the task so you can forget it while coding might get you down to 4. But don’t bet on it. Not knowing where you are or where you want to go is a recipe for disaster… And if you make your functions too small, the collection of functions gets more complex, or the object hierarchy too deep, adding complexity at other places. Well, no one said creating well-structured programs was easy. You need to find the right compromise for you.

# Your browser history can be sniffed with just 64 lines of Python (tested with Firefox 3.5.3)

After the example of making-the-web, I was quite intrigued by the ease of sniffing the history via simple CSS tricks.

- Firefox Bug report - still open!
- Start Panic! - a site dedicated to spreading the news about the vulnerability.
- What the internet knows about you - easily sniff yourself.
- Cute kitten - look at cute kittens. Does this look suspicious? :)

So I decided to test how small a Python program can be which sniffs the history via CSS - without requiring any scripting ability on the browser side.

I first produced fully commented code (see server.py) and then stripped it down to just 64 lines (server-stripped.py), to make it crystal clear that leaving your browser vulnerable to this exploit is a damn bad idea. I hope this will help get Firefox fixed quickly.

If you see http://blubber.blau as found, you're safe. If you don't see any links as found, you're likely to be safe. In any other case, everyone in the web can grab your history - if given enough time (a few minutes) or enough iframes (which check your history in parallel). This doesn't use Javascript.
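The core trick can be sketched in a few lines (this is an illustrative reconstruction of the technique, not the actual server.py; `sniff_page` and `report_base` are made-up names): the server sends a page whose `:visited` CSS rules request a per-link image, so every image request the browser makes reveals one visited URL.

```python
def sniff_page(urls, report_base="/visited?id="):
    """Return HTML whose :visited CSS rules leak history to the server.

    For each URL we emit a rule that makes the browser fetch a
    tracking image ONLY when the link counts as :visited - no
    Javascript involved.
    """
    css_rules = []
    links = []
    for i, url in enumerate(urls):
        # the background-image is only requested if the link is :visited
        css_rules.append(
            "#l%d:visited { background-image: url(%s%d); }"
            % (i, report_base, i))
        links.append('<a id="l%d" href="%s">.</a>' % (i, url))
    return ("<html><head><style>%s</style></head><body>%s</body></html>"
            % ("\n".join(css_rules), "\n".join(links)))
```

A server then only needs to log which `/visited?id=N` paths get requested to know which of the listed URLs are in your history.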

It currently only checks for the 1000 or so most visited websites and doesn't keep any logs in files (all info is in memory and wiped on every restart), since I don't really want to create a full-fledged history ripper but rather show how easy it would be to create one.

Besides: It does not need to be run in an iframe. Any Python-powered site could just run this test as regular part of the site while you browse it (and wonder why your browser has so much to do for a simple site, but since we’re already used to high load due to Javascript, who is going to care?). So don’t feel safe, just because there are no iframes. To feel and be safe, use one of the solutions from What the Internet knows about you.

Konqueror seems to be immune: It also (pre-)loads the "visited"-images for unvisited links, so every page is seen as visited - which is the only way to avoid spreading my history around on the web while still providing “visited” image-hints in the browser!

Firefox 4.0.1 seems to be immune, too: It does not show any :visited-images, so the server does not get any requests.

So please don't let your browser load anything depending on the :visited state of a link tag! It shouldn't load anything based on internal information, because that always publicizes private information - and you don't know who will read it!

In short: Don't keep repeating Ennesby's mistake:

• Mistake:

• Effects:

(comic strips not hosted here and not free licensed → copyright: Howard V. Tayler)

And to the Firefox developers: Please remove the optimization of only loading required CSS data based on the visited info! I already said so in a bug report, and since the bug isn't fixed, this is my way to put a bit of weight behind it. Please stop putting your users' privacy at risk.

Usage:

• python server.py
starts the server on port 8000. You can now point your browser to http://127.0.0.1:8000 to get sniffed :)

# Songs

Below you find some of my songs.

To see only songs which have a recording I deem "listenable", please check the

< < Songs in the Wind of Time > >

- they also feature a PodCast.

Happy listening!

Besides: If you speak German (or just happen to like it), you might enjoy some of my German songs.

# (All this is) Gentoo for me

- Words and Music: Arne Babenhauserheide ( http://draketo.de )

Listen to the song: ogg
This recording is part of the music podcast singing in the winds of time.

Refrain:
I build my kernel and I strip it down,
my programs only do what I need
the tree is at my very core
it's my whole world and it is my seed.

I came to Gentoo several years ago,
its power was my joy and woe,
replaced OSX with a mighty shell,
and learned its ways and learned them well.

(well mostly, and learning at times is a hell)

--

I rebuilt only 2 times since that day,
for at first I didn't know my way,
the second one was a lovely bird,
but a new Computer brought the third.

(someday I want a Gentoo GNU/Hurd)

--

I learned each day and my knowledge grew,
from the wiki and forums it leaped and flew,
information in structure gave power in mind,
and the strongest is what the tutorials bind.

(but read them well, or trouble's what you find)

--

A new life came when I met the snake,
I'd been asleep, now I'm awake,
for portage might be quite complex,
but reading Python's sometimes close to sex.

(go deeper and deeper and the world seems to shift)

--

Somewhere between some seedlings appeared,
with stuff for special people geared,
sometimes dangerous, but mostly good,
and the tree had grown a little wood.

(but remember where the main trunk stood)

--

And now the tree has KDE 4,
since that appeared I like it evermore.
All that nifty stuff I missed from my Mac,
usability and beauty and the vision are back.

(and don't forget power, more than any I knew before)

--

Together all this is Gentoo for me,
but there sure is more I don't get or see,
and some parts for which I feel quite strong,
just didn't fit into this song.

(Gentoo's much too large to fit into any... )

PS: I just uploaded this into my Jamendo Account.

PS: I just found another (older) Gentoo song.

# A song from the icy lands

A song about sharing and free software and changing the world. Originally written to recreate the vision of the Polar Skulk in art.

Criticism and praise would be a great gift to the pup writing this song.

## A song from the icy lands

Freedom for Music, for Movies and for every word,
Fighting is not quite absurd,
and we are peaceful, good and kind,

--

Ref1:
Our world is ice,
but we're together,
calling to the moon,
the cousins of the wolf.

Our tales of freedom
light a fire
of love and family,
the song of foxes.

we teach the wolves,
and sing of beauty,
gather wisdom,
and sing the music of the world.

Our world is ice,
but we're together,
calling to the moon,
the cousins of the wolf.

Our tales of freedom
light a fire
of love and family,
the song of foxes.

we teach the wolves,
and sing of beauty,
gather wisdom,
and free the music of the world.

--

Our skulk is happy with that which is free,
we spread the free things which we see,
like Gnus, who made the firelight,
we spread the freedom in the night.

--

Ref2:
-Ref 1 but:

... learn the wisdom of the world

... free the wisdom of the world.

--

Each night we meet artists who give us their songs,
and more learn each day, where the music belongs,
and wherever we travel, a seed takes its hold,
and singing and dancing shine brighter than gold.

and wherever we travel, a seed takes its hold,
and singing and dancing grow stronger than gold.

--

Ref3:
-Ref 1 but:
... dance the rhythm of the world

... change the rhythm of the world.

If you liked the song from the icy lands, you might also like a tale of foxes and freedom and Infinite Hands.

# Dragon Cycle

The War of Dragons and Birth of the Dragonriders. Sung and played at FilkCONtinental 2004.

No music yet - but someday I'll get that recording...

# Dragon Cycle 1: Dragons Lament

Ah_ah_ah...

What have those people done?
The Dragon lies there, in Her own blood.

Ah_ah_ah...

What have those people done?
The Dragon lies there, in Her own blood.

Ah_ah_ah...

They came in great hordes,
The Dragon lies there, in Her own blood.

Ah_ah_ah...

They came in great hordes,
The Dragon fought, but not well enough.

Ah_ah_ah...

She killed many hordes,
The Dragon fought, but not well enough.

Ah_ah_ah...

She killed many hordes,
But at last, She lost to the flood.

Ah_ah_ah ... ah___

# Dragon Cycle 2: Step into their Land

I come to you for my child has cried,
I know, you're shivering now in dread,
but don't you fear for your hide.
No dragon will burn your cities down,
when you give what we demand.
The bodies of those, who took her life,
shall Die from human hand.
They shall die from Human Hand.
For dragon's Law and Custom, now,
I'll fold my wings till sundown,
To see what you decide.

# Dragon Cycle 3: Capture

Puny Human, what have you Done?!
You call powers, which aren't yours to control,
which will sweep all away, when used in war.

The bonds on this, my body, will not hold forever,
and when they perish, so will You!

Back off in fear, that I might use what you did,
which neither Dragon nor Human should ever touch.

Why don't you leave wizard?

Show me that, which you clutch in your robes,
black as they are to block my view.

No! You know not, what you do!
I call on all you learned through your study of magic,
don't soil your soul any more
by forcing what is immortal

Don't you dare!
shall be hunted by all dragons
for now and forever!

A child's voice: "Where am I? Where am I?"

Act: *drop down and look up like an innocent unknowing child.*

# Dragon Cycle 4: Flight and Slaughter

e              C      D
The dragons in all glory ceased to fight,
e          D          e
as wizards power scorched their wings.

As human armies marched along, in greatest size,
With wizards in their leading ranks.
The dragons left the battleground without a single strike
left inhabited lands.

They fled to lonely forests, dark and lush,
The humans burned them down.
They fled to plains and grasslands, never seen and never touched,
the humans brought their crown.

They then fled to dark marshes, where sunlight never shines,
a thousand workers pumped them dry
And then at last they all drew back,
to mountains near the sky.

|| Instrumental ||

e                 D     e
In darkness dances a little flame,
C                     D
from teeth it leaps, from breath it came

And sparkles bright on polished stone,
on scales of one, who sleeps alone,
And dreams uneasy dreams at night,
of hunger suffering thirst and blight,

And each time a being dies in vain,
the dragon's body shakes in pain,
For she, the oldest on this land,
can feel the pain, the fury, the hate
and despair of all who live.

|| fade out. then spoken ||

Into this darkness sounds a step,
of boots of metal, cold and rash,
the ringing of swords, when unsheathed,
and many boots, and always more.
scaled eyelids flutter and rusty red eyes
shimmer as the light of the moons gets caught therein.

"Why do you invade my home?
You gain little by slaying me, but
the world loses a close friend with me."

Nothing answered but the singing of steel,
when it is flung through the air,
and the blood of a dragon wet the ground this day,
and the rage of the dragons got unleashed,
when the swords returned to their sheaths.

# Dragon Cycle 5: Death and waking

Fire sweeping over the land,
destruction and death,
the dragons are free.
-
Hate and fury in the village,
wings bring storm
and burning hail.
-
The fire burns the woman,
burns the man,
the dragon nears the child.
-
Eyes of fury meet the fear,
nostrils taste
the anguish of the child.
-
Fire builds deep in the guts,
leaps from teeth,
-
A cry meets dragon's fury
"Leave my sister!"
The dragon stops.
-
From rags beneath the window board
a child rises
and stares the dragon down.
-
Fingers grow to dragon claws,
Teeth grow sharper, skin goes black,
fire burns the clothes. A dragon returns.
-
A voice of power, voice of War,
"We will not fight,
Not anymore!"

-
Bows then down to the child's big eyes,
Hot breath on her face,
quietly speaks:
"On my back you ride today,
we shall from now together stay,
and fly the winds as one."

# Dragon Cycle 6: Bard's Fair

Dragon and human they fly on the winds,
their bodies floating ever higher.
Their bond of purity and of loving,
and something deep within their souls.

||: And they always remember the voice of war,
"We will not fight, not anymore!" :||

# Drowsy Pagan (and his stew) - a Filk on Dawson's Christian

To the melody of Dawson's Christian from Duane Elms.

Jason Drowsy was a hunter known to cook a burning stew,
and he turned to be a pagan in the hunt of eighty-two.
Now that pagan was the finest cook of the royal twins
and the stew of Jason Drowsy smelled like sins.

In the hunt for the kings wedding, waiting for the royal son,
he then saw a regal steed who was equal to no one,
as the royal son came by him, and he rode out for a prize,
Drowsy knew just far too well whom he must slice.

No one talking saw the battle, though the guard was quick to leave,
when they reached the site they found a scene no sane man could believe.
Dead in grass there lay the princeguard, cut to ribbons all around,
but no sign of Jason Drowsy could be found.

Chorus 1:
There are stories of the nightwatch and the ents and dragonwood,
there are stories of the unicorn with a lady at his foot,
but the tale that warms my spirit more because I know it's true
is the tale of Jason Drowsy and his stew,
yes the tale of Drowsy pagan and his stew.

- break for music -

I was second scout for heras dream, the escort was all mine,
we were shipping precious metals and a carriage with wine,
It was in the second week of the most uneventful ride,
when the cold and snow froze all our breath at night.

Now to me there was no question, for there was nowhere to run,
and you just can't keep moving when you never see the sun,
so we stopped and built a campsite for a time in freezing snow,
when in underbrush a light began to glow.

First we thought it a predator, but the color was all wrong,
then we thought it might be rescue, but no sound of horn did come,
then the fire grew and started burning red.

Now a glow came from that fire that is known by very few,
and we never knew a meal could smell just like that special stew,
never fearing our numbers then a figure left the wood,
and he carried a huge bowl which smelled too good.

Chorus 2:
And that pagans stew burned hotter than all stew I ate before,
and its taste would melt too easily the heart of any whore,
as the meal then filled our stomachs and we searched for some more shreds,
all the fear of cold was wiped from our heads,
all the fear of cold was wiped from our heads.

Just as quickly as we started all the feasting then was done,
for the cold inside had vanished and the strangers stew had won,
though we tried to call and thank him, not an answer could we draw,
then he dropped the bowl and this is what we saw.

It had markings there all over and an emblem on one side,
and we knew that every owner but that pagan had long died,
for the markings spoke of royalty, and deep inside we knew,
we all ate from Drowsy pagans fabled stew.

But instead of staying with us, he then simply walked away,
but came back each night with more stew tasting as if made by fey,
when at last the cold did lift, deep inside us each one knew,
we were saved by Jason Drowsy's burning stew, yes, we were saved by Drowsy pagans burning stew.

- Chorus 1 -

Background: I really love the sound of Dawson's Christian, but I never liked the name of the ship - and I learned from my parents not to glorify violence, at least not all the time. Violence is the ultimate escalation of a conflict, so it is well suited to stories, but there are much more important things in life than being the best soldier - for example being the legendary cook who saves caravans from freezing to death and chose a life in the wilderness over the life for his king when he realized what's really important in our world.

# Filk the gist

A parody on March of Cambreadth by Heather Alexander aka Alexander James Adams, the Fairy Tale Minstrel, written on the filk-de list to say “damn, we are filkers! We don’t squabble about politics — we sing about them!” (and to make crystal clear what I mean, because it wasn’t on the mailing list: Politics in song are Filk, and this song is against lying politicians! I’m sorry, Le-matya. This was meant to support your position but I forgot to double-check if it is clear in the context.)

## Filk the gist

Keyboards klick, Cellphones ring,
Shining laptop’s hackers sing,
Newsfeeds burn with polished prose,
Show us where we find our foes,
Midnight flame with congressmen,
Fight the trolls to keep us sane,
Sound the horn and call the cry,
How many of us can spot their lie?

Fuck the orders you get told,
Make their shallow hearts get cold,
Fight until you die or drop,
A force like ours is hard to stop,
Close your mind to stress and pain,
Write till you’re no longer sane,
Let not one wrong word pass by,
How many of us can spot their lie?

Guard your disk and emails well,
Send these bastards back to hell,
We’ll teach them the cyberway,
They won’t write in our clay,
Fight till every line glows red,
Raise the flag up to the sky,
How many of us can spot their lie?

Dawn has broke, the time has come,
Publish to a marching drum,
We’ll win the war and pay the toll,
We’ll fight as one in heart and soul,
Midnight flame in filkers list,
Write the songs and catch the gist,
Sound the horn and call the cry,
How many of us can spot their lie?

Hackers blog while Filkers sing,
Yesterday we were too shy,
How many of us can spot their lie?

# Happy Birthday to GNU - 25 years

Today is the 25th birthday of the GNU project - the very beginning of the free software community we are today.

This is my small, humble contribution for the birthday celebration.

Happy Birthday to GNU,
Happy Birthday to GNU,
Happy Birthday not Unix,
Happy Birthday to GNU.

Naturally this recording is free licensed.

It is part of the music podcast singing in the winds of time.

# Infinite Hands - singing a part of the history of free software (filk)

- Free Software version of "Finity's End"; original: {lyrics: CJ Cherryh, music: Leslie Fish}.
- filked by Draketo aka Arne Babenhauserheide (draketo.de) (capo 3)

- please check the dedicated site: http://infinite-hands.draketo.de -

Songtext for printing and passing on: pdf | odt (source) | txt
Audio-files: ogg | mp3
This recording is part of the music podcast singing in the winds of time.

==== Infinite Hands ====

C        a             D           a
Infinite Hands build a world to be free,
E       G            a
the digital space we all know,
C      (a)         D            a
unlimited use has the code that we write,
C             G            a
and freedom's the badge we all show.

C                           D           a
The stuff runs our servers, our desktops and grids,
D                    a
by uncounted hands it was made,
a                     D         a
set out in the wild on the day it is born,
C             D           a
for our free running, long coding trade.

Ref:
C             a          D           a
And no law shall bind us or keep us for long,
E       G                    a
for infinity's ours and infinity's free,
C          a           D             a
and no country owns us, and no land's our own,
C         G        a
for Infinite Hands are we.

The companies thought that they'd pay us for lines,
and have all the code for their own.
"You're company people and company teams,
your code will now serve us alone."

R.Stallman was only a student that day,
and he said to himself, thinking deep:
Farewell to a job, all my code shall be free,
for what they don't own, they can't keep.

-Ref-

The miracle came, he did not change his mind
and gathered around him a crew,
and people could buy his free programs from him,
sent by mail and his money got through.

At times others came and they said, "We're free, too,
you can take code as if in a mall.
It will be only yours then, just say it's from us,
and it runs and compiles where you call."

-Ref: But... -

Now Richard M. Stallman was vexed and annoyed,
and he sent out the word as before:
"All code must be free, free to use and improve,

But still many coders were lured from our ranks,
Now for Windows and Apple they strived,
- spoken in background: And for Amiga, BeOS, IBM,
and many more -
their doom and their fall came from finland one day,
as to GNU a free kernel arrived.

-Ref-

"Come all to U.S.", came a call spreading wide,
"for there is no place else you can be."
- spoken in background: DMCA, DRM, TCPA,
software patents, idea patents and a war on terror -
But Richard M. Stallman still sent out the word,
that all code from now on must be free.

So code would stay free and our teams did grow strong,
but some loopholes remained in our side,
which traitors like TiVo exploited to steal,
so we needed a change in the right.

- Ref-

... no words ...

So our license reshaped by the people and GNU,
And orders be none to withhold us or bind,
C                       E          a

Ref:
C             a           D           a
Just that law shall bind us and keep us for long,
E       G                   a
for infinity's ours and infinity's free,
C          a            D             a
and no country owns us, and no land's our own,
C        G               a
for Infinite Hands/Lines are we.
C       E        C           (G) a
are we, for Infinite Hands/Lines are we.


Background:
This is a part of the story of free software, although it misses some details. While "Finity's End" was a work of fiction (the book is available on amazon.com, amazon.de and maybe at bookzilla.de), this story really happened and happens today.

Licensing:
This song is free art available under the following four licenses (for details, please visit draketo.de/licenses). Permission to filk her work freely was granted by Leslie Fish (cite: "Anything to keep the internet free: Go for it!" - she's great! - maybe you'd like to listen in on her music?) and CJ Cherryh.

- GNU FDL
- GPLv2 or later or GPLv3 or later
- Art Libre v1.3 or later
- Lizenz für freie Inhalte v1.0 webstar

You can use any of those four licenses, because I can't yet know which license will make it to the general license for free art. Please keep all four licenses when you make changes, so we avoid licensing chaos. It doesn't use Creative Commons licensing, because CC does not protect the free availability of the sources (just think LaTeX and PDF).
Sources: infinite-hands.draketo.de

It was written by Draketo aka Arne Babenhauserheide, finished on 2007-09-28, improved by Alan Thiesen 2007-10-08.
Its first public performance was at FilkCONtinental 2007 (a filk convention on the Freusburg in Germany).

Missing topics: DRM, SCO, Open Source – I'd be glad to get suggestions from you! ( just use the comment field )

Arne Babenhauserheide


# Infinite Hands draft with Bodhran and Flute

A rough draft of Infinite Hands with additional instruments.

The flute and bodhran tracks were improvised on the spot and recorded yesterday in one go, so they are a bit rough :)

Also the vocals are finally up to date with the text.

I hope you enjoy it!

If you want to dabble with the recording yourself, just grab the multitrack audacity-source.

And if you like the song, why don’t you flattr it?

# Pond-erosa Puff (OpenBSD)

I recently found the OpenBSD songs, and the artists say that they are part of OpenBSD, logically as well as license-wise. And OpenBSD is licensed under a three-clause BSD license which is GPL compatible - that means I can record and publish it here!

This is the OpenBSD 3.6 release song: Pond-erosa Puff, written about people who make something free and suddenly decide to go the unfree path.

Many thanks to all you OpenBSD guys!
Your license is a bit too weak for my taste, but damn, it's free - and your code is as good as your songs!

Audio-files: ogg | mp3

This recording is part of the music podcast singing in the winds of time.

My recording is far from perfect, but I hope you enjoy it anyway! Also it should give everyone a good headstart who always wanted to play the song on the guitar. Oh, and please do listen to the ogg vorbis file. It sounds far better! - Draketo

### Pond-erosa Puff

Well he rode from the ocean far upstream
Nuthin' to his name but a code and a dream
Lookin' for the legendary inland sea
Where the water was deep n' clean n' free

But the town he found had suffered a blow
Fish were dying, cause the water was low
Fat cat fish name o' Diamond Dawes
Plugged the stream with copyright laws

He said my water's good n' my water's free
So Pond-erosa, you gonna thank me!
Then he bottled it up and he labeled it "Mine"
They opened n' poured, but they ran outta time!

So Puff made a brand and he tanned his hide
Said. "this is the mark of too much pride"
Tied him to a horse, set the tail on fire
Slapped er on the ass and the water went higher!

Pond-erosa Puff
wouldn't take no guff
Water oughta be clean and free
So he fought the fight
and he set things right
With his OpenBSD

Well things were good fer a spell in town
But then one day, dang water turned brown
Comin' to the rescue, Mayor Reed
He said, "This here filter's all ya'll need"

But it didn't take long 'fore the filter plugged
Full of mud, n' crud, n' bugs
Folks said "gotta be a gooder way"
Mayor said "Hell No! She's O.K."

"The water's fine on the Open range"
And he passed a law that it couldn't change.
"No freeze, no boil, no frolicking young"
Puff took him aside, said "this is wrong"

Then he found the Mayor was addin' the crud!
So he took him down in a cloud of blood
Said "The Mayor's learnd, he's done been mean"
So they did it right and the water went clean!

CHORUS

So once agin' it was right, but then
The lake went dry, she was gone again!
Fish started flippin' and floppin' about
Yellin' "Mercy Puff! It's a doggone drought!"

So he rolled up-gulch till he hit the lake
Of Apache fish, they was on the take
They'd built a dam that was made of rules
Now Puff was pissed and he lost his cool!

I'm sick and tired of these goldarn words!
n' laws n' bureaucratic nerds!
You're full o' beans n' killin' my town
and if you's all don't shut er down

I'll hang a lickin' on every one
of you sons o' bitchin' greedy scum!
So he blew the dam, an' he let 'er haul
Cause water oughta be free for all!

CHORUS

# Realistically Me (the square root)

-Melodie partly from "Swing low, sweet chariot"-

He looked over squares, and what did he see?
coming just for driving him mad,
The rational numbers didn't fit for me,
coming just for driving him mad.

He looked over pentagrams, and what did he see?
coming just for driving him mad,
There was funny looking a cousin of me,
coming just for driving him mad.

He told his pupils, all the world is a number,
coming just for driving him mad,
And one of them said: "this one makes me wonder"
coming just for driving him mad,

He told him of me, and to his growing dread,
coming just for driving him mad,
He proved my being, and what did he get,

I'm the square root,
The funny square root,
Just me, the square root,
And everything in me is good!

(being irrational can be great! :-) )

# Seiken Densetsu 3 Bardstale

The introduction story of Angela from the SNES-Game Seiken Densetsu 3 (SD3) which you play when you start the game with her as main character, done in song-form. Infos about her and about that game: http://www.fantasyanime.com/mana/som2char_2.htm

This is the first song I ever wrote myself - text, melody and guitar - and I am still not quite satisfied with the way I can play it.

-> ogg vorbis music file.

It misses a violin. (I played it once together with a fiddler, and it was exactly what I imagined. But I had no recording accessories at hand at that time, and I'm sad we weren't able to play together more often... I hope you still like it the way it is now!)

Songtext and chords:

## Seiken Densetsu 3 Bardstale

Chords:

D A E G
D A C G

Ref: d a d a
d a C G

The power of the magic, the magic of the spell,
brought her out of danger, brought her out of hell.
Beauty in her eyes and beauty in her face,
magic in her heart, but no magic in the mind.

Her mother was against her, the queen of the castle,
crying out for power, for power to prevail.

Ref: Lonely girl, beautiful girl, arrogant girl with magic in her heart.

Her queen needed a life, taken away from a human,
tried to take another, another than her own,
Her kingdom was freezing, the mana was fading,
by fleeing her mother, she finally ran away.

Ref: Lonely girl, beautiful girl, arrogant girl with magic in her heart.

Carried by the magic, the magic in her heart,
safe from the grip of her mothers magic hands,
Alone in the cold, but living at least,
she awoke outside the castle and ask'd her where to go.

- New Chords: -
- d a e a -
- a C G a -

Slowly she walked south to be attacked by fierce fiends,
after the victory the cold took her in its hands.

- d C a -
- Strummed: D A E G -

She awoke in the bed of an all unknown house,
selfishly stepping out without a thank you.

# The truth is in there - Maxwell gives us the speed of light

- a Filk on "X as in Fox" by Cecilia Eng -

Once we believed in the speed of the light,
and experiments show that what we thought is right,
But we search our math for another sight,

'Cause we hope that the truth is in there.

When we measure the speed of something somehow,
we can only check against the distance, but now
we'll show that we get it from Maxwell', and wow!

We will know that the truth is in there!

First we take a sheet of charge at hand,
then we move it by an unseen command,
and nabla rot B shows the field where we stand,

And we know that the truth is in there.

Now that formula says, our field's everywhere,
when about the electrics we never care,
but that can't be true, for our world's still there,

And we know that the truth is in there.

Since nabla rot E is -dB/dt,
a field can never change at once all that we see,
It takes some time, which gives us the v,

And we guess that the truth is in there.

So now we take a small square far from the sheet,
when we check the change in B, it then shows quite neat,
It is width times the speed times B, which we need,

To see that the truth is in there.

For we also know that an electric field,
round a loop is (as Stokes will quickly yield),
Just the length of the loop times the strength of the field,

That's a clue that the truth is in there.

For now we take both and make the loop small,
so the length in a field is its width, and we call,
"E is v times B", which gives us all

To believe that the truth is in there.

We then do the same for nabla rot B,
But there's a c squared so we easily see,
E is also c squared times (B divided by v),

And we see that the truth is in there.

For with these we can easily tell,
that v must be c which we like so damn well,
For now we are sure we're right when we yell,

We know that the truth is in there!

So what do we know from this funny tale?
The strength of the fields leads us through the veil,
and gives us the speed of light without fail,

So we see that the truth is in there!

Now we only need to measure the attraction of charge,
and then the attraction of flowing charge,
and the root of their quotient might be quite large,

But it gives us the truth that's in there.
The invariant truth that's in there.

Yep, that's a way to get the speed of light from the Maxwell equations and the knowledge that our world still exists :)
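For readers who prefer formulas to rhymes, here is a sketch of the song's argument (vacuum Maxwell equations in SI units; $v$ is the speed of the moving field front, and $c^2 = 1/(\mu_0\varepsilon_0)$ is the constant that appears in the second equation):

```latex
\begin{align*}
\nabla \times \vec E &= -\frac{\partial \vec B}{\partial t}
& \nabla \times \vec B &= \mu_0 \varepsilon_0 \frac{\partial \vec E}{\partial t}
\intertext{Applying Stokes' theorem to a small loop which the field front crosses at speed $v$ gives}
E &= v\,B
& E &= \frac{1}{\mu_0 \varepsilon_0}\,\frac{B}{v} = c^2\,\frac{B}{v}
\intertext{and combining the two relations for $E$ yields}
v\,B &= c^2\,\frac{B}{v}
&&\Rightarrow\quad v = c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}}.
\end{align*}
```

The last verse describes exactly that final formula: measure the attraction of static charge (which gives $\varepsilon_0$) and of flowing charge (which gives $\mu_0$), and the root of their quotient is the speed of light.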

I hope you enjoyed reading the song as much as I enjoyed writing it!

# Volkoi

There’s music in their stamping,
in their shouting to above,

There’s rhythm in their life,
in fight and death and love,

(There's rhythm in their stance,
in strike and blow and bluff,)

Where nobler people seek the truth
and never find their hearts,

A Volkoi’s always on his toes,
when the music starts.

— Eschrandar, Nayan War Engine, Mechanical Dreams (sadly only the store is left of this great game…)

# With Python from the Shadows

- Written by Draketo aka Arne Babenhauserheide, originally to the melody of Moonlight Shadow on 2008-02-22, but switched on 2008-06-27 to be able to put a recording under free licenses -

Audio-files: ogg
This recording is part of the music podcast singing in the winds of time.

### With Python from the Shadows

First time ever I saw it,
carried away by its lightweight structure,
My heart grew fuzzy and sunlit,
carried away by its lightweight structure,

All I saw was the relevant part,
deep inside every program's core,
It flowed like my thoughts but it looked like art,
So clear that at once I saw through.

-

The bridge of doom I was then crossing,
carried away by its lightweight structure,
The guardsman into darkness tossing,
carried away by its lightweight structure,

( "What is the fastest way to store a list of unicode chars?"
"A mutable or an immutable list?"
"I don't know... Aaaargh!" )

It's a month now since I passed that whitening guard,
deep inside every program's core,
To know what I want is the hardest part,
since the code I can simply see through.

-

Ref: I chime, I rime, see you with Python all the time,
I chime, I rime, see you with Python, next time.

-

Four a.m. in the morning,
carried away by its lightweight structure,
you can see my fingers are still coding,
carried away by its lightweight structure,

All it takes is an idea in me,
For stuff inside any program's core,
And the code flows freely for my mindview to see,
So clear that at once I see through.

-

- Ref -

-

mmmmmmm
carried away by its lightweight structure,
mmmmmmm
carried away by its lightweight structure,

I write too late, even typing grows hard,
mmmmmmm
The night is heavy and my lids will not part,
but my mind can still simply see through.

# Broken Apple Heart - Why I'm a Mac user no more

Beware of that Fruit (Broken Apple Heart) ( http://bah.draketo.de/?p=13 )

(Why do you think Macs no longer smile?)

Chorus: I was an Apple User and loyal to the core,
But one grey day I realized what made my heartache soar,
They want to make the big bucks now and want no one to see,
That ever more surveillance takes the Users' rights as fee.

I was a little Bugger, when I saw the first of Mac,
Discovered there then Shufflepuck and all the time came back,
It belonged to parents' friends then, but my will it showed its grip,
And when they tried to take “My Mac”, their efforts meant a zip.

My third Mac, it was bigger, not so cute, but lovely, too,
And to my greatest pleasure, I owned the smiling goo,
I was a big fanatic, Apple was it all the time,
And when I got to talking, all my friends could do was whine.

Then came the time of MacOSX, it was the thing for me,
The beta was the slowest beast, but with it I felt free,
I worked and it grew faster and I never bid the time,
And every single Update pushed the speed another line.

But then they made the Panther and it hated my old Mac,
And though I bought a new one, my belief did not come back,
Then came OSX on Intel, my belief lost every race,
Apple takes “trusted computing”, hits me squarely in the face.

Now my Mac here owns a Linux and Apple makes no gain,
Since for my precious income I want freedom and no chain,
So I switch on to a Linux, MacOSX I use the least,
with those lovely little penguins I take midsummer's feast.

Some days I'm feeling sad and my parting brings me pain,
But without freedom for their Users, all their genius is in vain,
When I’d come back to Apple, I will tell them with delight,
To get me back they must adhere to freedom and my rights,
They must adhere to freedom and to every User's rights.

Dear Steve Jobs,

I once left Apple after a life of using Macs because you included the TPM chip in the Intel Macs, and I've been an active opponent of Apple ever since, because DRM and Trusted Computing (aka Treacherous Computing) go against everything I believe right in informatics, and you had just made Apple the spearhead of DRM.

I'm not likely to return as a user (I've grown too fond of KDE for that), but I am likely to return as a supporter, if you decide to give your users back the right to manage their own computers freely.

DRM takes away freedom from users, and I can’t support anybody who takes the freedom of people to turn it into profit. You have the option now, to shape Apple into a “good guy” again, and I urge you with all my heart to do it. You broke my heart in the past, but you might be on the way to mend it… and until you do so, I’m going to sing the song into which I shaped my pain back then:

# The scientific method in a dent/tweet (140 characters)

Science in a dent:

1. Form a theory. 2. Design an experiment to test the theory. 3. Do it. 4. Adjust the theory, if needed → 2

→ written on identi.ca.

Please feel free to use it!

If that's too brief:

and

That’s not faith. It’s theory. The difference is that there’s a clearly defined way to adjust the theory, when it’s wrong.

Naturally this is still vastly oversimplified, but that's the price you pay for trying to explain a complex system in 140 characters. What to remember: theory and experiment go side by side and fertilize each other. New theories allow finding new experiments which answer questions in the theories and allow finding new theories (or changes to old theories – or tell us which direction of fleshing out theories will likely be useful).
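Since the dent describes a loop, it can even be written down as a tiny program. This is just a playful sketch with made-up names (`observe` stands in for steps 2 and 3, `adjust` for step 4), not a claim that science is this mechanical:

```python
def refine(theory, observe, adjust, max_rounds=100):
    """Run the dent's loop: test a theory, adjust it when it fails, repeat."""
    for _ in range(max_rounds):
        prediction, measurement = observe(theory)  # 2. + 3.: design and run an experiment
        if prediction == measurement:              # the theory survived this test
            break
        theory = adjust(theory, measurement)       # 4. adjust the theory, then back to 2.
    return theory

# Toy example: the "theory" is a guessed slope k for y = k * x,
# while reality follows y = 3 * x.
slope = refine(
    theory=1.0,
    observe=lambda k: (k * 2.0, 6.0),  # predict y at x = 2, then "measure" it
    adjust=lambda k, y: y / 2.0,       # refit k to the measurement
)
# slope is now 3.0, and the theory no longer contradicts the measurement
```

Real adjustment is of course messier than refitting one number, but the shape of the loop is the same: the experiment, not the theorist, decides when to stop.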

Just like this text they need to be licensed under free licenses.

For my new Neo keyboard I wanted the GNU head from GNU and the plussy from FSFE on the meta/super keys (those which often bear a Windows logo). Sadly the normal GNU head did not work very well with the laser from Schubi, so I grabbed my tablet, fired up mypaint and created a new one, building on the old, but adding more contrast and stronger lines. I hope you like it!

See the attachments for other versions, OpenRaster and SVG source.

PS: And my spacebar says “Infinity’s ours and infinity’s free”! *happy*

# A roleplaying Easter holiday 2011

We played Exalted Sunday morning, slaying a second circle demon before nightfall, and Dresden Files (FATE) until 2 o'clock at night. We had a cross in the room, though: it was screwed tightly into the wall, and we could not get rid of it without damaging the wall, so it stayed there…

Well, playing solar and lunar exalted (warriors of the gods) and a renegade dragonblooded (normally weaker exalted, but slayers of solars and lunars, because those are prone to go mad) and burning glowing pillars of light into the night sky after finishing off the bambi-faced, brown-skinned, fan-armed demon directly after she was reborn from hell might count as serving the gods, don't you think ;)

Seeing my character's mother dead on the field of battle was a major blow, though. She died in this battle because I betrayed my kin to help the solar, who had become my friend before his exaltation and whom I had been sent to kill. My family was disgraced, so it is my fault that she was sent on this mission against a huge demon. I removed her armor and burned her in my anima's fire on a funeral pyre made from her 11 dead companions in full armor. When nothing was left but the dark-red glowing armor of her companions, I took her armor and moved back to the tavern where we had lived, while the other exalted were invited to a huge festival given by tale spinner and dream weaver, two of the town's three most powerful gods.

@Anonymous This is not about pixels. We play Pen-and-Paper, so we sit around the table and spin the tale ourselves. 12 people in two groups, 2 game masters, 5 players each. The game masters describe the world and we describe what we do and act out what our characters say - like improvisation theater with self-created characters and longer play-time; and more creative worlds, since we don’t need to show the worlds to anyone else: They only need to be alive in our own heads.

But yes: I burned in passionate anger myself when I saw my mother dead on the battlefield. And when we shouted at each other while talking about the best strategy, our emotions flared up - after making sure that it was OK for everyone involved in the shouting to let them flare up. You can live the emotions of your character and know that they are from your character - even though you were close to slitting each other's throats (“I throw my swords on the ground and call up to you: ‘I betrayed my own kin to save your damn life! Don't you throw it away now! Calling our enemies to us is idiocy!’” - Chireka, renegade dragonblooded, to Bright Arrow, dawn-caste solar exalted).

That's better than anything you can experience in the movies or in books, because it is you experiencing it, acting it out and deciding what you do. And the one I shouted with and I really enjoyed ourselves doing so: our eyes glowed brightly when we talked about it afterwards. The intensity was awesome: “Chireka so wants to kill your character - that was great!”

The game master asked me afterwards whether the death of my mother wasn't over the top, but I could wholeheartedly deny that: it was exactly the right thing to make my character burn with anger in the final battle. And as a fire-aspected dragonblooded, that meant that she literally burned.

The only problem we have now is that Chireka is not that sure anymore if betraying her kin to save the solar was the right thing to do. But that is a tale for another day :)

The important point is: Even though you live your character while you play, it’s still just description and numbers on a piece of paper, just like characters in a book. When the play is finished, the character becomes a treasured memory, like other heroes of fiction do - but you know that it was you who created him/her/it, and you shared a part of his/her/its life for a few hours.

PS: The rules of the game create a safety net against submerging completely: when your character acts to change something, you roll dice to see what happens, so there is always a level of abstraction behind which you can step if the intensity grows too high - just like you can take a break when reading a book or watching a video tape.

PPS: On Friday and Saturday we played Werewolf; the Easter roleplaying weekend is our yearly “Extremspielwochenende” (extreme gaming weekend). I wrote this post as a reply in Freenet (Sone) about what we did on Easter Sunday.

# fonättikl inglisch

The slaves we freed,

That they all fled.

PS: The title is “phonetic English”, written in a way that Germans can just read it aloud to speak it correctly.

# Foreign Lands

to bomb and conquer foreign lands,

It won't be attack, nor a sin,
as we will be the ones, who win,

and should someone then criticize,
we'll show our muscles to his eyes,

so never should again he say,
that foreign lands will foreign stay.

Defend ourselves, is what we do,
and our friends defend us, too,

So it's a real honor thing,
that defend bells of yours we'll ring,

we'll die for it, as you will see,

and if it's now not one of yours,
soon it will be, we'll help, of course.

# Gary Gygax (1938-2008) - he made the world a better place

It's strange to think that Gary Gygax is gone, if only bodily.

Here in Germany his creation is fighting a deadly battle with DSA (a German fantasy RPG which came out just after DnD and which raised me to be what I am today), and it's not quite clear which rules they use for that, but it's likely that DSA wouldn't have existed if it hadn't been for DnD setting an example.

Neither would the many roleplaying worlds and systems which sprang into existence after DnD.

I owe many thanks to Gary Gygax, though I only began playing DnD as one of my gaming rounds about a year before his death.

He was one of the creators of our hobby, and I believe that he not only made our world more fun, but also made it a better place.

I don't believe in a heaven, but he achieved about the most a person can achieve in our world.

And I know that this isn't a normal letter of condolences.
I just can't look at him with regret, now, but only with deep gratitude.

- Arne Babenhauserheide aka Draketo

Tributes by others:

And many more which are gathered at Enworld News who seem to do this gathering far better than I could.

Also an email account was created to send a letter of condolence without swamping his main mail account: http://www.freeyabb.com/phpbb/viewtopic.php?t=4378&mforum=trolllordgames

All content of these sites is under free licenses, except where explicitly noted otherwise.

This means you can use my works however you want (even commercially), as long as you allow and enable others (and me) to do the same with all the works you create from or using parts of my works, and say who created and modified the original works.
The works must stay under the same license(s).

To use them, you can (for example) just put this license text alongside them (e.g. as an HTML page) and create a link pointing to it. Other possibilities can (later) be found below.

More exactly: My works can be used (depending on the type of content) under the following free licenses:

• GPL (with the "or later" option).
• GNU FDL without invariant sections or cover sections. The most widely spread one, also used by Wikipedia, but mainly intended for documentation of free software (with the option to switch to the SFDL when it's ready).
• Art Libre.
• Lizenz für freie Inhalte.

Programs/Applications are only available under the GPL, other content under all four licenses.

I keep the right to relicense all content on these sites under other licenses, as long as those other licenses make sure that the "four freedoms" are being kept.

More detailed info can be found on the German version of this site.

# Style over Substance

Stories of Weaklings, who win every fight
against bigger foes with their voices' might,

Stories of Anarchists, who do nothing more,
than talk and talk, and still win the war.

Stories of Mages, who mumble and roar,
for a fizzling spell, which still makes them sore.

Stories of Dreamers, who sing in the night,
and weave our future, shining so bright.

All this you can find here, come out of the dark,
set Style over Substance, for that is our mark.

# Some of my answers to basic questions

Written in a survey about attitudes towards free software.

## Is proprietary (=unfree) software immoral or unethical?

It isn't immoral (moral = the current stance of mainstream society), but it is unethical.

In a society where people are used to being forbidden to give bread to a starving child, giving bread you'd otherwise throw away to that child instead could well be immoral.

So only software which allows you to act ethically is ethical - and that's free software. Even better is free software under strong copyleft licenses like the GPL, because that protects our right to act ethically for any future versions of the software.

## Do you believe that proprietary software is "illegitimate"?

No.

Legitimate doesn't just mean "not contrary to existing law". Even in countries where the police are allowed to torture people, torture is illegitimate. At least that's my understanding: illegitimate means that something is wrong and should be forbidden.

I believe that people have the right to make unfree software (people also have the right to do tv-shows like "popstars"). I don't think anyone should use that software, though.

I can't force people to adhere to my ethics without acting against my ethics myself. But I can try to convince them that my understanding of ethics is right.

## Do you believe that proprietary software is "antisocial"?

In many cases yes. But it depends on the case.

# Note

If I had to develop unfree software to earn enough to live a more or less comfortable life, I'd likely choose to do so. That's why I fight now: so I can earn money ethically later on, or at least enable my children to do so (more detailed in German).

# "Creative Content in a European Digital Single Market: Challenges for the Future"

-> sent to avpolicy@ec.europa.eu, markt-d1@ec.europa.eu in reply to "Creative Content in a European Digital Single Market: Challenges for the Future" as published by the European Commission.

Thanks to Glynmoody for getting the word out!

Dear European Commission,

Summary: The goal of copyright is to get more money to more authors and more cultural works to more citizens. Due to the changes that the free copying of the internet brings, additional protection doesn't help achieve that goal.
The proposal paper goes into many technical details, but loses the focus on the benefit of copyright to the citizens - and what kind of copyright protection is useful today.
Due to this, many of the measures (especially DRM) have to be reevaluated to see whether they really benefit our society and cultural development, or only try to cement a status quo which doesn't benefit the citizens in the light of the changes to technology and consumption of cultural works.

Please keep in mind that copyright is not an inherent right. Instead it's a state-given information monopoly with a simple goal: increase the quality and quantity of creative works available to everyone.

As such, copyright law grants authors (copyright holders) the right to control who may be in possession of their works, because being able to make money with one's creations helps create more and higher-quality works.

Also it grants middlemen the right to make money from copies by establishing treaties with authors. These middlemen are useful, as long as they offer a major contribution in getting the works to the public and getting money to the author.

And it grants fair use rights to all citizens, which helps spread the works and enables more people to enjoy our culture the way they enjoy it most. These fair use rights are accompanied by flat payments which are given directly to the authors, so creators of creative works receive money from an additional pool whose size is related to the amount of cultural works people share.

Currently the best balance between these different kinds of rights (copyright of the creator, use rights of the middlemen and fair use rights of the citizens) is changing due to the almost cost-free copying of digital content.

Now the middlemen often no longer serve as waybuilders between authors and citizens, but as gatekeepers who lock citizens out of our culture. Also they often take a high percentage of the money citizens pay for cultural works, even though their costs for spreading works (and finding good works) were reduced greatly. When a musician gets a few tenths of a Euro from each sale of a 15 Euro CD, it's quite clear that the middlemen use up money which then doesn't help the authors create more cultural works.

Traditional (expensive) ways of spreading content are being made unnecessary by the faster ways of spreading content digitally. But the middlemen control the flow of content from author to citizens (partly by copyright law), and they use their control to draw a major share from the money citizens want to give the authors of the works they enjoy.

More: They often also hinder citizens from telling others about the works they like. In the digital world, people can instantly send music they enjoy to their friends, and if their friends like it, they can buy it - or send it onward to other people who might like it more. And once someone gets something she/he enjoys very much, she/he usually wants to give the author money, so the author can create more works she/he enjoys.

By using "illegal downloads", people learn about new works and decide whether they are worth paying money for - and recent studies show that those who use p2p networks to download music illegally are also the ones who buy the most music.

Because of this, I think that the paper focuses too much on the "protection of the copyright holders" and too little on the question of how laws can help make as many cultural goods available to every citizen as possible. So I want to offer some thoughts:

To achieve that goal, copyright always has to strike a balance between different objectives:

1) Authors need money to be able to work full time. So they want as much money as possible for their works. Some kinds of works take far longer to create, but have great cultural value (for example science books and investigative journalism), so authors who spend very much time on research (or similar) need a way to earn enough from their work, even though they have a smaller quantitative output.

2) Citizens want as much of the culture they enjoy as possible for the money they have available.

3) Authors and citizens need to find each other, so the citizens can find works they enjoy.

4) Cultural works have to be brought from the authors to the citizens, and money has to be brought from citizens to the authors of works they enjoy (with as little loss as possible). "Bringing works to the citizens" can include polishing the work, so the citizens can enjoy the works more. A book with 10 errors on each page is very hard to enjoy for most people, as is one with glaring errors in the plot. And a CD without a cover image will find far fewer listeners, regardless of the quality of the music.

In earlier times, the balance which brought citizens the highest amount of cultural works they enjoy was to have big middlemen who were able to shoulder the high cost for printing books, recording tapes, pressing CDs and carrying these from country to country (as well as a part of the risk of promoting unknown authors).

Today the cost for spreading cultural works is almost zero (more exactly: We already pay it by paying for our broadband connections) and finding an author I enjoy is easier with a search engine or using resources written by online communities for free, so the best balance is shifted. Due to this, having stronger fair use rights (so people can more easily pass on works and turn others into paying fans of an author) could be a far more efficient way to bring cultural works to everyone while paying the authors.

And stronger protection of "rightholders" (who today more often serve as gatekeepers than waybuilders) could backfire quite badly and harm the cultural development of Europe (even today musicians complain that they only get a very minor share of the money people pay for their works).

And since the cost of spreading a cultural work to people is almost zero (with technologies developed in filesharing communities, even the bandwidth cost drops to almost zero, since every participant contributes some bandwidth for spreading the work), there is no real reason why someone who has only 15€ to spare each month should enjoy far fewer cultural works than someone who earns 10.000€ a month.

In earlier times, if a poor person spent 15€ on a book, more than 10€ were needed to pay for producing the book. That was a natural restriction on the number of works he could enjoy. 5€ went to the author he liked best (if the author was very lucky), because he could only pay for at most one book. He couldn't afford to read works from other authors.

Today that same person could read 15 books and pay 5€ to the 3 authors she/he likes best, and the author of the first book would gain just as much money, two others would get money (who wouldn't have gotten money otherwise), and the remaining 12 authors wouldn't lose anything compared to the high-production-cost alternative.

And this clearly shows a glaring error in ever increasing the "protection" of monopolies: someone who has 15€ to spend on cultural works doesn't get more money to spend if he can't read works for free. So the main question is how to get people to give the money they have available to the authors while giving them as much access to cultural works as possible. And since, for example, in Germany about 50% of the citizens have too little money to pay any relevant amount of taxes, this thought is valid for about 50% of the people in Germany.

Adapting copyright laws to the current times has to take into account how copyright laws benefit the society. Copyright monopoly rights are being granted by the state (since we're living in a democracy that means: by all citizens) to individuals for the benefit of all citizens. So the goal of any copyright change should be to benefit all citizens.

It is in the interest of society that as many people as possible can enjoy as many cultural works as possible.

Criminalizing most citizens doesn't come close to that goal. And restricting what people can do with works they purchased (DRM) doesn't achieve it, either. Both only protect the middlemen, neither the authors (from whose income DRM is effectively financed) nor the citizens. DRM makes spreading cultural works more expensive, so it harms authors as well as citizens. It adds a needless control structure which sucks away money that should go to the authors.

And people like Howard Taylor (the creator of the free webcomic http://schlockmercenary.com) and all the free software programmers out there who make a living with their programming show that many citizens today are mature enough to pay for the things they enjoy, even though there is no gatekeeper forcing them to.

So please leave the "we need more protection" track. What we need is more money for more authors and more cultural works for citizens.

Cementing the current power-structures in creative business despite the changing technological environment doesn't achieve that.

When considering how a single market (a market accessible to everyone in the same way) affects the creation and spreading of creative works, the focus should instead be on comparing the different possible approaches to strengthening the creation and spreading of cultural works, and on seeing which balance between these ways is most efficient. This requires rethinking the support which copyright law gives to the different revenue sources of authors (flat payments on copying devices, income from direct sales, money from middlemen, money from "additional value products" like signed copies, direct donations by fans so they keep producing, and many more) and adjusting the balance between state-granted monopoly rights for authors, state-granted monopoly exploitation rights for middlemen and fair use rights of citizens to make it fit the current technological and social situation.

There's one more interesting fact on that topic I want to spotlight: the German group for distributing the money from flat payments on printers and photocopiers, "VG-Wort"[1], now pays webloggers with money from flat payments, because they acknowledge that these create a considerable share of currently consumed cultural works. Since most webloggers work without direct payments, this is a major change for the commercial viability of creating works which are freely available to everyone with an internet connection, regardless of their financial situation.

At the same time, projects like Creative Commons[2] show that for a major share of authors of creative works it is most important that no one can misrepresent their content as the creation of someone else, while "forbidding people to pass on the work without making money from it" isn't very interesting (and isn't even useful financially for lesser known authors, because it stops people from spreading the word about the author).

So the first question to be answered is not "how can we ensure that the copyright protection holds in the light of current technology", but "which balance of monopoly protection, fair use rights and direct state-support of authors (like the sponsoring of theaters in germany) is most efficient in achieving the goal to enable as many citizens as possible to have access to as many cultural works as possible in the changed technological environment". Detailed questions about monopoly protection schemes and such (and which of them benefit our society today) only make sense once this basic question has been answered for the current situation.

And "Copyright is the basis for creativity" isn't an answer to that question, because it a) is clearly wrong. People created at all times, while copyright law is only a few hundred years old, and b) doesn't answer, how copyright law benefits European citizens - and how that benefit changes with digitization where every act of viewing is in fact a copy.

Best wishes, Arne Babenhauserheide

• on differing content and goals: The content of the article shows a nice overview of problems of the current licensing system between companies, while the 'Strategy for "Creative Content Online"' talks of goals (DRM, filesharing prevention) which aren't more than brushed by the content.

• on the focus of the paper: Important topics like user-created content are only named, missing the simple point that most of these works are simply illegal today. Companies can clear their licensing with each other - they don't necessarily need new rights for that. But most citizens can't. They can't just sit together and decide to only buy media licensed under specific terms, because the companies can almost completely control the supply. Ordinary citizens are the ones who need clearer laws. And in a democracy, they are the ones for whom laws should be made.

• on "financial incentives for creatives": As psychological studies show, creativity is best fostered by giving creatives enough money to live a comforting live, but the hunt for as much money as possible can stifle creativity instead of strengthening it. So strengthening a single-minded market-driven revenue model for a state-given monopoly doesn't help create creative works of higher quality.

• on the justification of copyright itself: You can also find related thoughts about the reasons for having certain kinds of copyright (in german) at http://draketo.de/licht/politik/geistiges-eigentum-sinn-des-urheberrecht...

• on DRM systems: DRM systems establish a control inside people's computers which isn't in turn legitimated and controlled by the state. As such it takes the role of the police without being authorized by the state (which in turn is authorized by the citizens). To force citizens to accept this additional foreign control of their actions, middlemen abuse the monopolies granted by copyright law, because these give them the right to establish new rules on how their content may be consumed. That way the DRM restrictions are established with powers granted by the state, though they aren't legitimated by democratic processes. They even undermine fair use rights. Also, any DRM system breaks the premise that people are free to act as long as they are willing to face the legal consequences. While I am free to ignore speed limits when I'm on the way to the hospital because my daughter is bleeding to death on the backseat, but might lose my driver's license afterwards (what's a driver's license compared to the death of a daughter?), a DRM system would keep me from taking that decision and would force me to let my daughter die, because my car simply wouldn't drive faster than allowed. That way DRM systems break the premise of the responsible citizen, but since any democracy requires responsible citizens as its basic premise, this reduces our whole legal system to absurdity. So DRM shouldn't be supported by laws. Also, fair use rights need to be protected against DRM restrictions. These restrictions are forced on people by using the monopoly granted by copyright law, and they keep people from exercising their fair use rights, granted by the same copyright laws.

• on "culture industry": A culture industry isn't useful for society by definition. It is only useful if it helps get more and more enjoyable cultural works to everyone (or at least the vast majority of citizens, including those who earn only very little money). Only in that case is it warranted to give it any additional legal support.

• on "market as regulator": Using the "market" to regulate the behavior of the middlemen with the power of the consumers doesn't work, because copyrighted works are monopolies by law and the market only works without monopolies. Creative works can't directly compete against each other, because people have no way of getting an equivalent alternative since every creative work is unique.

• on forcing people to pay: Today almost no one is forced to pay for any digital goods, because almost everything is available for unpaid download somehow (sometimes illegally). That people still pay for the creative works they enjoy shows clearly that most people want to pay authors for the goods they enjoy. That's something which is deeply ingrained in our psyche: if someone gives us something, we want to give something back. Due to these two effects, it's quite clear that building bigger and bigger restrictions into legally bought content only harms the people who want to give the authors money. It would be far more useful to establish a system which enables people to securely and effortlessly give a few Euro to someone else, or even just a few cents. A "one click donation" which every EU citizen could use would give authors of creative works far more support than any "harmonization of restriction management systems".

• on me: I am a stakeholder, as I am at the same time a music and book customer, a hobby free software programmer and a hobby writer who publishes under free licenses (on http://draketo.de and http://1w6.org ). I learned about the music genre I enjoy the most (Filk) when I downloaded some tracks in a filesharing network many years ago, and I now own more CDs of that genre than of any other genre, adding three or four CDs to my collection every year. If there had been any effective fair-use-prevention measure in place back then, I still wouldn't know my favorite kind of music and I still wouldn't buy more than one CD every two years or so.

# "Person caught who stole IDs via Gnutella" - ridiculous p2p bashing

Comment to LimeWire ID theft case.

That means people who spread child porn were caught because they used public p2p networks (where law enforcement can find them). And instead of thanking LimeWire that they were able to catch a criminal because he was lured into the open (instead of selling the material invisibly via the postal service), politicians blame LimeWire for the existence of material which had existed in the dark long before Gnutella made sharing easy and public.

These people don't become criminals because of LimeWire.

But they get caught because they use it and don't realize that everyone can find what they share and track them down - including the cops.

As soon as the crime is bad enough that the cops inquire at a court to get the data of the criminal internet user, that user can easily be tracked down. It's far less effort than stopping someone from sending illegal material via the postal service.

So LimeWire and public p2p help the cops.

That ID theft case is even weaker. It is as if we'd ban cars because some people forget to lock them - or ban wallets because some people lose them (including their ID). The main difference is that you have to actively disable security to lose your ID via LimeWire while your wallet just slips out.

Somehow I smell other motivations than stopping crimes here...

# A downside of networking and public reputation: No communication for the sake of communication (alone)

-> A comment on The Importance of Managing Your Online Reputation.

I read your article, and I found the points you make very interesting, though not only in a positive way.

You tackle the “we have a network others can see” from the active side: “How can I make sure my employer likes what he sees?”.

But there's also the other side: We use the web for communicating with people, and this communication is being pulled into the open, and everything we do online is being instrumentalized to draw information about us.

This also means that no communication over a public channel can be done for the sake of the communication itself, and so the channel becomes more and more useless for any creative communication (as opposed to just exchanging preconceived and unchanging ideas).

This might sound harsh, but it follows from two observations:

• When we want to act creatively, we are most efficient when we do it for the sake of the activity itself. -> http://www.gnu.org/philosophy/motivation.html

• When people know that they are being watched, they act differently (sadly I have no link on this).

Another issue is an adaptation of the “unclear prophecy” problem: if people know that their online activity is being measured, they will change their behaviour to please their intended future employer, and so no measurement gives you estimations about the person which are relevant to the job. Instead it only measures one parameter: “How good are you at conscious social network building?”

And for many jobs that skill is almost irrelevant.

So using public communication for calculating a score of some kind runs into a paradox as soon as people know that they are screened, and it harms normal communication. Because of that, I hope that more and more people will realize that unscreenable but efficient communication is important.

For example, a network similar to identi.ca / twitter could be built on jabber with decentral buddy-lists, which can't be harvested as massively as twitter's, and the really paranoid could completely switch over to freenet as their news communication provider: http://freenetproject.org

# ACTA - A trend to be reversed

A reply to a comment on slashdot named Can we fight the trend?:

There was a trend toward having only proprietary software (as formerly free software was locked away by the job contracts its creators signed) and toward having the hacker community die out.

That trend was reversed by GNU with the invention of the GPL and the GNU System.

And today millions of people use free software and we have organizations like the EFF and FSF who work for a free software society.

- That huge success story in about 4 minutes: infinite-hands.draketo.de

More people than ever before use free software, and it becomes an integral part of our society as more and more government offices (e.g. in Germany: Munich) and companies adopt it.

Today we have a trend toward having only nonfree culture (with the laws being turned upside down and politicians being bought) and toward members of the free speech community giving up.

What I learn from history is:

That trend can be reversed, too, and our society might become a free culture society, just like it slowly becomes a free software society, even though most people will only realize it in hindsight.

"Do you still remember the times, when every office had Windows in it?"

"Only barely, but do you still remember the times, when we feared lawsuits when we accessed the predecessors of the culture pool?"

"Sure! Those were the times. Now, let's get writing again. Don't want to let our fans wait for the next story arc, do we?"

The ones who profit from unfree media will put up a fight this time, though.

And that they choose to go semi-criminal shows that, unlike the proprietary software vendors back when GNU was founded, the unfree media companies are already losing, and they know it.

# ACTA horror - what can we do?

I didn't yet manage to get really reliable information on what ACTA actually does (that's a marker for 'this is dangerous' in itself), but what I see on wikileaks sounds horrible:

"The deal would create a international regulator that could turn border guards and other public security personnel into copyright police. The security officials would be charged with checking laptops, iPods and even cellular phones for content that "infringes" on copyright laws, such as ripped CDs and movies."

'Check my laptop's content'???

What about my electronic diary, then?

Without a clear judge's order, no one is allowed to look at my private files, and should they remove that restriction, they can just as well remove all privacy.

And it gets worse:
"The guards would also be responsible for determining what is infringing content and what is not."

and worse:

"Mr. Fewer and Mr. Geist said, once Canada signs the new trade agreement it will be next to impossible to back out of it.
In a situation similar to what happened in the Softwood Lumber trade dispute, Canadians could face hefty penalties if it does not comply with ACTA after the agreement has been completed."

Ouch!
That doesn't sound like a treaty between nations, but more like some big players conspiring to create law which binds all others, and that clearly is antidemocratic.

So a big question looms: What can we do against ACTA?

## What can we do?

Ways to act I found:

# Amarok - context on music - yahoo comes a tiny bit too late

There was a talk by Ian Rogers from Yahoo! who explained how labels made a hell of a lot of horrible missteps in fighting p2p and in trying to push DRM, how Yahoo now offers a free music service, and how music software terribly lags behind the music scene. http://www.netribution.co.uk/2/content/view/1317/182/

But....

The context he talks about already exists. Just have a look at Amarok:

- Context: http://amarok.kde.org/d/en/index.php?q=gallery&g2_itemId=1375
- Wikipedia: http://amarok.kde.org/d/en/index.php?q=gallery&g2_itemId=1381
- Lyrics sites: http://amarok.kde.org/d/en/index.php?q=gallery&g2_itemId=1378
- and an integrated store where you don't have to buy to listen:

And all that in a free software program, so no one dictates any rules upon you.

I don't know about you, but I definitely get excited by it!

# Anonymous against trapwire - on camera??

An answer to a reddit-comment by tedemang to the article 1540 Anonymous vs. TrapWire: "We must, at all costs, shut this system down and render it useless".

Do you think, joining anonymous really helps there? That’s fleeting power, but I don’t see alternative structures being set up. This just exposes all those who want to support the cause. In front of cameras, connected to a surveillance system which records every action…

In the short term, to keep digital communication secure, use freenet over your existing internet connection. If possible in darknet-mode, connecting only to your friends → freesocial.draketo.de

In the mid-term, get a flourishing local community in your neighborhood, ideally with community-operated internet like a meshnet - and get someone from your community elected as mayor → /r/darknetplan

And make sure you all have access to alternative media sources. Maybe provide printed copies of good blog posts to your local baker.

In the somewhat longer term, fix the democratic system, so the rich cannot completely rig the votes by deciding who gets their money to run for election.

In the long term, fix the economic system, so we don't automatically get that huge imbalance in power once the system has run without major disruption for more than 20-30 years.

Remember that what you are going up against is the very instrument the oppressive elements of our state want to use to oppress us. That instrument will monitor you, and they will try to use that data to oppress you - and to cast you in a bad light, so they can convince your neighbors that they need more cameras against those vandalizing youths (without telling them that those youths are the same ones who come over for coffee during the next summer-festival).

# British Telecom wants to block accounts just for using Gnutella or BitTorrent

-> a comment to BT to cut off file sharers from TechWatch.

2) They just had a BitTorrent or Gnutella program running.

1 is unlikely, because not every fourth internet user will have downloaded that song.

And if 2 is the case, BT should be sued to its knees.

Having a Gnutella program is not illegal, and blocking access to Gnutella means vastly reduced service.

It's as if they'd take away your flat because someone saw you using a kitchen knife.

The same is true for BitTorrent which for example gets used by millions of people to download GNU/Linux distributions without creating too much traffic on the servers.

It's what you do with your tool that might be illegal, but having the tool is perfectly legal, and when BT blocks it, they are unduly worsening the service for their customers.

Best wishes,
Arne

# Defective by Design is doing something important - actions like theirs got me to GNU/Linux

-> A reply to bashing of Defective By Design.

I was a rabid Mac user 5 years ago.

Then I learned about DRM, TPM and privacy. And I left Apple because they put TPM chips into developer machines.

Today I'm a happy GNU/Linux user and I contribute from time to time to Gentoo, KDE and Mercurial.

(my way from Apple to GNU/Linux:
- http://bah.draketo.de/ (Broken Apple Heart, in German)
- http://draketo.de/english/songs/light/broken-apple-heart (in English) )

So DBD isn't only talking to the converted. Without actions like theirs, I wouldn't be a free software user today.

They just don't reach every average Joe with a single campaign. But who could? With a few hundred people?

What they can achieve is that once an average Joe gets into problems with DRM, there's a chance that he won't think “surely I made a mistake. I'll just buy the stuff again” but “weren't there people who said that Apple tries to take my freedom? Seems they were right. I won't fall for DRM again!”

And they can reach critical thinking people, who realize they should also think about their freedom when they buy a new device.

# deletion attempt against the dwm article on wikipedia (comment)

-> a comment to
Wikipedia, Notability, and Open Source Software by ubunTARD.

2010-03-23
Update: I just got unblocked by henrik, who also sent me an apology for the way the whole process was handled: “…The block was partly an individual misjudgment, but also a result of the systemic culture and some poorly thought out policies. If you're interested, I'd be happy to discuss it in more detail…”. And that restores a lot of my faith in the wikipedia community — thank you very much for your apology, henrik!
They are also currently discussing on the incidents board how to avoid similarly excessive blocks in the future.

Just as an inside note from the discussion: I joined the first deletion discussion when I got notice of it (I don't remember through which channel), and when it got closed, I joined the second one and got heavily frustrated when people tried to turn “he sent the developers a berliner bratwurst” into “the magazine which published his article is a first source” (which would mean it wouldn't count as a source for “notability”).

In that discussion I was mostly alone, and I could only talk there because I've been a wikipedia user since 2004 and have casually corrected smaller errors in articles whenever I happened to see them while looking something up. I was one of the many small contributors who might not write large-scale articles all the time, but who do their share to improve the quality of the articles.

Most others couldn't join up, because the discussion was marked as “semi-closed”, so only longtime users could contribute. And the major contributor to the previous discussion was blocked for meatpuppetry, along with the developer of dwm (additional info) who didn't even cast a vote but only provided sources (reason: “mass ban the meatpuppets” — the dwm developer was unblocked afterwards by others).

After spending hours on refuting their claims, I got frustrated enough that I stopped discussing — and I posted that to identi.ca -> http://identi.ca/arnebab/tag/dwm

Subsequently I got blocked from editing on wikipedia “indefinitely” (except on my talk page) for “canvassing” (since when is ‘they want to delete dwm’ equal to ‘come all here and vote for keeping dwm for the following reasons…’?), for quoting policy which says that you shouldn't contribute to a deletion discussion if you don't know much about the topic, and for saying that I think that Psychonaut isn't in a position to judge free window managers.

In my view the policy that you must not speak about the deletion attempt outside wikipedia or risk a ban is even worse than nondisclosure agreements: “You must not speak about this public discussion, or you get banned for meatpuppetry and canvassing.”

I am now pissed off badly enough that I won't appeal for an unblock. If the powers-that-be in wikipedia don't see for themselves that the block is unjustified, then the power structures in there are such that any contribution I make is at the mercy of moderators who abuse policy to harass free software, since they are not stopped by the ones who don't agree with their doing.

Every public resource run by volunteers faces the danger of falling into the hands of dedicated abusers, and wikipedia is no exception. But it is exceptionally vulnerable, since the ones who contribute content are normally not interested in the necessary day-to-day maintenance, so writers and maintainers are strongly separated. Yet the maintainers get most of the power, because they are the ones who get informed of actions which concern articles they are interested in, and because they have the connections inside wikipedia.

But as if that wasn't bad enough, I think there's a third and easily overlooked group: those who don't write full articles, but do fact checking when they come upon an article on a topic they are knowledgeable about, and that way improve the general quality of wikipedia a lot (unstructured peer review). These don't take part in discussions, but mostly use wikipedia as a source, and so they don't want to spend hours on reading some new policy. Instead they generally trust that Wikipedia lives up to its goal of collecting the sum of human knowledge in encyclopedic articles - and they do their share to help achieve that goal.

They aren't seen as huge contributors, since each one only makes a few changes each year, but together they make a huge difference.

I'm mostly a member of the last group (and to some degree article author) — I'm almost sure you expected that :)

And I think that anti-canvassing rules (“don't tell people that the project they feel strongly about is in trouble on wikipedia”) and excessive deletions chase away a major part of these casual editors (don't ask for a citation - this is gut feeling and my own thoughts: “Why should I spend 5 minutes on correcting a few errors in an article on a topic I know much about, when the article could be gone in 5 months' time?”).

The article authors might come regardless of the rules and try to add the topic they know much about. But the casual editors will likely be gone for good (and won't ever become authors).

And that would create a major change in the community, cutting wikipedia off from the normal people on the web. And you can imagine how that would affect the value of wikipedia to these people (the vast majority) and its resistance against being misused by some few people to further personal goals.

Besides: Who Writes Wikipedia suggests that even the main authors are mostly casual contributors, so the effects of alienating casual users would be even worse than I write above: Wikipedia would lose its source of information.

PS: I didn’t join in the Appeal to delete anyway. Luckily it got refuted. Clearly.

# Don't completely rely on something you don't control (SaaS)

in reply to You do know you can't rely on Gmail, right?

You're citing some of the reasons why I dislike SaaS, but there's one more:

Whenever I use a SaaS application, I trust someone whom I really can't reach, and I trust him without being able to exert any kind of control.

He wants to use my data for marketing purposes? No problem - I won't ever find out, since I can't check the physical disk's last-accessed flag. So what about that being illegal? If I can't find out about it, why should he care? I won't ever be able to sue him.

Sure, most people are nice and law-abiding, but I prefer not to rely on everyone being honest who has access to my data on some remote server.

Sure, I can use encryption for the data I upload, but any data generated on the server will be open to the admin - regardless of the security scheme on the server, because the admin could just fake it.

So it's always back to trusting people, and I prefer not to trust others too far (nor too little).

So your company keeps its company secrets in gmail accounts? How long will it take Google to find them, should they happen to become a competitor in your field?

If you use gmail without GnuPG encryption, you can just as well give your data directly to Google.

And the same holds true for every other SaaS solution. You can't ever trust the remote server.

It also holds true for all unfree software, by the way. You can't look inside it (or get someone else to do that), so you can't know what it does. Do you really dare to trust it?

# How Drupal will save the world - Simplicity for beginners, complexity for experts - get in quick

Written in reply to: How Drupal will save the world.

I experienced the same with modules (having to search for hours), and I think I know at least two ways to make Drupal more accessible to newcomers.
A bit of background: I just set up my third Drupal page, and I find new modules even now. The pages were of three slightly different but very similar types:

• A newssite, needed mostly taxonomy.
• A personal site, needed book and taxonomy, as well as themes.
• A site for a free roleplaying system. Mostly needed book.

But even though the pages were quite different, I find myself reusing most modules.

And it took me hours to hunt them down.

To make the modules more accessible to newcomers, they should be more organized.

One way to organize them would be to add another sorting by the type of page they are used for (usecase). A blog, for example, needs different modules than a newssite, but there will be much overlap.

Then users could simply check "I want a blog. Which modules do I need?"
Still they'd have far too many to choose from, and the choice needs to be simplified for first-time users. To do that, users should be able to sort modules by popularity.

Ways to sort by popularity:

• Vote: Allow users to vote for modules and show the votes.
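The two-step filtering proposed above (pick a use case, then rank by votes) can be sketched in a few lines. This is only an illustration of the idea; the module names, use-case tags, and vote counts below are invented for the example, not real drupal.org data.

```python
# Hypothetical module catalog: each module is tagged with the use
# cases it serves and carries a user vote count.
modules = [
    {"name": "taxonomy_extras", "usecases": {"blog", "newssite"}, "votes": 120},
    {"name": "book_helper",     "usecases": {"community book"},   "votes": 80},
    {"name": "photo_album",     "usecases": {"personal site"},    "votes": 45},
    {"name": "comment_tools",   "usecases": {"blog", "newssite"}, "votes": 200},
]

def modules_for(usecase):
    """Return the modules matching a use case, most popular first."""
    matching = [m for m in modules if usecase in m["usecases"]]
    return sorted(matching, key=lambda m: m["votes"], reverse=True)

# A first-time user asking "I want a blog" would see only the
# relevant modules, ordered by popularity.
for m in modules_for("blog"):
    print(m["name"], m["votes"])
```

A module browser built this way would show newcomers a short, popularity-ranked list for their kind of site instead of the full unsorted module directory.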

The second way to make Drupal more accessible would be to create rich compilations. That means: don't just offer a "general drupal, search your modules by hand" download, but also some specialized precompiled versions, ideally with an adapted config already included.

• Drupal Community Bookwriting
• Drupal Community Newssite
• Drupal Personal Webpresence
• Drupal Blog
• Drupal Webshop
• Drupal Wiki
• Drupal Forum
• Drupal Rich Community Site (Forums, Community Book, Blogs, Webshop, Wiki - the full package)

These should then be the downloads a visitor first sees, to make the Drupal site a site for users.

Examples:

• Drupal Community Bookwriting: http://1w6.org - mine, german. If you like it, I'll gladly send you the details of the setup. http://1w6.org/contact .
• Drupal Community Newssite, if not perfect: http://gute-neuigkeiten.de - my first drupal installation.
• Drupal Personal Webpresence: http://draketo.de - my second Drupal installation; it still misses photo albums (since I don't yet need them) and similar features to be a full-fledged personal web presence.

- All parts of the design on these sites are licensed under free licenses (one of them being the GPL). -

These two ideas still give experts the full power of Drupal, but enable newcomers to get a site running quickly.

If you like the idea, please feel free to contact me: http://1w6.org/contact

# Howard-Taylor: A rising figure

A comment to The newspaper said it, so it must be true:

You already made the rise to "I get paid for doing a free webcomic"; now the next part is... ?

Some ideas:

• Being paid really well
• Having Sandra be paid really well, too
• Having a Schlock foundation which pays you for the online comic directly
• Getting a six figures income from Schlock
• Having the Schlock foundation grow enough that it becomes the Taylor Webcomic fund which pays webcomic authors all over the world
• Founding a team of Space Mercenaries and writing the comic about your actual adventures as Schlock sidekick
• Really having someone do the research so the Schlockers can beat NASA to Mars
• Learning the trick to living long enough to go on inking where no one has inked before
• Founding the Schlock colony fund which pays people to leave earth, meet interesting life forms and take over their planets :)
• Finally taking a strange scientist on board who starts the biggest intergalactic war by revolutionizing galactic transportation.
• And at last, building a time machine and going back in time to be a webcartoonist again :)

# I hope French Filesharers turn to Freenet

→ Comment to France Starts Reporting ‘Millions’ of File-Sharers by Torrent Freak.

I hope they all turn to freenet. There’s scant chance of getting many user-addresses there, and it can provide a service similar to torrents and a decentral tracker in one, but anonymously and safe from censorship.

I’ve been running it for years now, and it got better and more secure every year.

The really paranoid can use it in darknet-mode: Only connect to people they know personally. Then it gets really hard to find out that you use freenet.

But even in Opennet, it’s extremely hard to find out what you share or download. Freenet is built for the needs of dissidents in repressive regimes and to avoid any kind of censorship, so it delivers sufficient privacy and anonymity for filesharers.

A word of warning, though: Compared to well-seeded torrents, freenet is slow. That’s the price of anonymity and privacy. But nowadays it’s fast enough for fansubbed anime and beats many weakly seeded torrents :)

Maybe then the media companies will learn that the way to make money with entertainment is to make it good and personal enough that people want to give them money to make sure they keep producing more great stuff. They could learn from Howard Taylor and Schlock Mercenary.

# KDE and Gnome vs...

I'm a KDE user and quite excited about KDE 4, but I think the progress of Gnome is very promising, too.

Gnome and KDE both innovate, and both push limits, and both will learn from each other.

KDE learns from Gnome and uses the Telepathy definition.

Gnome learns from KDE and switches to WebKit which originates from khtml.

Both work together under the hood of freedesktop.org

And both are moving ever faster to replace proprietary systems.

So hey, I might be a KDE user and I might care most about KDE, but Gnome and KDE are both important, because being two projects they can move in different ways, find together again and move out again and that way cover far more ground than a single project could.

I want many people to use KDE and Gnome users want many people to use Gnome.

Lets move out, then, and create guides for our users and create many great things which bring them to the respective desktop, and while we try to create a better experience than the other free desktops, we might suddenly see, that we just surpassed any non-free desktop together.

Then we can sit down, celebrate a big free software party and begin outpacing the respective other one again.

And while doing so, we can still keep contact, share ideas and work together, and we will make a difference.

# Killing the head of a terrorist organization doesn’t stop it

→ A comment to The Effectiveness of Political Assassinations.

Another answer why this doesn’t work is really simple: Consider that you were in a terrorist organization. You work with people in secrecy, but the ones you know are close to you, because they know your most intimate secrets.

Short: You fight alongside friends (though probably assholes by most ethical standards).

Now someone kills one of your friends.

He is shown around in the media, and people say how evil he was.

Now imagine not wanting revenge. Quite hard, isn’t it? A religious or power-play argument just got personal.

If it helps, imagine that the one who got killed was your father, sister or beloved one.

If it’s still hard to imagine why killing a leader is counterproductive, try to imagine that someone raped and killed your 14 year old daughter. Then he got celebrated in the media as hero. Would you manage to not start a personal war against him but to calmly go to a lawyer and accept to hear that your daughter incited him to his acts by dressing like a whore?

If this sounds unrelated: It’s the same emotional reaction, just pulled into our own cultural context. Terrorists believe that they fight for a just cause (at least if they aren’t only in it for the money). So any killing just strengthens their will to fight all out.

The only reason why killing a leader could stop the group is that the leader may be the only one whom everyone inside the group knows and who can coordinate it. But naturally he has lieutenants who also know everyone, and if one of those dies, he gets replaced.

So please fight terrorism in a way which works: making sure that terrorists have no support in the general population. This naturally means that you must not be openly hostile to that population.

Ask first “Why do they hate us?”, and then try to change that.

# Last.fm royalties, question about free music

Written at: http://musicmanager.last.fm/contact/

Hi,

I licensed all my works under free and open licenses which permit any kind of commercial copying and reuse, but which don't permit taking away rights from the listeners.

I'd like to upload the files to last.fm, but I can only do so, if I can be sure, that no additional restrictions will be placed on the users (no DRM). Else I would violate the license agreement.

These are the terms under which I work together with other artists, so there's no way around that.

I can upload the files, but I need to know that all users will retain the following rights to my files:

• Free use for any purpose (any way they retrieve it. Paying for getting is OK)
• Free modification
• Free passing on or selling while giving other users the same rights.
• Free passing on or selling of modified works while giving other users the same rights.

Are these rights safe with you?

Best wishes,
Arne

# LimeWire Interview - badmouthing their own technology

Comment to the LimeWire-Interview on Slyck.

Their words, my comments (from three years of reading and discussing on the Gnutella Development Forum (GDF)):

"Gnutella has had a 2 GB file size limit, while BitTorrent excels at delivering truly enormous files."

-> That's just blabber, but it now explains why LW wasn't quick to close the 2GB limit, even though the way to do it has been known for more than two years (and was posted to the Gnutella Development Forum where Gnutella developers discuss).

There is no underlying technological hurdle for sharing files with a size of more than 2GB, except for the one which LimeWire doesn't want to fix so that they can use it as an excuse to include BitTorrent.

Gnutella also already does completely decentral swarming, and has done so for more than two years.

The only real advantage of BitTorrent is that it has torrent sites where users meet and comment, but you can do the same for Gnutella (for example like http://freebase.be ).

And that the other p2p-clients don't have it.

"A Gnutella program connects to peers randomly, and broadcasts searches into its neighborhood. It can't find a file outside this neighborhood. Enter the Mojito DHT, a revolutionary new technology we've developed for LimeWire. In a distributed hash table like Mojito, the peers don't connect randomly--they organize themselves into a navigable tree. Imagine one computer has the only copy of a rare file, and another on the far side of the network wants it. With Mojito, they'll be able to find each other."

-> Except that this neighborhood is about 400,000 computers, and there have been plans for years to extend it to 1 million while reducing network traffic.

The only thing which hindered that is that LimeWire didn't manage to get their program to keep 100 connections without too much impact on performance.

And with the performance of Gnutella (traffic of only 7 kB/s up and down for a fully connected ultrapeer, less than 1 kB/s for a leaf), increasing the network size wouldn't have created many problems.
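To put those numbers in perspective, here is a quick back-of-the-envelope conversion. This is just arithmetic on the per-second rates quoted above, nothing measured by me:

```python
# Convert the steady per-second rates quoted above into daily volume.
ULTRAPEER_KBPS = 7  # kB/s each direction for a fully connected ultrapeer
LEAF_KBPS = 1       # upper bound quoted for a leaf

def mb_per_day(kbps: float) -> float:
    """MB transferred per day at a constant kB/s rate."""
    return kbps * 60 * 60 * 24 / 1000

print(f"ultrapeer: ~{mb_per_day(ULTRAPEER_KBPS):.0f} MB/day each way")
print(f"leaf:      <{mb_per_day(LEAF_KBPS):.0f} MB/day each way")
```

Roughly 600 MB per day of overhead for an ultrapeer - noticeable, but small next to what the swarming downloads themselves move.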

-> A bit deeper: http://draketo.de/english/p2p/light/why-gnutella-scales-quite-well - if you like it, please digg it...

Still, Mojito will be a great complement to Gnutella, because it can be used to search for files and hosts _by hash_. If you want _exactly that file_, you use Mojito (an implementation of Kademlia) and a hash string. If you want to search by keyword or tag, you use Gnutella.
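The difference between the two lookup styles can be sketched in a few lines of Python. This only illustrates the XOR distance metric which Kademlia-style DHTs such as Mojito are built on; the function and variable names are my own invention, not Mojito's API:

```python
import hashlib

def dht_key(name: str) -> int:
    """Derive a 160-bit identifier from a string, the way Kademlia-style
    DHTs assign IDs to both nodes and content (here via SHA-1)."""
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

def xor_distance(a: int, b: int) -> int:
    """Kademlia's notion of distance between two IDs: their XOR."""
    return a ^ b

def closest_nodes(key: int, nodes: list[int], k: int = 3) -> list[int]:
    """Pick the k known nodes closest to a key. Repeating this query
    against ever-closer nodes is what lets two peers on opposite sides
    of the network find each other without broadcasting."""
    return sorted(nodes, key=lambda n: xor_distance(n, key))[:k]

# A keyword search can't be expressed as a single key like this, which
# is why Gnutella-style search stays useful alongside the DHT.
```

With `dht_key(...)` applied to a file hash you get the key to route towards; the nodes whose IDs are XOR-closest to that key are the ones responsible for knowing who shares the file.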

And it will help LimeWire, because other Gnutella clients won't have it at once, so LimeWire will be out in front. Gnutella is an open protocol, so they need to look like they are in the front row at all times, or some other Gnutella client will take over their users.

I don't know why they badmouth their own technology, but as you've seen, I have some suspicions.

# Never trust a company

I wouldn't begin bashing LW. They are a company, and you don't trust companies. Not because they are evil, but because they have to think of money first and foremost; otherwise they go under, and others who do come out in front. At least as long as people still buy the cheapest products, regardless of ethics.

I hold them in very high esteem for GPL-ling LimeWire and for standing up against the lawsuit.

They are a company, and that makes them non-trustworthy, but because they are a company, they can fight a battle which none of us others could fight.

Never trust a company, but don't judge them down for thinking money before morals or ethics from time to time, as long as they don't do it all the time.

And never ever deal with... - a Shadowrun saying :)

# On Forums and trolls

written in the Phex Forum.

"Let them walk against a hill of politeness, and then let them slide off. Have a ban-request as forcepunch somewhere near, if they try to break the hill despite explicitly having been warned."

I try to avoid giving them a chance to justify growing angry. If they shout despite having no justification, and if they don't stop after being asked to disable their capslock (always assume the best), I warn them that they'll be banned if they go on (I never had to - and there was just one case where I decided to ignore a provocation instead - see our Polar Skulk forum), and I only request a ban if they don't stop.

Every post in any forum in here (not just Phex) will be read by other people, and if the tone of the posts grows too angry, angry people and trolls will flock here, because they see that provocation makes someone angry in here.

And I know that trolls come anyway, but a hill of calmness seems to me like the best way to reduce the number of those who actually post.

And my mood is much better when I read my own calm posts than when I read a post where I let my temper flare up.
- Arne Babenhauserheide

One Guide to rule them all,
One Guide to find them,
One Guide to reach them all
and into calmness bind them.

# On keeping emotions in check

-> An answer to a distro battle at linuxhumor

Please keep your own language in check, and don't pull Stallman in here when he isn't needed. He's got more important things to do than helping your argumentation.

If you look at what I wrote, you'll see I never said "Your distro is bad" or anything similar.

Why do you answer other people who say things intended to offend you?

Or to put it differently: Why didn't you react to my post with backup information on why Ultumix is good and where it helps to convert people, cutting out the advertising language so it can be read as information?

I see that you're pissed off by Ubuntu. I don't like the "one distro to find them, one distro to ..." mindset you told about either, but I doubt that all Ubuntu people have it. I'm not active in Ubuntu, so I don't know much about the internals (I don't even know what LoCo means - I assume Local Coordinators or so). I just installed it for my wife, because my Gentoo might be a bit too much for her (this changed in the meantime: she now has a Gentoo, too :) ).

Get a grip on your emotions - get a sandbag and hit it when you're just a bit too pissed off (I do that from time to time, and unless you have tested it yourself, it might be hard for you to see how very good it feels to just let the anger out at that 15 kg sandbag). But stop before your knuckles bleed :)

We are writing here, so it is possible to just sit back and take a break, which makes it easier to get a grip on oneself. At the same time we're only reading what others say, which makes it easier to misinterpret them, so keeping our own emotions in check (or letting them out where they hurt nothing but your knuckles) gets more important.

And I know I sound like a pseudo-wise great grandfather now. That isn't intentional. I'm learning my way in life myself, and I might just be wrong about it (and also about anything else I think I know), but I write it anyway, because I made some errors myself, and I want to help others not to walk into the same trap. And if what I see right now is only a necessary transition to even better ways to live, then I can at least help others reach that transition with less hardship than I had.

# Open Letter to Julia Hilden on her article about pay-per-use

I think there are two serious flaws in per use payments:

## (a) Good works of art need to last

As you stated correctly, I define myself partly through the media I "consume".

This means that I want the assurance that I can watch a great movie again a few years in the future.

Imagine this scenario:

• I found a really great book, read it and got entranced.
• It's 20 years later now, and I want to read the book to my children.
• Suddenly I realize that I'd have to pay for it again to be able to read it - but it's no longer available, because the company I bought it from on a per-use basis died 10 years ago, and no one took over, because managing the book became too costly to be paid for by the few people who still wanted to read it in a given year.

## (b) Technical realization

For per-use payment, someone must monitor how often I use a work of art, and that means someone must have data on my behaviour, which isn't in the least compatible with personal data protection.

Also, to enable per-use payments, you need DRM: Digital Rights Management, which should be spelled "Digital Restrictions Management" to account for its effect on end users, because it keeps me from looking a second time at a file which I already have on my computer.

Without DRM you can't control my use of a document I downloaded to my computer, because it is on my territory which only I control.

With DRM, control of my computer switches to the manufacturer of the DRM, who restricts my usage and only allows me certain actions.

Naturally the DRM master is then able to monitor and control my use of digital works, but the price for this is giving my personal domain into the hands of someone who isn't necessarily trustworthy (or would you trust Microsoft with your new anti-Microsoft book, just to name an example?).

There's a quite nice read on the web about the dangers of going through with your proposal. Its scope is even smaller than yours - it's only about keeping people from passing on books (for which you also need DRM) - but it shows what will likely happen when the technology for realizing your idea is deployed:

And there is another one. Since your scheme needs DRM to enforce per-use payment, this one might also be interesting to you: Can You Trust Your Computer?

(And please keep in mind that even today a physics book can cost up to 150€, even though it costs far less than that to produce and students don't have much money - so pay-per-read wouldn't magically lower prices.)

So, while pay per use sounds nice and fair from a distance, it grows into a maze of trouble when you take a closer look.

Best wishes,
Arne Babenhauserheide

# Powers that be - money concentration vs. democracy

-> written in reply to Bogus Copyright Claim Silences Yet Another Larry Lessig YouTube Presentation on techdirt.

This shows painfully how the powers are currently distributed.

<5% of the people have >90% of the resources, so they have more influence on the media which then influences which people are elected into positions of power, and then these elected pass laws which shift more power towards the <5%.

So the simple root of the problem is that money gets concentrated in the hands of a few people, and any self-respecting (intelligent) democracy would have to make sure that money can't accumulate like that.

But guess who doesn't want laws which prevent excessive money concentration…

# Free Software, not Open Source

→ a comment to 10 Hackers Who Made History by Gizmodo.

As DDevine says, Richard Stallman is no proponent of Open Source, but of Free Software. Open Source was forked from the Free Software movement to the great displeasure of Stallman.

He really does not like the term Open Source, because that implies that it is only about being able to read the sources.

Different from that, Free Software is about the freedom to be in control of the programs one uses, and to change them.

More exactly it defines 4 Freedoms:

• (0) The freedom to run the program in any way you want (compare this with Windows, which does not let me start it in a virtual machine, because “the hardware changed”).

• (1) The freedom to access the source and change the program (compare this to Starcraft 2 which I can’t use in a LAN-party without having everyone connected to the internet).

• (2) The freedom to copy it and give it to others (compare that to all these iApps, which I can’t even backup easily for my own use).

• (3) The freedom to distribute my changed versions.

This is Free Software as defined by the free software movement which was initiated by Richard Stallman and which made successes like Google possible by giving them a stepping stone to build upon: Free Software users stand on the shoulders of giants.

Open Source, on the other hand, is often used as a name for products which don’t even fulfill freedom (1) completely. That’s why the GNU project did not take part in the first Google Summer of Code: Google required contributors to say that they work on Open Source. In the second Summer of Code that was changed, so projects can now correctly identify themselves as Free Software projects, and GNU has been taking part in the Google Summer of Code since then.

PS: But still it’s great to see Stallman in this list!

# Swarming, Torrent and Gnutella

Hi,

I just wanted to add that swarming has been included in Gnutella since 2003 or so, and that it already achieved everything back then that the "new trackerless torrents" achieve today.

If you want easy to read information which doesn't need a coder to understand it, just have a look at Gnutella For Users: A guide to the changes in Gnutella for non-programmers.

http://gnufu.net

# The Four Freedoms of Free Culture: Avoid Cultural Slavery

→ comment to The Four Freedoms of Free Culture on QuestionCopyright.org.

Thank you for spreading the thought of freedom in culture!

I currently don’t use Creative Commons licenses on my site, because they have no source protection (you can’t exercise your right to modify a work if it is hidden inside some non-source container, like auto-scrolling Flash).

Instead I use the GPLv3, for my site (draketo.delicensing) as well as for a free roleplaying book I write (1w6.org — german).

My reason for using free licenses in all my hobby work is simple: When a cultural work becomes part of my life, any restriction on using that work takes away a part of my personal freedom.

That’s why freedom is essential for all cultural works that matter.

Becoming part of my life means that I identify with it, that it means something to me. If there’s a really cool song I listen to all day, then it becomes part of my life.

If I then can’t change and share it when my tastes change, that part of my life is locked and my freedom taken away. Works which don’t mean something to me can’t take much of my freedom away. But if a cultural work means something to someone out there — to anyone — then it has to be free, to avoid stealing that one fan’s freedom.

So any unfree cultural work is either useless (doesn’t mean anything to anyone) or it’s a tool for cultural slavery (stealing our freedom).[1]

And I think Stallman is simply afraid. In Software he has the confidence that his work will be improved by others. In culture he doesn’t. I think that’s part of his life, and the only way to change that is to show that free culture is a success for political movements, too.

It’s hard to allow your child to spread its wings and fly on its own, and I think that for him, his manifests which spawned the free software movement are his children.

[1] At the same time, though, a cultural work which doesn’t get written doesn’t have the potential to help people progress. So if an unfree work helps people throw off other shackles, then the net gain for freedom might be positive. Just always keep in mind that being unfree has a cost for every user of the work – which includes all your fans. If your work is unfree, it is worth less than if it were freely licensed.

# The internet means unlimited copying. What we make of it depends on us

Comment to is the web too good for us on a BBC blog:

But the web was not really free in the beginning. While its structure was open for everyone and websites bloomed and blossomed by copying code and design from others, the content of sites stayed closed by copyright.

There were many thoughts of freedom in the original web, but the structure gave more freedom than the law, and the easy copying inside the new medium still didn't reach the slow legal body of our offline communities.

Online, though, laws were first ignored, then bent and finally used to create new rules within the laws themselves.

Thus came free software, a quarter of a century ago - even before the web existed, but already built on its basic property of cheap infinite copying - when coders realized that traditional copyright didn't fit their way of cooperating and curtailed their creative work. It spread and became the foundation of today's internet infrastructure, with Apache webservers on GNU/Linux computers serving its content - unbeknownst to most of its users.

And from the same spring came Creative Commons, about 20 years later, used by artists who realize that the traditional rules do more harm than good to them.

The new digital world began before the internet was started by making the copy an integral part of even looking at data, but it grew with the internet which pushed the effects of this new technology right into the face of our societies. And so the digital world which currently finds its most well known expression in the internet is an ownership breaker by design, and many battles were fought over this most beloved and most hated feature.

You can no longer control what people do with things you put on the internet, as long as you allow them to see them. Once they have seen them, even for a moment, they could have a copy. You can only use social rules to keep them from passing on their copies - or take over their computers.

Even while I write this comment, I don't do it on your website. I write it in a local copy of your website which is stored by my browser, and I could go on writing it long after your website disappeared, as long as my computer kept the copy.

The only way around this is to go back to the analog age, where showing doesn't equal handing out a copy, or to allow some entity complete control over our computers to enforce certain rules - and over our lives which more and more move towards the digital space.

To come back to the question: The web is not too good for us. It provides more openness than many people want to provide, and far more than the law offers, but this openness gave rise to movements which shaped the openness into freedom by establishing the rule that whatever is freed must never be shackled again. They took the single inherent freedom of copying and added the freedoms of changing and using. From that source came free software, which drives the internet, and the Wikipedia, which provides the world's largest publicly accessible knowledge base. Creative Commons walks a similar path by always allowing the copying of creative works, but it allows for much more control by the creator.

The internet globally removes the restriction on copying which is inherent in our analog world. Our societies and legal systems, though, will take time to adapt. If we're lucky, they'll accept the internet as freedom and adapt, as free software and the Wikipedia did. If we're unlucky, they'll try to limit the openness, either through technology or through laws. They could turn that openness from an openness for people into an openness of people, because copying doesn't only go one direction. They can just as well copy a record of every move we make and use this to create an almost perfect surveillance system, with all its implications for freedom.

And they wouldn't necessarily need to establish the punishment-based rules we currently have as laws. They could just as well use digital shackles, which don't just disallow some action but make it impossible. The rules could be like a car which makes it impossible for me to drive faster than the law allows while my child bleeds to death on the backseat.

So the web is neither good nor bad. It's simply a world which operates on slightly different rules than the physical world, and we're still only learning the implications, promises and dangers of that tiny change of rules.

# using drupal for documenting software -> blogging with a structure

-> an answer to Blog posts are no replacement for documentation by flameeyes.

Hi flameeyes,

I kinda know your problem: It's far easier to write a number of blog posts than to write a structured book up front - and I think two major reasons are that a weblog provides many more "Yes, I've done it!" moments than a book, and that a blog has a much lower barrier to entry.

I rather know it from the other side, though: I wrote a (german) roleplaying ruleset in a wiki, and I got very little feedback and often slacked.

My solution to that was to switch to Drupal which provides a book-style structure with (automatic) blog-style news. I now write articles which can stand for themselves but which are automatically organized by section and keyword.

I also do that for my personal page, but I think the RPG is a much better example (my personal pages are organized by content type/topic (song, poem, story, technical article, ...), while the RPG articles are more connected):

On the right-hand side you see the book navigation. The equivalent on my main page about programs would for example be:

A similar structure should be useful for your documentation of programs. You can even write an uncategorized blog post first and sort it into its place later (and also move it around freely afterwards) - for example when you realize that you are writing more about the topic.

That way you can start by writing something about a new program, and give that program its own category when you see that you're writing about it more often.

Another advantage of this is that I began to check every single text for whether it's interesting to read. Category pages with only a few lines of text can easily be set to not appear on the frontpage - it's simply one checkbox to untick :) - and once they grow into articles in their own right, they can be "republished" to the frontpage with an updated publish date, so they appear as new posts.

# When you're happy with a free project, write a thank you!

From the Gentoo Forums:

I agree that spreading a positive message is good, but I've always been nervous to send thank you notes out to people I've never met. Worse, I don't want to potentially overload an inbox with a message that isn't going to help all that much. Hopefully it would be …

I try to remember to send "thank you"s from time to time.

Just remember that all these people are doing this in their free time, and one of the pillars of motivation is feedback and knowing that what you do is important.

For example, I recently (two months ago) sent a mail to the developer of TortoiseHG in which I wrote him that to me his program is a revolution for version control systems, because it allows version control even for users who don't know much about their system (and I added an example where I managed to use his program to work in a DVCS together with a mostly computer-illiterate Windows user - and get going in just 15 minutes).

I could almost feel the happy beaming in his reply where he said even this alone would make it worth all the effort he spent on it.

And I remember my own almost unbelieving joy at having people tell me that the pen-and-paper roleplaying system I write is the best system for their one-shots. It brightens up the whole day and makes me smile much and easily :)

Naturally, contributing often feels even better (people who join in are one of the highest compliments to the project), but when that isn't possible (we all have limited time budgets), a friendly mail - or better still: a friendly public post which will also lead others to the program - is a great way to help your favorite project!

And if it already gets very much positive feedback, you could look at all the other projects you enjoy and see if one of them could get a bit more feedback. We live through diversity, and every little program adds its share.

Especially for people who get little feedback, such a message helps very much. If nothing else, it helps the developer see that his work has an important impact. And if the feedback is unexpected, that's even better. People who get tons of feedback might be used to it, but people who get very little feedback can really flourish - or at least enjoy a happy smile for a few hours, think fondly of what they accomplished, and look forward to doing more.

# Why EMI locks channels: It’s a battle about control

To Why I Steal Movies… Even Ones I'm In by Peter Serafinowicz.

I think there’s a very simple reason why EMI remotely encumbers a channel: It’s a battle about control. The battle about who will control where, when and how people can enjoy works of art.

That battle goes against the fans (who want to enjoy stuff and pay for it on their own terms) and the artists (who want people to enjoy their stuff and pay for it).

It benefits those who want to pull money out of the revenue stream (which goes from fans to artists) even though the almost free distribution via the internet makes them mostly obsolete.

And online piracy isn’t theft. It’s unauthorized copying which, as Peter Serafinowicz very nicely explains, can even help the artists make more money. The fans get more, the artists get more, only one loses: The one who wants to take freedom from fans and artists alike.

# writing together – collaborative editing is easy

→ comment to The next wave in scholarly word processors?

What I’d like to see is more people using version tracking systems.

With these you have a discussion which can be merged easily when it gets branched. I use them for everything I do, and I could use them together with a Windows-and-GUI-only user with ease, installing TortoiseHG for both of us and LyX for him (LaTeX made easy – you don’t have to see the sources).

Just right-click in a folder, call synchronize and pull, and your work gets merged.
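Under the hood, that "pull and merge" is a three-way merge against the last common version. Here is a toy Python sketch of the idea; real tools like Mercurial first align the lines with a diff algorithm, while this simplified version assumes the three versions still line up one-to-one, and all names in it are mine:

```python
def merge3(base: list[str], ours: list[str], theirs: list[str]) -> list[str]:
    """Toy three-way merge over aligned line lists - a sketch of what a
    DVCS does automatically when two people edited the same document."""
    merged = []
    for b, o, t in zip(base, ours, theirs):
        if o == t:            # both sides agree (or neither changed it)
            merged.append(o)
        elif o == b:          # only they changed this line
            merged.append(t)
        elif t == b:          # only we changed this line
            merged.append(o)
        else:                 # both changed the same line: conflict
            merged.append(f"<<< {o} ||| {t} >>>")
    return merged
```

`merge3(base, yours, theirs)` keeps both sides' independent edits and only reports a conflict where both touched the same line - which is why two authors can usually write in parallel without stepping on each other.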

For publishing to the web and to PDF I’d use Markdown with Markdown to LaTeX:

Maybe with markdownify for pages which already are HTML:

Besides: A simple Mercurial repository with URLs as document identifiers would allow forking the web :)

# For religious spammers: Shut up and help save our *planet*

-> the_gdf just got spam from a raving christian. Since I am a moderator there, I got that spam and rejected it. But because I was in a good mood, I felt compelled to answer :)

- insert random ravin' lunatic the-world-is-going-to-end talk -

*gg*

Have fun!

As for me, I'd rather go with the 6th world of the Inkas - they were there earlier than your book.

The alternative is to just believe in science: Ecologists told us 30 years ago,

"We're destroying our environment. If we keep doing this, 30 years from now the earth will warm and we'll have weather catastrophes, epidemics (through warmer climate) and much more."

Well, now it's 30 years later, and we have weather catastrophes, epidemics and much more.

Also, more than 30 years ago, left-wing economists warned us: "If we keep distributing money unevenly and letting big companies run free, our economy will crash again."

Well, it's more than 30 years later, and guess what? They were right. We now have a worldwide economic crisis.

Oh, and more than 14 years ago, people tried to tell the American government: "If you keep building up terrorists to fight against Russia, they will turn around at some point and attack you". Then, 7 years ago, people (including me) said: "If you attack Afghanistan, you will help the terrorists find new cannon fodder and that way strengthen terrorism".

And now it's 7 years later, and the Taliban and "international terrorists" are stronger than ever.

So get up from your book and look at the world. If we don't act, we let people turn our world into our own hell, so don't waste your time but act to save our world!

I don't care what your god says will happen after our death, but if he really created this world he will be damn pissed at YOU for letting it get destroyed - and I assume that you do care about that.

And here's a little hint: Every creator god is likely to see it the same way, so even if you are wrong and some other religion which believes in a creator god is right, the only way to be on the safe side is to help keep our planet alive.

And if there is no god, then my children will thank me for helping to save their future.

That said: Don't ever spam the_gdf again or my lawyer will be happy to get a chance to sue you. You've been warned.

Which also means: No one but me read your mail, and no one will, since it isn't going to get through.

I hope you enjoyed my answer :) I got slightly angry at the end, which I take as a warning to ignore these kinds of emails completely from now on.