Bitcoin transactions in timelines

Investigating bad actors might involve Bitcoin: the blockchain technology is very popular among criminals, as it is easy to use and "untraceable" [1]. For example, in ransomware cases like "Ryuk" [2], the company CrowdStrike has listed several Bitcoin wallets that they attribute to the threat actor.

How can that information help your investigation or your intelligence gathering? For one, you could monitor your own wallets for transactions to these wallets. Another aspect, which this blog post will cover, is the timeline aspect of it.

As Bitcoin transactions make use of the blockchain, which is public by design, it is possible to:

  • tell how many bitcoins a certain wallet currently holds
  • see past transactions

The second aspect is what I want to focus on: if we look at the transactions, we might be able to identify the point in time a certain group was active, and enrich our other DFIR activities with that information. The transaction log is like the journal of your bank account – it basically tells who is transferring money to a wallet and where the bitcoins are transferred to.

For this example, the Bitcoin wallets we are interested in are (source: CrowdStrike blog post):

BTC Address | Total Received | No. Received | Total Value (USD)

Source of transaction information

There are plenty of public web pages that provide the transaction history for a given wallet, but since this should be an automated step, the goal is a page with an API. After some searching I found:

Making the call

Doing the API call to get transaction information is pretty simple:

GET /api/v2/address/{NETWORK}/{ADDRESS} 

That will give you the following information:

  "status": "success",
  "data": {
    "network": "DOGE",
    "address": "DM7Yo7YqPtgMsGgphX9RAZFXFhu6Kd6JTT",
    "balance": "31.03885339",
    "received_value": "25828731.93733507",
    "pending_value": "0.0",
    "total_txs": 225,
    "txs": [ ... ]

This is exactly what we need. With some Python JSON parsing it is easy to get the info we want – the code I am using is available on

After that we have a CSV with the date the transaction happened, the raw information from the API and some metadata – enough to bake into a timeline.
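The parsing step described above can be sketched as follows. This is a minimal sketch, not the actual script: the outer response shape matches the sample shown above, but the per-transaction field names (such as "time") are assumptions and would need to be adjusted to what the API actually returns.

```python
import json

def txs_to_rows(api_response):
    """Flatten the block-explorer response into timeline-ready rows.

    The per-transaction key "time" is an assumption; adjust it to the
    real field name the API returns.
    """
    data = api_response["data"]
    rows = []
    for tx in data["txs"]:
        rows.append({
            "datetime": tx.get("time", ""),                      # transaction timestamp
            "timestamp_desc": "BTC transaction for %s" % data["address"],
            "message": json.dumps(tx),                           # raw API info as metadata
        })
    return rows
```

Each row maps one transaction to the columns a timeline tool expects; writing the list of dicts out with `csv.DictWriter` then yields the CSV.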


The script already outputs CSV files ready for importing into Timesketch, which I found to be the ideal tool for working with timestamped data points. Importing the CSV is straightforward and explained on the official documentation page [3].

The timeline csv looks like the following:

CSV of BTC history

Making it pretty

Importing it into Timesketch, the timeline looks very nice:

BTC transactions in Timesketch

Added Value

Now, what is the added value for investigations? The above is another layer of data points / evidence. It can be used to weigh or limit findings in your organisation: e.g. if you assume you were hit by a phishing campaign, but your phishing campaign was seen a lot earlier or a lot later than the transactions above display, it is unlikely you were hit by the same campaign. It can also be used to build a case against individuals if enriched with host forensics – your imagination is the limit.


I hope the article is helpful and the scripts are useful. Let me know via blog comments, GitHub issues or Twitter messages if you have any questions or improvements.

Thanks for reading.

Further reading / references

  • [1]
  • [2]
  • [3]

Autotimeliner to CyberChef to Timesketch

As you might know, I love to combine several open-source tools to get things done. One thing I have wanted to play with for some weeks is AutoTimeliner by Andrea Fortuna. This tool is made to extract events from a memory image and combine them into a timeline. If you have a timeline, what comes next? Of course, putting it into Timesketch. So let's give it a try.

We start with a memory dump from a Stuxnet infection. Download the four files, extract them and you are good to go.



Installation is pretty easy: install Volatility either via a pre-compiled binary or manually; see the Volatility installation wiki for further information.

Test it by running: -v


To install sleuthkit, run on macOS:

brew install sleuthkit

or on Debian/Ubuntu:

sudo apt-get install sleuthkit

Installing AutoTimeliner

Simply clone the GitHub repository:

git clone

Run it

python -f /Users/foobar/Downloads/ -p WinXPSP2x86 -t 2009-10-20..2018-10-21

That might take some time depending on your hardware.

Now you have a CSV file of around 5.6 MB.

                _     _______ _                _ _
     /\        | |   |__   __(_)              | (_)
    /  \  _   _| |_ ___ | |   _ _ __ ___   ___| |_ _ __   ___ _ __
   / /\ \| | | | __/ _ \| |  | | '_ ` _ \ / _ \ | | '_ \ / _ \ '__|
  / ____ \ |_| | || (_) | |  | | | | | | |  __/ | | | | |  __/ |
 /_/    \_\__,_|\__\___/|_|  |_|_| |_| |_|\___|_|_|_| |_|\___|_|

- Automagically extract forensic timeline from volatile memory dump -

Andrea Fortuna - -

*** Processing image /Users/foobar/Downloads/
*** Using custom profile: WinXPSP2x86
*** Creating memory timeline......done!
*** Creating shellbags timeline......done!
*** Creating $MFT timeline......done!
*** Merging and filtering timelines......done!
Timeline saved in /Users/foobar/Downloads/

The format used for the dates is not compatible with Timesketch:

more /Users/foobar/Downloads/
Date,Size,Type,Mode,UID,GID,Meta,File Name
Tue Oct 20 2009 12:08:04,0,ma.b,---a-----------,0,0,84995,"[MFT STD_INFO] Python26\Lib\SITE-P~1\setuptools-0.6c11-py2.6.egg-info\TOP_LE~1.TXT (Offset: 0x8a28c00)"
Tue Oct 20 2009 12:08:04,0,ma.b,---a-----------,0,0,85000,"[MFT STD_INFO] Python26\Lib\SITE-P~1\SETUPT~1.EGG\DEPEND~1.TXT (Offset: 0x75e4000)"
Tue Oct 20 2009 12:08:06,0,m..b,---a-----------,0,0,84985,"[MFT STD_INFO] Python26\Scripts\EASY_I~1.PY (Offset: 0x91b9400)"
Tue Oct 20 2009 12:08:06,0,ma.b,---a-----------,0,0,84986,"[MFT STD_INFO] Python26\Scripts\EASY_I~1.MAN (Offset: 0x91b9800)"
Tue Oct 20 2009 12:08:06,0,ma.b,---a-----------,0,0,84987,"[MFT STD_INFO] Python26\Scripts\EASY_I~1.EXE (Offset: 0x91b9c00)"
Tue Oct 20 2009 12:08:06,0,ma.b,---a-----------,0,0,84988,"[MFT STD_INFO] Python26\Scripts\EASY_I~2.MAN (Offset: 0x1042f000)"
Tue Oct 20 2009 12:08:06,0,m..b,---a-----------,0,0,84989,"[MFT STD_INFO] Python26\Scripts\EASY_I~2.PY (Offset: 0x1042f400)"
Tue Oct 20 2009 12:08:06,0,ma.b,---a-----------,0,0,84990,"[MFT STD_INFO] Python26\Scripts\EASY_I~2.EXE (Offset: 0x1042f800)"
Tue Oct 20 2009 21:21:26,0,...b,---a-----------,0,0,66083,"[MFT STD_INFO] Documents and Settings\Administrator\Desktop\SysinternalsSuite\ZoomIt.exe (Offset: 0x1a8a5c00)"
Wed Oct 21 2009 00:02:28,76800,m...,---a-----------,0,0,65342,"[MFT FILE_NAME] Program Files\NTCore\Explorer Suite\Tools\DRIVER~1.EXE (Offset: 0x14b9c800)"
Wed Oct 21 2009 00:02:28,76800,m...,---a-----------,0,0,65342,"[MFT FILE_NAME] Program Files\NTCore\Explorer Suite\Tools\DriverList.exe (Offset: 0x14b9c800)"
Wed Oct 21 2009 00:02:28,76800,m...,---a-----------,0,0,65342,"[MFT STD_INFO] Program Files\NTCore\Explorer Suite\Tools\DRIVER~1.EXE (Offset: 0x14b9c800)"
Wed Oct 21 2009 18:25:52,780800,m...,---a-----------,0,0,65338,"[MFT FILE_NAME] Program Files\NTCore\Explorer Suite\TASKEX~1.EXE (Offset: 0x14b1b800)"

So we need to adjust that. In the past I used a self-developed Python script for that, but that does not really scale, so I considered another option.


CyberChef is an open-source tool by GCHQ:

A simple, intuitive web app for analysing and decoding data without having to deal with complex tools or programming languages. CyberChef encourages both technical and non-technical people to explore data formats, encryption and compression.


git clone

Now open it

From the CSV that was generated, use your favourite tool to extract the first column, which should look like this:

Tue Oct 20 2009 12:08:04
Tue Oct 20 2009 12:08:04
Tue Oct 20 2009 12:08:06
Tue Oct 20 2009 12:08:06
Tue Oct 20 2009 12:08:06
Tue Oct 20 2009 12:08:06
Tue Oct 20 2009 12:08:06
Tue Oct 20 2009 12:08:06
Tue Oct 20 2009 21:21:26
Wed Oct 21 2009 00:02:28

Now use the following CyberChef recipe:


Paste all the dates into the input. The result is an output file that you can download.

Now the output txt has two CSV columns; you need to combine them with your AutoTimeliner CSV so that you end up with the following headers:

datetime	timestamp	timestamp_desc
2009-10-20T12:08:04+0000	1256040484000	stuxnet.vmem_Mem_Dump_Timeline
2009-10-20T12:08:04+0000	1256040484000	stuxnet.vmem_Mem_Dump_Timeline
2009-10-20T12:08:06+0000	1256040486000	stuxnet.vmem_Mem_Dump_Timeline
2009-10-20T12:08:06+0000	1256040486000	stuxnet.vmem_Mem_Dump_Timeline
2009-10-20T12:08:06+0000	1256040486000	stuxnet.vmem_Mem_Dump_Timeline
2009-10-20T12:08:06+0000	1256040486000	stuxnet.vmem_Mem_Dump_Timeline
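The same conversion the CyberChef recipe performs can be sketched in Python (assuming the fls-style dates are UTC; the function name and the default description are mine):

```python
from datetime import datetime, timezone

def convert(mft_date, desc="stuxnet.vmem_Mem_Dump_Timeline"):
    """Convert an fls-style date like "Tue Oct 20 2009 12:08:04" (assumed UTC)
    into the ISO datetime and epoch-millisecond timestamp shown above."""
    dt = datetime.strptime(mft_date, "%a %b %d %Y %H:%M:%S")
    dt = dt.replace(tzinfo=timezone.utc)
    iso = dt.strftime("%Y-%m-%dT%H:%M:%S+0000")
    millis = int(dt.timestamp()) * 1000
    return iso, millis, desc
```

For example, `convert("Tue Oct 20 2009 12:08:04")` yields `("2009-10-20T12:08:04+0000", 1256040484000, "stuxnet.vmem_Mem_Dump_Timeline")`, matching the first row of the table.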

Now the CSV should look like:


2009-10-20T12:08:04+0000,1256040484000,stuxnet.vmem_Mem_Dump_Timeline,Tue Oct 20 2009 12:08:04,0,ma.b,---a-----------,0,0,84995,[MFT STD_INFO] Python26\Lib\SITE-P~1\setuptools-0.6c11-py2.6.egg-info\TOP_LE~1.TXT (Offset: 0x8a28c00)
2009-10-20T12:08:04+0000,1256040484000,stuxnet.vmem_Mem_Dump_Timeline,Tue Oct 20 2009 12:08:04,0,ma.b,---a-----------,0,0,85000,[MFT STD_INFO] Python26\Lib\SITE-P~1\SETUPT~1.EGG\DEPEND~1.TXT (Offset: 0x75e4000)
2009-10-20T12:08:06+0000,1256040486000,stuxnet.vmem_Mem_Dump_Timeline,Tue Oct 20 2009 12:08:06,0,m..b,---a-----------,0,0,84985,[MFT STD_INFO] Python26\Scripts\EASY_I~1.PY (Offset: 0x91b9400)

There is one little caveat: you need to add quotation marks around the message field, because some values might otherwise break the import process.
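Rather than adding the quotation marks by hand, the merging and quoting can be sketched with Python's csv module. This is a sketch under the assumption that the column layout matches the sample rows above; `csv.QUOTE_ALL` wraps every field in quotation marks so commas inside the message cannot break the import.

```python
import csv

def merge_rows(converted, original_rows, out_path):
    """Prepend datetime/timestamp/timestamp_desc to each AutoTimeliner row
    and write a Timesketch-ready CSV with every field quoted."""
    with open(out_path, "w", newline="") as fh:
        writer = csv.writer(fh, quoting=csv.QUOTE_ALL)
        writer.writerow(["datetime", "timestamp", "timestamp_desc",
                         "Date", "Size", "Type", "Mode", "UID", "GID",
                         "Meta", "message"])
        for (iso, millis, desc), row in zip(converted, original_rows):
            writer.writerow([iso, millis, desc] + row)
```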

That can now be imported into Timesketch.

Et voilà, a timesketched memory dump.

Combining Virustotal, PassiveSSL and Timesketch


Having played with Timesketch for a while and worked on some OSINT timelines, I was tired of investigating MD5 hashes, domains and IPs all manually, so I tried to automate some of the work. Why is that important? If you have a list of hashes, domains and IPs, you can of course check your SIEM, EDR solution etc. – but what if you have a hit? Would it benefit your investigation to at least have an idea of the timeframe something was used by attackers or seen in the wild?

Most shared indicators lack the timeframe, so we need to add those values from external information on our own.


There is no need to further explain VirusTotal; it is basically a huge dataset of malware and of information about domains and IPs.

In particular, knowing the specific point in time a domain was seen pointing to an IP (and vice versa) helps to build your timeline.

E.g. if you have pointing to all the time, and only on one day it was pointing to – hits in your infrastructure should be escalated higher if seen during that day; outside of that time window it might still be important, but not as urgent as during that day.

With regards to hash intelligence, VirusTotal is nice because, if you add the last scan date of a file, you can at least tell that the file was known after that day.

I asked VirusTotal to expose more information they already have via the API, and we will have to wait until it is exposed:

  • First seen in the wild
  • First uploaded to VT
  • PE compile time


Alexandre Dulaunoy and Eireann Leverett gave a talk at the FIRST conference in Berlin back in 2015 which caught my attention, but it took a while until I really had time to implement something that uses the idea.

The basic idea is that passive SSL services such as CIRCL Passive SSL collect certificates from several sources and expose the information via an API.

For timeline analysis in particular, the following dates are important, as they might shed some light on attacker activity:

  • first seen in the wild
  • last seen in the wild
  • not valid before
  • not valid after

If you now add all of the information above, you might get a better idea of when an IP / domain / file was active.

This information should then be fed into a Timesketch investigation.
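Feeding the dates in can be sketched as follows. This is a minimal sketch, not the actual script: it only shows how one dated observation (a VirusTotal last-scan date, a certificate not-valid-before date, a pDNS first-seen date) becomes a Timesketch CSV row; the date format is an assumption.

```python
from datetime import datetime, timezone

def indicator_event(indicator, date_str, source, event):
    """Turn one dated observation about an indicator into a Timesketch row.

    date_str is assumed to be "YYYY-MM-DD HH:MM:SS" in UTC; adjust the
    format string to whatever the enrichment API actually returns.
    """
    dt = datetime.strptime(date_str, "%Y-%m-%d %H:%M:%S")
    dt = dt.replace(tzinfo=timezone.utc)
    return {
        "datetime": dt.strftime("%Y-%m-%dT%H:%M:%S+0000"),
        "timestamp": int(dt.timestamp()) * 1000,     # epoch milliseconds
        "timestamp_desc": "%s %s" % (source, event),
        "message": "%s: %s (%s)" % (indicator, event, source),
    }
```

One such row per indicator and per date field, written out with `csv.DictWriter`, gives a timeline that can be imported next to the host-forensics timelines.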


Using some sample data from APT33:

Combining that with the Python script below and the following indicators:


Domains resolving to IPs

It is transparent when the hashes were last scanned and which IPs resolve to the domains mentioned in the report.

The other observation is that, right before some malware was mentioned by FireEye in the report, the SSL certificates became invalid:

SSL Certificate

Of course the individual SSL certificate can also be investigated:


The example is available on GitHub:

Next steps

  • Waiting for VT to expose more things
  • Improve the script
  • Introduce multiple pDNS providers

Timesketch on a Raspberry Pi 3


Does not work at the moment


Having played with Timesketch for a while, I was wondering whether it is possible to install Timesketch on a Raspberry Pi 3 for some basic analysis – no heavy multi-GB Plaso imports and such.

A Raspberry Pi costs around $40, so it is pretty cheap, can be ordered almost anywhere on the planet, and you might already have some Pis from previous projects like:

I have also written about Timesketch before and maintain the following GitHub repositories:

Basic installation

I used the NOOBS image to install the Raspberry Pi, using a 128 GB micro SD card to have enough storage.


Trying to install Java will cause some issues, because you need to install it manually; follow:

sudo mv /usr/lib/jvm/java-8-openjdk-armhf/jre/lib/arm/client /usr/lib/jvm/java-8-openjdk-armhf/jre/lib/arm/server

Installing Elasticsearch

Follow that article:

Installing Timesketch

Simple: SSH to your Raspberry Pi and follow:

Once Elasticsearch is installed:

vi /etc/elasticsearch/elasticsearch.yml

Add the following:



This one is a bit tricky because it might fail with:

Collecting pycypher==0.5.9
Could not find a version that satisfies the requirement pycypher==0.5.9 (from versions: )
No matching distribution found for pycypher==0.5.9



sudo apt-get install docker-compose

So pycypher kills the possibility of using Timesketch on a Raspberry Pi at the moment:

 Getting page
  Looking up "" in the cache
  Current age based on date: 30
  Freshness lifetime from request max-age: 600
  The response is "fresh", returning cached response
  600 > 30
  Analyzing links from page
  Could not find a version that satisfies the requirement pycypher (from versions: )
Cleaning up...
No matching distribution found for pycypher

Amazon Fire HD as a digital picture frame

The perfect Christmas present would be a digital picture frame that always pulls the latest photos from the cloud, so you can easily share new pictures with your parents or grandparents.


That was exactly my goal: it should be as cheap and user-friendly as possible. My choice fell on the Fire HD tablets from Amazon. They are not only nicely priced, but also sufficiently equipped.


After purchase and delivery, Amazon Fire OS needs to be updated. That takes a few minutes.

Google Play Store

The next step is installing the Google Play Store and some necessary dependencies; I followed this guide:

Once that is done, you can install the app "Fotoo" from the Google Play Store.

App: Fotoo

It is recommended to buy the premium variant as an in-app purchase; it unlocks some cool features and has no session blocker (meaning that after some time you otherwise get a two-minute delay during which no new picture is shown).

Image 1: Fotoo "your session will resume in… Go premium"


It is worth buying the premium variant of Fotoo. If you use the same Google account on multiple Fire tablets, you only have to buy it once.


The settings in Fotoo itself are nicely described in the following article:

The only stumbling block is how to enable developer mode on the Fire HD:

In Android: Settings –> Device Options –> Tap Serial Number Field  7 times

Now developer mode is enabled and you can set the display to "stay awake".


From then on, the picture frame has the following properties:

  • Turns on automatically when the charging cable is connected
  • Pictures rotate automatically
  • Pictures are managed via the cloud

Stumbling block in Google Photos

The only drawback: Google Photos currently does not allow shared albums to be accessed via apps, i.e. you have to copy the albums into your local album, or fall back on a Dropbox or Google Drive folder.

Ready-made alternative

A comparable ready-made picture frame would be, for example, the NIXPLAY Seed digital picture frame (WLAN, 8 inch, W08D, black); the current price can be seen in the banner below.

In contrast, here are a few Amazon Fire HD offers, which could also be used for other purposes.

Workflow for sharing pictures with the family

Let's assume three families, where family A wants to share pictures with families B and C.

Family A takes photos as they like, uploads the nice ones to Google Photos and works on an album together. At some point family A says: we want to show these photos to families B and C now. So they share the album with Google account 1 and Google account 2 of family B and with the Google account(s) of family C.

Google account 1 now gets a notification that new photos have been shared. The owner opens the invitation, sees the photos and clicks "Add to library" (see screenshot). This copies the pictures into your own library, and Fotoo can now access them.

Image 2: Add to library

Afterwards, the person behind account 1 opens their Fire HD tablet, opens Fotoo (if not already open), opens the settings and adds the new album to the Fotoo show – done.

Image: Fotoo photo stream source selection

When new pictures are added to the shared album, the recipient has to press the add-to-local-library button again; so you can decide whether to always use the same album or to share new albums each time.

Removing the Amazon account

For the steps so far, an Amazon account has to be registered on the Fire HD tablet in order to install the apps etc. If you want to give the tablet away as a gift, once you have reached this step the Amazon account can safely be removed from the Fire HD. As long as the Google Play Store with a Google account remains installed, the Fotoo app remains as well.

The Amazon account is removed via Settings –> My Account –> Sign out.

Then the tablet can safely be given away, without the tablet being able to make purchases in your name.

Removing Amazon ads

If a Fire HD tablet "with special offers" was bought to save some money, you can simply remove the Amazon account; that also removes the advertising.


To give the users of the picture frame a manual, I created a Google Docs document, which is kept up to date:

Next steps

  • Next up: building a stand out of Lego.
  • Inductive charging
  • Making it look more like a picture frame


  • Cheap digital picture frame
  • Can also be used as a normal tablet
  • Very good image quality
  • Turns on automatically when the charging cable is connected
  • Pictures rotate automatically
  • Pictures are managed via the cloud
    • Google Drive
    • Google Photos
    • Dropbox
    • Microsoft OneDrive
  • Pictures can also be stored locally on the device
  • Can be rolled out to several families
  • Pictures can be updated remotely
  • The digital picture frame supports WiFi


Out of my attempt to reverse engineer the Komand API (a security orchestration tool), I found myself writing a small Python helper to use the API. Maybe it is useful for some people, so I decided to open-source it.

It is hard to understand why a tool whose main purpose is to connect APIs does not have API documentation / a client itself.

Usage should be pretty simple: clone the repository and you are good to go:

usage: [-h] [-v] [-wm] [-j JOB]

optional arguments:
  -h, --help           show this help message and exit
  -v, --verbose        increase output verbosity
  -wm, --workflow_map  show workflow map
  -j JOB, --job JOB    show job status
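The help output above maps to an argparse setup roughly like the following. This is a sketch reconstructed from the usage text, not the actual script; only the flag names are taken from above.

```python
import argparse

def build_parser():
    """Re-create the CLI shown in the usage text."""
    parser = argparse.ArgumentParser()
    parser.add_argument("-v", "--verbose", action="store_true",
                        help="increase output verbosity")
    parser.add_argument("-wm", "--workflow_map", action="store_true",
                        help="show workflow map")
    parser.add_argument("-j", "--job",
                        help="show job status")
    return parser
```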

Feel free to open issues or make pull requests. The repository is hosted on GitHub:

100 days on the board of directors of FIRST

There is this tradition of looking back after the first 100 days of a new challenge. This post shares my perspective on my 100 days on the board of directors of FIRST (Forum of Incident Response and Security Teams).
On June 28th, 2018, the annual general meeting of FIRST elected five people to serve on the board of directors for a two-year term, and I was one of the five.
I still remember the day as if it were yesterday: I was very nervous going into the AGM, knowing that outstanding people had thrown their hats into the ring. In my diary I wrote about the great relief I felt after the results were called out.

Kuala Lumpur

Right after the election, the first board meeting was called to order by the chair, Thomas Schreck, and we had to elect the new officers and start thinking about the different tasks to be taken on by the newly elected people. Having been a guest at board meetings before, I thought I was used to the structure and the Robert's Rules that are used to run the meeting – but it is a different story calling out "aye" and "nay" to state your position when a decision is needed. Being new on the board means you get an ad-hoc bootcamp on the duties and obligations of the board of directors by the FIRST lawyer, plus some organisational topics and infrastructure to get you up to speed, such as a mail address and access to various online tools – all within hours.

San Francisco

This first physical board meeting was a new experience, so let me share it with you.

I had never been to San Francisco before, so being at the center of the digital revolution was mind-blowing in itself. Anyway, the reason for that trip in September was to bring eight people (two board members joined virtually) from around the globe together to meet, discuss and work on FIRST and for the community that FIRST represents.

Let me say those meetings are intense. I am used to attending meetings – in most meetings you either need to concentrate for an hour or two and then the meeting is closed, or it is a workshop setup where most of the content has already been agreed on / prepared in advance. For FIRST board meetings you have to pay attention for eight hours straight, most coffee breaks are spent continuing the conversation, and lunch is also about FIRST. As a non-native speaker, that is even more intense to follow. But we did get things done: we worked on topics that will enable FIRST to grow further and to use the resources we get from members and participants of our events in an even more targeted way.

Even on the travel days, we managed to squeeze in some one-on-one meetings to brainstorm in detail on topics that will sooner or later be proposed to the board of directors and the members.

That trip showed me how much enthusiasm every individual on the board has. They are truly committed, which is great to see and also a prerequisite, as everyone has their own tasks and duties to keep FIRST running.

Recognition of FIRST

Before joining the board, I truly believed that FIRST is a key player in addressing some of the challenges the global population is facing, e.g. fake news, cyber warfare and privacy. After 100 days, I can say it is a matter of fact that more and more organisations value FIRST, asking for our opinion, input or expertise – through training policy makers and through our efforts with our valued partner organisations. We are still on a long journey to prepare for that and to be able to answer all that demand at a level we feel comfortable with.


If you have read this far and think serving on the board is a tough job, you are right, but I haven't yet covered one particular aspect, which is the central point of every meeting: Nora Duhig.

Every meeting has an agenda (obviously) and needs to have minutes. Imagine 10 adults, each an expert in their professional area, discussing and arguing about everything from finance over contracts to nifty details of infrastructure (hosting on premises or in the cloud, which technology to use…). For transparency reasons, every meeting has to have minutes, so someone must keep track of everything, and that is Nora. It is impressive to observe her ability to follow the discussions and write minutes while remaining ready to be pulled into the discussion out of the blue at any time – because she has been attending board meetings far longer than most current members combined. To make decisions for the future, by either sticking to past decisions or changing the strategy, it is critical to know why a certain decision was taken in the past; having that context is gold.


It is hard to imagine how complex a not-for-profit organisation that "only" enables a community is. This organisation has 30 years of history, which includes some small things that we as a board need to work on to transform the ways of the past into a modern way of operating an organisation. FIRST does business with entities literally all around the globe because of the spread of the membership and the events we host or co-host.

I am in no way saying I am now settled on the board, as the planning for the FIRST conference 2019 – and already 2020 and 2021 (yes, not a typo!) – is taking up more and more time on board calls and on the other communication channels we use almost daily. So I am looking forward to the challenges we have to tackle as a group, and I am thankful for the opportunity.

Statistics last 100 days

– 2 board meetings in Kuala Lumpur
– 3 virtual board meetings
– 1 physical board meeting in San Francisco (3 days + various side meetings)
– 2 virtual meetings with the membership committee
– 3 calls as the liaison for special interest groups (SIGs)
– was active on 16 of the last 30 days in our internal chat
– 50+ mails written to the board mailing list
– 300 mails received via board mailing list

Thanks to Serge Droz for the picture shown above.

Backfischfest Blog 2018

Backfischfest Blog

In recent years, the Backfischfest Blog (or Backfischfest Vlog) around the band Die Döftels has developed into an institution in Worms.

The funny way it pokes fun at the tradition and looks behind the scenes is a delight, and it is the topic of conversation every day on the festival grounds and the Fischerwääd.

Stories like the "Terrence-Hill-Brücke" have also originated from the Backfischfest Blog.

The Backfischfest Blog is supported by several companies from Worms. Editing and camera work are done by rawk and Steven Amendt.

Here are the videos

10 years of the 9TageTicket

This year at the Backfischfest Worms, the 9TageTicket, in which I am involved together with some friends, celebrates its 10-year anniversary.
What started as a joke among friends, to show how often you had been to the Backfischfest, has meanwhile developed into an institution that is happily embraced by the people of Worms as well as by the showmen.

Working on the ticket is fun every year. It has become a tradition to print and cut the tickets together the day before the opening, and then to spend the first evening of the Fischfest together at the distribution desk, seeing familiar faces and simply enjoying the time, which can sometimes be a bit stressful.

In any case, I am looking forward to it again.


Raspberry Pi EyeFi Server

I tried to set up a Raspberry Pi as a standalone photo-catching device for multiple Eye-Fi cards.

Turns out that is not possible at the moment using EyeFi Mobi cards.

This is what I tried:


– Raspberry Pi

– EyeFi Mobi card

– Edimax USB Wifi Dongle

– Camera


– Raspbian install


– git clone the eyefiserver2
– follow


Start the script

sudo start /etc/eyefiserver.conf /var/log/eyefiserver.log


[03/26/16 01:32PM][runEyeFi] - Eye-Fi server started listening on port 59278
tcp        0      0 *               LISTEN      873/python  

Seems okay

Upload Key

The first issue was the upload key.
I connected two different Eye-Fi cards on OS X and Windows 7 and was unable to find an upload key other than 00000000000000000000000000000000


/Users/$USERNAME/Library/Application Support/Eyefi/Eyefi Mobi/

But there is a SQL database in:

And you can do the following:

sqlite3 offline.db
SQLite version 
Enter ".help" for usage hints.
sqlite> SELECT o_mac_address, o_upload_key FROM o_devices;
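The same lookup can be scripted; a minimal sketch assuming the `o_devices` schema shown above (the function name is mine):

```python
import sqlite3

def upload_keys(db_path):
    """Read MAC address / upload key pairs from the Eye-Fi Mobi offline.db,
    assuming the o_devices table shown above exists."""
    conn = sqlite3.connect(db_path)
    try:
        cur = conn.execute(
            "SELECT o_mac_address, o_upload_key FROM o_devices")
        return dict(cur.fetchall())
    finally:
        conn.close()
```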

Hmm, but still: using that upload key (redacted here), eyefiserver2 did not work.

And I was unable to get a connection from my camera to my Pi.

There is an issue reported on GitHub:

It references the following whitepaper:

So at the moment the problem has not been solved; a workaround would be to use a Mac / Windows system, or to upgrade to the larger Eye-Fi version:

Feel free to comment your solutions below.

Further reading:

Raspberry PI and Eye-Fi