Rearview Mirror: Side Projects incl. Particle Simulation, 3D Replicator & Augmented Reality
Particle Simulation on IBM Cell Blades
One project back at university was to write a simple particle simulation on IBM Cell Blades. Its main intention was not so much the simulation itself, but the learning experience of writing software against a heterogeneous multi-core architecture like the Cell Broadband Engine Architecture, which pairs a PowerPC-based Power Processing Element (PPE) with several Synergistic Processing Elements (SPEs). While the CPU design was very competitive for its time, as end users might remember from the PlayStation 3 which also used a Cell processor with 8 SPEs (technically: 1 disabled to improve yields, 1 reserved for the system, 6 available to games), it took some time for developers to adapt to the unfamiliar way of programming it. It certainly was a different experience for us young students, an excursus into mainframe land. The picture below shows my point sprite based OpenGL rendering of a particle simulation run.
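In code, such a point sprite draw call boils down to very little. Here is a minimal sketch in legacy OpenGL (not the original code, function name and particle layout are illustrative): each particle position is submitted as a single point and expanded into a textured, screen-aligned sprite by the GPU.

```cpp
#include <GL/gl.h>
#include <vector>

// Render particle positions as textured point sprites (legacy OpenGL).
// Assumes a sprite texture is already bound and `positions` holds x,y,z per particle.
void drawParticles(const std::vector<float>& positions) {
    glEnable(GL_POINT_SPRITE);                              // expand each point into a quad
    glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE);  // generate texture coords per sprite
    glPointSize(8.0f);                                      // sprite size in pixels

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, positions.data());
    glDrawArrays(GL_POINTS, 0, static_cast<GLsizei>(positions.size() / 3));
    glDisableClientState(GL_VERTEX_ARRAY);

    glDisable(GL_POINT_SPRITE);
}
```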

Intraoperative Head Localization for Neurosurgery
For a few years I considered obtaining degrees in both medicine and computer science – I wanted to combine neuroradiology, neurosurgery or laboratory medicine with a CS background, but ultimately decided to restrict myself to pure computer science, since I wanted to stay focused on AI and cloud. Still, medicine would have been a great path and I hold the discipline in the highest regard. I was fortunate enough to attend an outstanding course at the Helmholtz Institute where we would analyze CAT scans and segment 3D sections of a skull by density. This way we could isolate spherical fiducial markers attached to the head and, by localizing them very precisely, infer the exact orientation of the head as needed for such critical neurosurgical procedures. The project was not only interesting because of the vision aspects, but also because it gave us insights into how rigorous requirements analysis and quality assurance need to be for medical software.
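To illustrate the core idea, here is a minimal sketch (assumed, not the course code; names and the volume layout are illustrative): threshold the CT volume by density and take the centroid of the bright blob inside a region of interest around a candidate marker.

```cpp
#include <array>
#include <cstdint>
#include <vector>

struct Volume {
    int nx, ny, nz;
    std::vector<int16_t> hu;  // densities in Hounsfield units, x-fastest layout
    int16_t at(int x, int y, int z) const { return hu[(z * ny + y) * nx + x]; }
};

// Centroid of all voxels above `threshold` inside an axis-aligned region of
// interest, assuming the ROI contains exactly one spherical marker.
std::array<double, 3> markerCentroid(const Volume& v, int16_t threshold,
                                     int x0, int x1, int y0, int y1, int z0, int z1) {
    double sx = 0, sy = 0, sz = 0, n = 0;
    for (int z = z0; z < z1; ++z)
        for (int y = y0; y < y1; ++y)
            for (int x = x0; x < x1; ++x)
                if (v.at(x, y, z) > threshold) {
                    sx += x; sy += y; sz += z; n += 1;
                }
    if (n == 0) return {0, 0, 0};            // nothing above threshold in this ROI
    return {sx / n, sy / n, sz / n};         // voxel coordinates; scale by spacing for mm
}
```

With three or more such marker centroids, the head pose follows from a rigid registration against their known arrangement.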

Database for Neuromuscular Diseases Search & Semantics as well as
Rule-Based Decision Support System for Cerebrovascular Disorders
I did multiple projects with the Medical Informatics chair at Uniklinik RWTH Aachen. I used Hibernate as a persistence provider and Hibernate Search as a search framework to enable complex queries against structured medical data, and integrated them with a GWT/GXT web application. Another cool side project was implementing rule-based decision support for cerebrovascular disorders with the business rules management system (BRMS) Drools by Red Hat.


3D Replicator
In another project I participated in, we built a 3D replicator, i.e. a 3D scanner and printer unit that could scan an object on one side and reproduce it on the other. While the 3D printing component was based on an off-the-shelf 3D printer, we had to design the scanner component from scratch. Luckily, RWTH Aachen is not only a top university for computer science, but also for electrical and mechanical engineering. The interdisciplinary work of jointly designing an industry-grade turntable and a combination of different microcontrollers to steer it, but also to power the computer vision and user interface components, was very illuminating. [One of the things I loved about RWTH Aachen was how often they forced us to collaborate across disciplines – with other engineering disciplines, but also with physicians, for instance.]
Here is the software architecture I came up with. It supported multiple actuators to control the turntable and line laser movement, and multiple GUIs, which even included a WebGL renderer I wrote. We finally went with my Qt-based interface (with Thingiverse integration, so you could also download models instead of always having to scan them), since you can run Qt on embedded systems without an X server, which allows for sophisticated GUIs without unnecessary overhead. While I had already taken computer vision and thus initially supported basic epipolar geometry – when you know the configuration of multiple cameras, epipolar geometry means you only need to search along a single line in the second image to find the pixel corresponding to a point in the image of camera 1 – we ended up using line laser triangulation, which is not only simpler in concept, but also very precise.
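As a rough illustration of the triangulation step (a minimal sketch under assumed names, not the project code): find the brightest laser pixel per image row, turn that pixel into a viewing ray using the calibrated camera intrinsics, and intersect the ray with the known laser plane.

```cpp
#include <cmath>
#include <vector>

struct Point3 { double x, y, z; };

// For one row of a grayscale image, return the column with the brightest
// response: a crude stand-in for subpixel laser peak detection.
int laserPeakColumn(const std::vector<unsigned char>& row) {
    int best = 0;
    for (std::size_t u = 1; u < row.size(); ++u)
        if (row[u] > row[best]) best = static_cast<int>(u);
    return best;
}

// Intersect the camera ray through pixel (u, v) with the laser plane
// n . p = d, both given in camera coordinates. fx, fy, cx, cy are the
// pinhole intrinsics of the calibrated camera.
Point3 triangulate(double u, double v,
                   double fx, double fy, double cx, double cy,
                   const Point3& n, double d) {
    Point3 r{(u - cx) / fx, (v - cy) / fy, 1.0};          // ray direction through the pixel
    double t = d / (n.x * r.x + n.y * r.y + n.z * r.z);   // solve n . (t * r) = d
    return {t * r.x, t * r.y, t * r.z};                   // 3D point on the object surface
}
```

Sweeping the laser over the rotating object and repeating this per row yields the point cloud that is then turned into a printable mesh.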



Computer Vision and Augmented Reality on the Nokia N900
The N900 was a very cool device for developers. It had a command line, it ran on Linux (Maemo) and you could program it in hard C++. For instance, here is a GStreamer-based one-liner visual pipeline I ran from the command line to detect and blur faces:
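Roughly along these lines (a sketch, not the exact command; it assumes the OpenCV-based faceblur element from gst-plugins-bad, GStreamer 0.10 element names and a stock Haar cascade path, all of which may have differed on the N900):

```sh
gst-launch-0.10 v4l2src ! ffmpegcolorspace ! \
  faceblur profile=/usr/share/opencv/haarcascades/haarcascade_frontalface_alt.xml ! \
  ffmpegcolorspace ! xvimagesink
```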

So it was only natural to attempt an early augmented reality application on the device. I extended the Qt Affine Transformation demo so I could warp images, detect FAST features on them, and learn decision trees on the descriptors for multi-object tracking (MOT). I then wrote a runtime mobile app to run them on the device. Unfortunately, I cannot claim the core scientific work behind this, which was done by Simon Taylor at the University of Cambridge, who was incredibly helpful in explaining the approach to me. Back in the day his approach stood out because it was 4.5x faster than Wagner's contemporary approach while using only about 10% of the space and still reaching a 99.5% detection rate – you can read more about it in the paper "Multiple Target Localisation at over 100 FPS" (British Machine Vision Conference, BMVC 2009) that he co-authored with Tom Drummond.

It is a good idea to use sphere tessellation by icosahedron subdivision to get an even coverage of viewpoints around the object. Similarly, it is wise to subdivide the target into sections and enforce a minimum number of features per region, so that less structured sections are also sufficiently covered.
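The subdivision idea in a minimal sketch (assumed, not the original code): split every triangle into four and push the new vertices back onto the unit sphere; after a few iterations the vertices form nearly evenly spread viewing directions.

```cpp
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

static Vec3 normalized(Vec3 v) {
    double l = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / l, v.y / l, v.z / l};
}
static Vec3 midpointOnSphere(const Vec3& a, const Vec3& b) {
    return normalized({(a.x + b.x) / 2, (a.y + b.y) / 2, (a.z + b.z) / 2});
}

struct Tri { Vec3 a, b, c; };

// One subdivision pass: each triangle becomes four, with the new edge
// midpoints projected onto the unit sphere. Seed with the 20 faces of a
// unit icosahedron (vertex table omitted) and call this two or three times.
std::vector<Tri> subdivide(const std::vector<Tri>& tris) {
    std::vector<Tri> out;
    for (const Tri& t : tris) {
        Vec3 ab = midpointOnSphere(t.a, t.b);
        Vec3 bc = midpointOnSphere(t.b, t.c);
        Vec3 ca = midpointOnSphere(t.c, t.a);
        out.push_back({t.a, ab, ca});
        out.push_back({t.b, bc, ab});
        out.push_back({t.c, ca, bc});
        out.push_back({ab, bc, ca});
    }
    return out;
}
```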

FAST already leverages the Bresenham circle, whose gradient yields an orientation which can then be used to orient the descriptor and improve rotation invariance.
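One simple variant of that idea in a minimal sketch (assumed, not the original code; the tracker's exact scheme may differ): weight each of the 16 circle directions by its intensity difference to the center pixel and take the angle of the resulting vector.

```cpp
#include <cmath>

// The 16 offsets of the radius-3 Bresenham circle used by FAST.
static const int kCircle[16][2] = {
    { 0, -3}, { 1, -3}, { 2, -2}, { 3, -1}, { 3,  0}, { 3,  1}, { 2,  2}, { 1,  3},
    { 0,  3}, {-1,  3}, {-2,  2}, {-3,  1}, {-3,  0}, {-3, -1}, {-2, -2}, {-1, -3}};

// `img` is a grayscale image with `stride` bytes per row; (x, y) is the corner.
double keypointOrientation(const unsigned char* img, int stride, int x, int y) {
    double mx = 0.0, my = 0.0;
    int center = img[y * stride + x];
    for (const auto& o : kCircle) {
        // Weight each circle direction by how much brighter or darker it is
        // than the center pixel.
        int diff = img[(y + o[1]) * stride + (x + o[0])] - center;
        mx += diff * o[0];
        my += diff * o[1];
    }
    return std::atan2(my, mx);  // radians; used to rotate the descriptor patch
}
```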

Applied together, this yields reproducible feature coverage with clean orientations. I highlighted one of the descriptor patches in neon green for illustration. While I would build this system completely differently nowadays, this was a good solution before deep learning and made it possible to run multiple object tracking on early smartphones.

Own Window Manager
This was one of the major exercises for Designing Interactive Systems, which I implemented with my colleague Jan Schnitzler. We were asked to build our own primitive window manager, GUI toolkit and eventing system to learn how we would write these from scratch if we ever had to roll our own solution. It was quite insightful to be forced to do everything yourself, and it certainly deepened my understanding of how systems like X, Wayland, KDE, GNOME, Windows and Qt work.
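The heart of such a toolkit is surprisingly small. Here is a minimal sketch (assumed, not our course code) of a widget tree with recursive hit-testing event dispatch, which is essentially what the eventing system boiled down to: the innermost widget under the pointer gets the first chance to consume the event.

```cpp
#include <memory>
#include <vector>

struct MouseEvent { int x, y; };  // coordinates relative to the receiving widget

class Widget {
public:
    Widget(int x, int y, int w, int h) : x_(x), y_(y), w_(w), h_(h) {}
    virtual ~Widget() = default;

    void addChild(std::unique_ptr<Widget> child) { children_.push_back(std::move(child)); }

    bool contains(int px, int py) const {
        return px >= x_ && px < x_ + w_ && py >= y_ && py < y_ + h_;
    }

    // Offer the event to children front-to-back (translated into their local
    // coordinates); if no descendant consumes it, handle it here.
    bool dispatch(const MouseEvent& e) {
        for (auto it = children_.rbegin(); it != children_.rend(); ++it) {
            Widget& c = **it;
            if (c.contains(e.x, e.y) && c.dispatch({e.x - c.x_, e.y - c.y_}))
                return true;  // consumed by a descendant
        }
        return onMouse(e);
    }

protected:
    virtual bool onMouse(const MouseEvent&) { return false; }  // override in concrete widgets

private:
    int x_, y_, w_, h_;
    std::vector<std::unique_ptr<Widget>> children_;
};
```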

RWTH Aachen is really good at having these comprehensive courses. In another one about hardware we implemented our own operating system for microcontrollers, used it to build an atomic clock receiver module with an optical connector, and programmed some FPGAs along the way. That was insightful with regard to hardware programming, but especially with regard to operating systems – nothing teaches you these concepts better than having implemented your own scheduler.
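For illustration, a minimal sketch (assumed, not our course code) of the simplest flavor of such a scheduler, a cooperative round-robin loop; a preemptive one additionally saves and restores task contexts from a timer interrupt.

```cpp
#include <cstddef>

typedef void (*Task)(void);

static Task tasks[8];
static std::size_t taskCount = 0;

void addTask(Task t) {
    if (taskCount < sizeof(tasks) / sizeof(tasks[0])) tasks[taskCount++] = t;
}

// Cycle through the registered tasks forever; each task does a small unit of
// work and returns promptly so the others get their turn.
void schedulerLoop() {
    for (;;) {
        for (std::size_t i = 0; i < taskCount; ++i) {
            tasks[i]();
        }
    }
}
```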
Game with Physical Representation
Last but not least, this was a fun project together with my colleagues Karolski, Sundararajan, Hommelsheim and Bormann – we implemented a blocks game in which you physically rotated your phone to tip over a set of blocks in a game world, thus moving the block structure around a maze to reach a target location. We would read out the accelerometer, gyroscope and magnetometer data from the phone, infer the phone's rotation, and then send it to the game we had written with jMonkeyEngine and the Bullet physics engine to control the block structure. We had also built a physical cube we could put our phone into and then use as a physical proxy for the block structure on-screen.
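One simple way to infer orientation from those sensors is a complementary filter, sketched below (assumed, not the project code): integrate the gyro for responsiveness and slowly correct toward the tilt implied by gravity; in the actual game the magnetometer would anchor the heading in the same fashion.

```cpp
#include <cmath>

struct Orientation { double roll = 0.0, pitch = 0.0; };  // radians

void update(Orientation& o,
            double gx, double gy,             // gyro rates around x/y in rad/s
            double ax, double ay, double az,  // accelerometer in m/s^2
            double dt, double alpha = 0.98) { // alpha: how much to trust the gyro
    // Tilt angles implied by gravity alone: noisy but free of drift.
    double accRoll  = std::atan2(ay, az);
    double accPitch = std::atan2(-ax, std::sqrt(ay * ay + az * az));

    // Blend the integrated gyro motion with the accelerometer reference.
    o.roll  = alpha * (o.roll  + gx * dt) + (1.0 - alpha) * accRoll;
    o.pitch = alpha * (o.pitch + gy * dt) + (1.0 - alpha) * accPitch;
}
```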
