TOM BREWE

Hi, my name is Tom. Here you can find a selection of professional and personal
projects I have worked on over the last few years. Feel free to contact me with
any questions or inquiries.

I like to work with JavaScript and web technologies to develop and implement all
kinds of applications and prototypes. Many of the projects I have been involved
in deal with experimental interaction designs, data visualization, VR/XR,
graphics programming (2D and 3D) and computer vision.

Contact me via mail or on LinkedIn.
Find more things I've been working on at GitHub.


PROCESS.ANNOTATOR

2015 – 2018

The "process.annotator" is a university research project developed at the
Cluster of Excellence "Image Knowledge Gestaltung" at Humboldt University
Berlin. It was mainly developed by Anouk Hoffmeister (design, concept) and Tom
Brewe (technical implementation, code) with additional hardware prototyping
support from Sebastian Zappe.

The goal of the project was to explore how to annotate physical objects that
also have a digital counterpart, like 3D-scanned museum artefacts. We developed
a functional prototype to explore several different annotation methods,
including physical annotation with a custom-built Bluetooth pen, annotation in
VR, and annotation by speech input.

The app syncs annotations with a CouchDB backend and allows multiple local or
remote users to work with the digital/analog-hybrid artefacts and their
annotations.
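
Conceptually, the syncing works like the following PouchDB sketch (the database
name, URL and document fields are illustrative, not the project's actual code):

  import PouchDB from 'pouchdb';

  const local = new PouchDB('annotations');

  // Two-way, continuous replication with a remote CouchDB instance;
  // every connected client converges on the same set of annotations.
  local.sync('https://couch.example.org/annotations', { live: true, retry: true })
    .on('change', info => console.log('synced:', info.direction))
    .on('error', err => console.error('sync failed:', err));

  // A local write propagates to all other peers automatically.
  local.put({ _id: 'note-1', artefact: 'vase-42', text: 'crack on the rim' });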

Technologies and libraries used: JavaScript, Electron, WebVR, CouchDB/PouchDB,
Polymer, Speech-To-Text, Bluetooth
Links:
Website (HU Berlin)
videos: 1 2 3 4
image: 1


REALTIME DATA VISUALIZATIONS

2015 – 2018

In another interdisciplinary research project at HU Berlin, I was responsible
for the technical implementation of various interaction designs by Friedrich
Schmidgall, including many data visualizations.

One such visualization was for an experiment at the institute, in which office
users wore hardware beacons (OpenBeacon) for a defined period of time. The
beacons' low-energy signals can be used to triangulate the approximate
locations of the beacons and their wearers. The video shows a demonstration of
the realtime data visualization I developed for this project.
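
The underlying idea fits in a few lines of JavaScript (a generic sketch of
signal-strength positioning, not the project's actual code): convert each
received signal strength (RSSI) into an approximate distance, then intersect
the distance circles of three fixed readers.

  // Log-distance path loss model; txPower is the expected RSSI at 1 m.
  function rssiToDistance(rssi, txPower = -59, pathLossExp = 2.0) {
    return Math.pow(10, (txPower - rssi) / (10 * pathLossExp));
  }

  // Trilateration from three readers, each { x, y, d } with d in metres.
  // Subtracting the circle equations pairwise gives a 2x2 linear system.
  function trilaterate([a, b, c]) {
    const A = 2 * (b.x - a.x), B = 2 * (b.y - a.y);
    const C = a.d ** 2 - b.d ** 2 + b.x ** 2 - a.x ** 2 + b.y ** 2 - a.y ** 2;
    const D = 2 * (c.x - a.x), E = 2 * (c.y - a.y);
    const F = a.d ** 2 - c.d ** 2 + c.x ** 2 - a.x ** 2 + c.y ** 2 - a.y ** 2;
    const det = A * E - B * D;
    return { x: (C * E - B * F) / det, y: (A * F - C * D) / det };
  }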

Technologies and libraries involved: JavaScript, D3, MongoDB, OpenBeacon


MIDIOLA

2015

MIDIOLA is a mobile app that pays homage to the music rolls of early
20th-century player pianos. With our web app, digitized music rolls from the
collection of the Deutsches Museum Munich can be played back on your cellphone.
Rolls can also be scanned with the smartphone camera to create music on the
spot. Our aim is to put the obsolete medium of the pianola music roll back on
the map: the historic technology should be made accessible to a broader public
and be experienced in an audio-visual installation.
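
On the playback side, each hole detected in a roll becomes a timed note event;
a minimal, hypothetical WebAudio sketch (the event format and values are made
up for illustration):

  const ctx = new AudioContext();

  function playNote({ midi, start, duration }) {
    const osc = ctx.createOscillator();
    const gain = ctx.createGain();
    osc.frequency.value = 440 * Math.pow(2, (midi - 69) / 12); // MIDI -> Hz
    gain.gain.setValueAtTime(0.2, ctx.currentTime + start);
    // Release the note smoothly instead of clicking off.
    gain.gain.setTargetAtTime(0, ctx.currentTime + start + duration, 0.05);
    osc.connect(gain).connect(ctx.destination);
    osc.start(ctx.currentTime + start);
    osc.stop(ctx.currentTime + start + duration + 0.5);
  }

  // Two holes read off a (hypothetical) scanned roll.
  [{ midi: 60, start: 0, duration: 0.4 },
   { midi: 64, start: 0.5, duration: 0.4 }].forEach(playNote);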

The software was created by a four-person team with backgrounds in coding,
design and art history (Luca Beisel, Tom Brewe, Joscha Lausch, Mohammad Moradi).
As an entry in the Coding DaVinci 2015 hackathon for open cultural data, it won
the prize for “Best Design”.

Technologies and libraries involved: JavaScript, Cordova, Angular, WebAudio
Links:
Website Award Press Text (FU Berlin)
image: 1 2


LINDENMAYER.JS

2015 – 2018

Lindenmayer is a JS library for creating procedural graphics like trees or
fractals in the browser. My incentive to write this library was to provide
other people and myself with a powerful yet easy-to-use tool for utilizing
L-systems without restricting the user in how the results are visualized or
used. You can use it to create fractal, procedural 2D & 3D graphics, or even
sound.
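
At its core, an L-system is repeated string rewriting; a generic sketch of the
idea (not the library's actual API):

  // Rewrite every symbol of the axiom according to its production,
  // leaving symbols without a production (like '+' and '-') untouched.
  function rewrite(axiom, productions, iterations) {
    let result = axiom;
    for (let i = 0; i < iterations; i++) {
      result = [...result].map(ch => productions[ch] ?? ch).join('');
    }
    return result;
  }

  // Koch snowflake: 'F' means "draw forward", '+'/'-' mean "turn".
  // How the resulting string is drawn is entirely up to the user.
  console.log(rewrite('F++F++F', { F: 'F-F++F-F' }, 2));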

Lindenmayer is also used in the aframe-lsystem-component to create procedural
graphics in WebVR/XR.

Technologies and libraries involved: JavaScript, rollup
Links:
github A-Frame Blogpost


THE DUNESDAY MACHINE

2018

"The Dunesday Machine" is a realtime graphics demo I released 2018 at the Berlin
based computer arts and multimedia festival Deadline in the "combined PC intro"
competition. It is a 8K JavaScript intro, meaning the complete executable file
including music, graphics, camera paths is less than 8 Kilobytes small and works
offline without external resources from the internet.
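
The visuals are rendered with sphere tracing: step along each ray by the
distance to the nearest surface, which is cheap to evaluate for signed distance
fields. A sketch of the core loop in illustrative JavaScript (in the intro
itself this runs per pixel in a GLSL fragment shader):

  // Signed distance from point p to a unit sphere at the origin.
  const sphereSDF = p => Math.hypot(p[0], p[1], p[2]) - 1;

  function trace(origin, dir, maxSteps = 64, eps = 1e-3) {
    let t = 0;
    for (let i = 0; i < maxSteps; i++) {
      const p = origin.map((o, k) => o + dir[k] * t);
      const d = sphereSDF(p);
      if (d < eps) return t; // hit: distance along the ray
      t += d;                // safe step: nothing is closer than d
    }
    return Infinity;         // miss
  }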

In early 2019 it was nominated for the Meteoriks Awards in the category "New
Talent".

The video shown here is just a recording of the demo/intro. For the
realtime-rendered version, please visit the pouet link below.

Technologies involved: JavaScript, WebGL/GLSL, WebAudio, sphere tracing
Links:
website (pouet)
image video


SHIFTED DIMENSIONS

2017

Shifted Dimensions is a VR game created in 30 days for the js13kGames game jam.
All entries to the js13kGames competition have to be smaller than 13 kilobytes
when zipped. Because of this size limit, detailed models or larger textures
can't be used; assets have to be generated procedurally instead.

In Shifted Dimensions, the player's goal is to catch invisible artefacts
floating in the space around them. The artefacts can be made visible by using
the VR controller as a torch, which simultaneously acts as a magnet, pulling
objects closer to the player.
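
A hypothetical sketch of the magnet mechanic in three.js-style vector math
(names and constants are made up, not the game's actual code):

  // Each frame, nudge a lit artefact toward the controller; the pull
  // weakens with distance, like the beam of the torch.
  function pullTowards(artefact, controller, dt, strength = 2.0) {
    const dir = controller.position.clone().sub(artefact.position);
    const dist = dir.length();
    if (dist > 0.01) {
      artefact.position.add(dir.normalize().multiplyScalar(strength * dt / dist));
    }
  }
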
Technologies and libraries involved: JavaScript, A-Frame, GLSL
Links:
website github


DYNAMUSEUM

2015 – 2016

Dynamuseum is a personal project that aimed to enable art enthusiasts to
interactively explore artefacts stored in online image databases via a 3D
first-person experience. Images are dynamically fetched via the APIs of
Europeana, the Rijksmuseum or the Victoria and Albert Museum. When standing in
front of the virtually framed pieces, metadata and image descriptions are
displayed to the user. I developed the game-like experience as a web app
utilizing the 3D library three.js.
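
The fetch-and-frame flow might look roughly like this (the Europeana endpoint,
parameters and field names are assumptions based on its public search API, and
may differ from the actual project code):

  import * as THREE from 'three';

  const texLoader = new THREE.TextureLoader();

  async function hangPaintings(scene, query, apiKey) {
    const url = 'https://api.europeana.eu/record/v2/search.json' +
      `?wskey=${apiKey}&query=${encodeURIComponent(query)}&media=true`;
    const { items = [] } = await (await fetch(url)).json();
    items.slice(0, 10).forEach((item, i) => {
      const frame = new THREE.Mesh(
        new THREE.PlaneGeometry(1, 1),
        new THREE.MeshBasicMaterial({ map: texLoader.load(item.edmPreview[0]) })
      );
      frame.position.set(i * 1.5, 1.6, -3);    // hang one piece every 1.5 m
      frame.userData.title = item.title?.[0];  // metadata shown on approach
      scene.add(frame);
    });
  }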

Technologies and libraries involved: JavaScript, WebGL/three.js
Links:
vimeo


DOCUMAT

2013 – 2015

Documat is a prototype developed at HU Berlin by Anouk Hoffmeister (design) and
Tom Brewe (code). The goal was to scan, document and tag physical objects using
intuitive hand gestures. The prototype featured OCR detection and automatic
cropping when scanning text pages or book covers. All actions could be
performed via basic hand gestures, which were recognized with the Leap Motion
sensor.

Technologies and hardware involved: Processing (Java), OpenCV, Leap Motion
(hardware)
Links:
Leap Motion Blogpost

Tom Brewe - Berlin 2022
Mastodon