I have been looking into TouchOSC as an example application that can send and receive Open Sound Control (OSC) messages. I have installed it on my iPhone 5 and on my iPad 2. Using the inputosc and outputosc modules, I can read and write OSC messages with EEGsynth. The only thing that I have not figured out yet is how to broadcast OSC messages.
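To give an idea of the read side, here is a minimal sketch in Python (the language the EEGsynth modules are written in) using the python-osc library. It is not the actual inputosc module, just an illustration; the port number is a placeholder and has to match the outgoing OSC port configured in the TouchOSC app.

```python
from pythonosc import dispatcher, osc_server

# Print every OSC message that the TouchOSC app sends to this machine.
def print_message(address, *args):
    print(address, args)

disp = dispatcher.Dispatcher()
disp.set_default_handler(print_message)

# Listen on all interfaces; port 8000 is a placeholder and must match
# the "outgoing port" in the TouchOSC app's OSC connection settings.
server = osc_server.BlockingOSCUDPServer(("0.0.0.0", 8000), disp)
server.serve_forever()
```

Moving a fader or pressing a button in the TouchOSC layout should then show up as OSC addresses and values in the terminal.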
We are planning a performance in which we also want to use visualizations, and it would be great if we could control those using EEGsynth. The AVmixer website lists OSC support, hence the question arose whether we could get the two to work together…
The TouchOSC editor for PC/Mac/Linux complements the TouchOSC smartphone/tablet app. AVmixer comes with a template layout for TouchOSC that resembles the desktop interface. Using the TouchOSC editor you can upload this template to your smartphone/tablet.
It turns out that the AVmixer software itself does not support OSC, only MIDI via a software driver. Another complement to TouchOSC is the TouchOSC MIDI bridge which, combined with the AVmixer TouchOSC template layout, provides the solution. It works like this:
TouchOSC app -> TouchOSC MIDI bridge -> AVmixer
This means that we can also use AVmixer with EEGsynth, since we can already read and write MIDI messages. However, the software MIDI interface only works between applications running on the same computer, so this would not work if EEGsynth runs on a different computer (e.g. a Raspberry Pi) than the one running the AVmixer software. In that case the TouchOSC MIDI bridge is still useful, and we can do it like this:
EEGsynth/outputosc -> TouchOSC MIDI bridge -> AVmixer
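The write side toward the bridge boils down to sending plain OSC over the network, roughly as in the sketch below (again with python-osc). This assumes the bridge is reachable as a regular OSC/UDP endpoint; the IP address, port number, and OSC address are placeholders that would have to be taken from the bridge settings and from the AVmixer template layout (which you can inspect in the TouchOSC editor).

```python
from pythonosc import udp_client

# IP address and port of the computer running AVmixer and the TouchOSC
# MIDI bridge; both values here are placeholders.
bridge = udp_client.SimpleUDPClient("192.168.1.30", 12101)

# Hypothetical control from the AVmixer TouchOSC template layout; the real
# OSC addresses can be looked up in the template with the TouchOSC editor.
bridge.send_message("/1/fader1", 0.8)
```

In EEGsynth this sending would be handled by the outputosc module, with the control values coming from the EEG-derived signals rather than being hard-coded.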
So to summarize the answer to the initial question: yes, we can use EEGsynth with AVmixer.