Factorsynth is a new kind of musical tool. It uses machine learning techniques to decompose any input sound into a set of temporal and spectral elements. By rearranging and modifying these elements you can apply powerful transformations to your clips, such as removing notes or motifs, creating new ones, randomizing melodies or timbres, changing rhythmic patterns, remixing loops in real time, or creating complex sound textures.
Factorsynth is available as a Max For Live device, compatible with Ableton Live 9 and 10 on Macintosh machines.
SYSTEM REQUIREMENTS
- Mac OS only
- Ableton Live 9 or 10
- Max For Live (included in Live Suite editions, available as an extra in Intro and Standard editions)
Factorsynth is based on advanced machine learning algorithms that are new to the world of Max For Live devices. I've tried to make the interface as intuitive as possible, but it takes some time to get used to its workflow. You can get a taste of it by reading the user manual.
For any usage or support related questions, please contact firstname.lastname@example.org.
Will there be a Windows version?
I'm currently working on a Windows version, which I hope to make available during the second half of 2018.
Why is it Mac only? Aren't Ableton Live and Max for Live cross-platform?
Yes, they are. But Factorsynth is based on a custom Max external object that uses some Apple native libraries. I need to adapt this object in order to make it cross-platform.
Can Factorsynth's parameters be controlled by Live's automation envelopes or MIDI mappings?
Most of them can (element levels, operation buttons, solo buttons, factorize buttons, main mixer levels and mutes). The ones that cannot be controlled by Live are: number of components, individual buttons on the switchboards, analysis parameters and reset buttons.
How does Factorsynth work?
Factorsynth is based on a modified version of an algorithm called Non-Negative Matrix Factorization (NMF). Simply put, NMF can automatically extract interesting patterns from data. It has been used in fields such as computer vision and movie recommendations. I had to heavily adapt and tweak it in order to meet the real-time needs of music production.
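To give a rough idea of what NMF does (this is a generic textbook sketch, not Factorsynth's actual real-time implementation), the classic multiplicative-update algorithm factors a non-negative matrix V, such as a magnitude spectrogram, into spectral templates W and their temporal activations H:

```python
import numpy as np

def nmf(V, rank, iters=500, seed=0):
    """Factor a non-negative matrix V (m x n) into W (m x rank) @ H (rank x n)
    using Lee & Seung's multiplicative updates for the Euclidean cost."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + 1e-9
    H = rng.random((rank, n)) + 1e-9
    for _ in range(iters):
        # These updates keep W and H non-negative and reduce ||V - WH||
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Toy "spectrogram": rows are frequency bins, columns are time frames.
# Two alternating sound events, so rank 2 captures the structure.
V = np.array([[1.0, 0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
W, H = nmf(V, rank=2)
print(np.round(W @ H, 2))  # reconstruction close to V: each column of W is a
                           # spectral template, each row of H its activation
```

In a musical setting, each column of W tends to capture the spectrum of one recurring sound event (a note, a kick drum) and the matching row of H tells you when it occurs, which is what makes rearranging or muting individual elements possible.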
Can Factorsynth remove a full voice/instrument from a mix?
That's unlikely, unless your voice or instrument plays only a few sustained notes, with no effects and no vibrato. Factorsynth can extract interesting sound events, such as individual notes, attack noises, impulses or rhythmical structures (watch the demo video to get an idea), but it's not aimed at separating full instruments. That's the job of source separation, which is a harder thing to do! On the other hand, Factorsynth can often nicely separate drum sets and individual drum instruments (kick drums, hi-hats, snares...).
I started the Factorsynth project in 2014, first doing research on creative applications of a data analysis method called matrix factorization (hence the name). Since then I have released several prototype versions for the command line and plain Max. These old versions were not real-time capable, but they have been used by several composers of electronic and electroacoustic music for detailed sound editing and spatialization. Here are some recent works that have used Factorsynth:
You can find these experimental versions here.