Factorsynth is a new kind of musical tool. It uses machine learning techniques to decompose any input sound into a set of temporal and spectral elements. By rearranging and modifying these elements you can apply powerful transformations to your clips, such as removing notes or motifs, creating new ones, randomizing melodies or timbres, changing rhythmic patterns, remixing loops in real time, creating complex sound textures...
| "Quite magical." |
The Pro Audio Files
| "A powerful instrument for breaking down audio loops in to timbral and spectral elements that can be remixed and exported in astoundingly cool ways."
| "The incredible sound quality of the synthesis and the extremely intuitive interface have made Factorsynth an
irreplaceable tool for my electronic compositions." |
Maurizio Azzan, composer
Factorsynth is available as a Max For Live device, compatible with Ableton Live 9 and 10 on Macintosh and Windows machines.
| SYSTEM REQUIREMENTS | Mac OS or Windows |
| | Ableton Live 9 or 10 |
| | Max For Live (included in Live Suite editions, available as an extra in Intro and Standard editions) |
| CURRENT VERSION | v1.2, 21/09/2018 (Change log) |
Factorsynth is based on advanced machine learning algorithms that are new to the world of Max For Live devices. I have tried to make the interface as intuitive as possible, but its workflow takes some time to get used to. You can get a taste of it by reading the user manual.
For any usage or support related questions, please contact firstname.lastname@example.org.
Can Factorsynth's parameters be controlled by Live's automation envelopes or MIDI mappings?
Most of them can (element levels, operation buttons, solo buttons, factorize buttons, main mixer levels and mutes). The ones that cannot be controlled by Live are: number of components, individual buttons on the switchboards, analysis parameters and reset buttons.
How does Factorsynth work?
Factorsynth is based on a modified version of an algorithm called Non-Negative Matrix Factorization (NMF). Simply put, NMF can automatically extract interesting patterns from data. It has been used in fields such as computer vision and movie recommendations. I had to heavily adapt and tweak it in order to meet the real-time needs of music production.
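Factorsynth's adapted algorithm is not public, but the underlying idea of plain NMF is easy to illustrate. The sketch below (a minimal NumPy implementation using the standard Lee-Seung multiplicative updates, not Factorsynth's actual code) factorizes a toy "spectrogram" matrix V into spectral patterns W and their temporal activations H, so that V ≈ W @ H with all entries non-negative:

```python
import numpy as np

def nmf(V, k, n_iter=500, eps=1e-9):
    """Plain NMF via Lee-Seung multiplicative updates.

    Approximates a non-negative matrix V (freq x time) as the
    product W @ H, where W (freq x k) holds k spectral patterns
    and H (k x time) holds their temporal activations.
    """
    rng = np.random.default_rng(0)
    m, n = V.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(n_iter):
        # Multiplicative updates keep W and H non-negative
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy magnitude "spectrogram": two spectral patterns
# active at alternating time frames
V = np.array([[1, 0, 1, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1]], dtype=float)

W, H = nmf(V, k=2)
print(np.round(W @ H, 2))  # reconstruction, close to V
```

In an audio context, each column of W behaves like a spectral "element" (e.g. a note or drum hit) and the matching row of H tells you when it sounds; muting or swapping rows of H before resynthesis is what makes note removal or rhythm reshuffling possible.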
Can Factorsynth remove a full voice/instrument from a mix?
That's unlikely, unless your voice or instrument plays only a few sustained notes, with no effects and no vibrato. Factorsynth can extract interesting sound events, such as individual notes, attack noises, impulses or rhythmical structures (watch the demo video to get an idea), but it's not aimed at separating full instruments. That's the job of source separation, which is a harder thing to do! On the other hand, Factorsynth can often nicely separate drum sets and individual drum instruments (kick drums, hi-hats, snares...).
I started the Factorsynth project in 2014, first doing research on creative applications of a data analysis method called matrix factorization (hence the name). I have since released several prototype versions for the command line and plain Max. These older versions were not real-time capable, but they have been used by several composers of electronic and electroacoustic music for detailed sound editing and spatialization. Here are some recent works that have used Factorsynth:
You can find these experimental versions here.