Fragment provides a unique approach to sound synthesis and composition, with the capability to generate audio and visuals simultaneously and in real time.
A vast range of sounds and music can be created with fine control over harmonic content and spatial dynamics.
The Fragment canvas is the source of both the visuals and the sound synthesis.
Visuals are generated by the user by giving instructions to the GPU with GLSL, a simple high-level programming language; the Fragment GLSL script (also called a fragment shader) is executed by the GPU for every pixel on the canvas. This approach allows very fast real-time manipulation of the pixel data.
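The per-pixel execution model can be sketched as follows. This is a conceptual illustration in Python, not GLSL: the same small function is evaluated independently for every pixel (the GPU does this in parallel; here we simply loop). The `shade` function and the canvas size are hypothetical.

```python
WIDTH, HEIGHT = 8, 4  # tiny illustrative canvas

def shade(x, y):
    """Hypothetical per-pixel program, returning an RGBA tuple.
    Here it lights a single horizontal line at y == 2."""
    intensity = 1.0 if y == 2 else 0.0
    return (intensity, intensity, 0.0, 1.0)  # (R, G, B, A)

# The "GPU": run the same program once per pixel.
canvas = [[shade(x, y) for x in range(WIDTH)] for y in range(HEIGHT)]
```

On a real GPU every pixel is shaded concurrently, which is what makes the real-time manipulation fast.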
Fragment has only one fragment shader, which has the particularity of being shared between the users of an online session; it updates and compiles on the fly as you or other people type. This is the collaborative nature of Fragment: visuals and sounds can be produced together online.
Some Fragment settings are also synchronized between users, such as slices, uniforms and some global settings.
Fragment is an image-synth: the source of data for the sound synthesis is the pixel, also called a fragment.
With Fragment, all sound is shaped at the frequency level by drawing (with GLSL code) over a canvas; frequency is represented on the vertical axis and stereo/mono amplitude by the intensity of each pixel. Parts of the canvas are then captured by user-positioned slices.
Since the source of data for the sound synthesis is the pixels, we need a way to capture them; this is done by slicing the canvas. Pixel data (1px wide) is captured from each slice at the browser display refresh rate and translated to notes from the RGBA pixel values; the notes are then interpreted and played by one or more synthesis methods in real time.
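The capture step can be sketched like this: a 1px-wide vertical column of the canvas is read, and each lit pixel becomes a note. The channel mapping assumed here (R for left amplitude, G for right amplitude) and the frequency function are illustrative assumptions, not Fragment's exact internals.

```python
def capture_slice(canvas, slice_x, row_to_freq):
    """Read one 1px-wide column and turn lit pixels into notes.
    canvas: rows (top to bottom) of RGBA tuples.
    row_to_freq: maps a row counted from the bottom to a frequency (Hz)."""
    notes = []
    height = len(canvas)
    for y, row in enumerate(canvas):
        r, g, b, a = row[slice_x]
        if r > 0.0 or g > 0.0:  # pixel is lit: emit a note
            notes.append({
                "freq": row_to_freq(height - 1 - y),  # bottom row = lowest pitch
                "amp_left": r,   # assumed mapping: red -> left channel
                "amp_right": g,  # assumed mapping: green -> right channel
            })
    return notes

# Tiny 4-row, 1-column canvas with a single lit pixel.
canvas = [[(0.0, 0.0, 0.0, 1.0)] for _ in range(4)]
canvas[1][0] = (0.5, 0.25, 0.0, 1.0)
notes = capture_slice(canvas, 0, lambda row: 55.0 * 2 ** (row / 12))
```

In Fragment this happens at the display refresh rate, so the slice is effectively a playhead that continuously converts pixels into notes.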
The canvas represents frequencies (mapped exponentially) on the vertical axis; the horizontal axis generally represents time.
One of the unique features of Fragment is the time visualization of any sound: since the horizontal axis spans several pixels, the user can see a limited window of the sound's past, present and future.
There are many types of sound synthesis available within Fragment; all work from the same concept, the pixels.
Fragment's default sound synthesis method is additive. With additive synthesis, Fragment becomes a fully capable spectral synthesizer able to do re-synthesis based on imported images, videos or audio files: an image-synth where any number of partials can be generated, with no limits except the canvas height and the available computing resources.
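The core of additive synthesis is simply summing sine partials, one per lit pixel row. A minimal sketch (frequencies and amplitudes below are arbitrary illustrative values, not taken from a real canvas):

```python
import math

SAMPLE_RATE = 44100

def additive(partials, n_samples):
    """Sum sine partials. partials: list of (frequency_hz, amplitude) pairs."""
    out = []
    for n in range(n_samples):
        t = n / SAMPLE_RATE
        out.append(sum(a * math.sin(2 * math.pi * f * t) for f, a in partials))
    return out

# Two partials an octave apart, as if two pixel rows were lit.
signal = additive([(220.0, 0.5), (440.0, 0.25)], 64)
```

Re-synthesis of an image or audio file then amounts to deriving one `(frequency, amplitude)` pair per canvas row from the source material and summing them the same way.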
Granular synthesis is a sound synthesis technique that operates on the microsound time scale; it is based on the same principle as sampling.
Fragment's secondary sound synthesis method is granular; the grains are sourced from audio samples. This method is only available with the audio server and provides both asynchronous and synchronous granular synthesis.
Just like with additive synthesis, re-synthesis can be done with granular synthesis, and most granular parameters can be manipulated by the user.
The combination of granular synthesis and additive synthesis provides powerful sound capabilities.
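A single grain is just a short slice of a source sample shaped by a window to avoid clicks at its edges. A minimal sketch using a Hann window (the function and parameter names here are illustrative, not Fragment's engine code):

```python
import math

def hann(n, length):
    """Hann window: 0 at both edges, 1 in the middle."""
    return 0.5 - 0.5 * math.cos(2 * math.pi * n / (length - 1))

def make_grain(sample, start, length):
    """Extract `length` samples from `sample` at `start`, shaped by the window."""
    return [sample[start + n] * hann(n, length) for n in range(length)]

# A sine wave stands in for an audio sample loaded by the audio server.
source = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(2048)]
grain = make_grain(source, 512, 256)
```

A granular voice then plays many such grains, either at regular intervals (synchronous) or scattered in time (asynchronous), with the grain position, size and density as the main user-controllable parameters.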
Subtractive synthesis starts from harmonically rich waveforms which are then filtered.
Subtractive synthesis is only available with the audio server; it is somewhat slow, and only one low-pass filter (Moog type) is implemented.
There are three types of band-limited (alias-free) waveforms: sawtooth, square and triangle.
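"Band-limited" means the waveform contains no harmonics above the Nyquist frequency, which is what prevents aliasing. One textbook way to build such a waveform is to sum only the harmonics that fit below Nyquist; this sketch shows a band-limited sawtooth built that way (it is a conceptual construction, not Fragment's actual implementation):

```python
import math

SAMPLE_RATE = 44100

def bl_sawtooth(freq, n_samples):
    """Band-limited sawtooth: sum harmonics 1/k up to the Nyquist limit."""
    nyquist = SAMPLE_RATE / 2
    n_harmonics = int(nyquist // freq)  # highest harmonic below Nyquist
    out = []
    for n in range(n_samples):
        t = n / SAMPLE_RATE
        out.append((2 / math.pi) * sum(
            math.sin(2 * math.pi * freq * k * t) / k
            for k in range(1, n_harmonics + 1)))
    return out

wave = bl_sawtooth(440.0, 32)
```

A naive sawtooth computed directly from the phase would contain harmonics beyond Nyquist, which fold back as audible aliasing; truncating the harmonic series is the simplest way to avoid this.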
Phase modulation (PM) is a means of generating sounds by modulating the phase of one oscillator (the carrier) with another oscillator (the modulator); it is very similar to frequency modulation (FM).
PM synthesis in Fragment works by giving an oscillator index (based on the image height) to a pixel; this oscillator will be used as a modulator.
PM synthesis uses a high-quality low-pass filter (Moog type).
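The carrier/modulator relationship can be sketched in a few lines: the modulator's output is added to the carrier's phase, and a modulation index scales the depth. All values below are illustrative; this is not Fragment's engine code, and the final Moog-type filtering stage is omitted.

```python
import math

SAMPLE_RATE = 44100

def pm_osc(carrier_freq, mod_freq, mod_index, n_samples):
    """Phase modulation: the modulator offsets the carrier's phase."""
    out = []
    for n in range(n_samples):
        t = n / SAMPLE_RATE
        modulator = math.sin(2 * math.pi * mod_freq * t)
        out.append(math.sin(2 * math.pi * carrier_freq * t
                            + mod_index * modulator))
    return out

tone = pm_osc(440.0, 110.0, 2.0, 64)
```

Raising the modulation index adds sidebands around the carrier, producing the bright, metallic timbres PM and FM are known for.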
Fragment can also act as a regular sampler through the granular synthesis method.