GAMUT

Visual Stuff

After finishing the audio analysis part, I started learning Jitter and trying to create visuals for my project.

 

First, I started with a single dot that changes size according to the amplitude.

[Screenshot: a single dot scaled by the amplitude]
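
Here is a rough Python sketch of the same idea (my own illustration, not the actual Max patch): measure the amplitude of each incoming block of samples and map it onto the dot's radius.

```python
import numpy as np

def block_amplitude(block):
    """RMS amplitude of one block of audio samples, in 0..1."""
    return float(np.sqrt(np.mean(np.square(block))))

def dot_radius(block, min_r=2.0, max_r=60.0):
    """Map the block's amplitude linearly onto a radius in pixels."""
    return min_r + (max_r - min_r) * block_amplitude(block)

loud = np.sin(np.linspace(0, 20 * np.pi, 512))   # full-scale sine block
quiet = 0.1 * loud                               # same sound, much quieter
print(dot_radius(loud), dot_radius(quiet))       # big dot, small dot
```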

 

Then I drew the dot as a sphere using Max's OpenGL features.

[Screenshot: the dot drawn as an OpenGL sphere]

 

After that I increased the number of objects to 25 to represent different ranges of frequencies. Each object's position on the Y axis represents the amplitude value of its band.

[Screenshot: 25 objects representing frequency bands]
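
The mapping behind this step is easy to sketch in Python (again my own illustration, not the patch): take an FFT of each block, split the spectrum into 25 bands, and use each band's average magnitude as one object's height.

```python
import numpy as np

N_BANDS = 25

def band_heights(block):
    """One Y-axis height per object: mean FFT magnitude in each of 25 bands."""
    spectrum = np.abs(np.fft.rfft(block * np.hanning(len(block))))
    bands = np.array_split(spectrum, N_BANDS)   # 25 contiguous frequency ranges
    return [float(np.mean(b)) for b in bands]

sr = 44100
block = np.sin(2 * np.pi * 1000 * np.arange(1024) / sr)   # a 1 kHz test tone
heights = band_heights(block)
print(max(range(N_BANDS), key=heights.__getitem__))       # index of loudest band
```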

 

After that I found a patch that creates particles. Right now I am tinkering with it to understand how particle systems work in Max so I can bring them into my project.

[Screenshot: the particle system patch]
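
The core idea of a particle system is simple, and a toy version in Python (my own sketch, standing in for the Max patch) shows it: each particle has a position, a velocity, and a lifetime, and every frame it is moved, aged, and respawned when it expires.

```python
import random

class Particle:
    def __init__(self):
        self.respawn()

    def respawn(self):
        self.x, self.y = 0.0, 0.0               # emit from the origin
        self.vx = random.uniform(-1.0, 1.0)
        self.vy = random.uniform(0.5, 2.0)
        self.life = random.uniform(1.0, 3.0)    # seconds remaining

    def step(self, dt):
        self.x += self.vx * dt
        self.y += self.vy * dt
        self.vy -= 9.8 * dt                     # gravity pulls particles down
        self.life -= dt
        if self.life <= 0:
            self.respawn()

particles = [Particle() for _ in range(100)]
for _ in range(60):                             # simulate one second at 60 fps
    for p in particles:
        p.step(1 / 60)
```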

Jitter and OpenGL

Jitter, a software package first made available in 2002 by Cycling ’74, enables the manipulation of multidimensional data in the context of the Max programming environment. An image can be conveniently represented by a multidimensional data matrix, and indeed Jitter has seen widespread adoption as a format for manipulating video, both in non-real-time production and improvisational contexts. However, the general nature of the Jitter architecture is well suited to specifying interrelationships among different types of media data, including audio, particle systems, and the geometrical representations of three-dimensional scenes. It can draw hardware-accelerated graphics using the OpenGL standard.
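
As a rough analogy in code (my own illustration, not Jitter's actual API), a frame of video in Jitter's usual format is a two-dimensional matrix whose cells each hold 4 planes of 8-bit char data (alpha, red, green, blue); numpy can stand in for it:

```python
import numpy as np

# A 240x320 frame with 4 planes of 8-bit char data, Jitter's usual
# ARGB layout (plane 0 = alpha, then red, green, blue).
frame = np.zeros((240, 320, 4), dtype=np.uint8)

frame[..., 0] = 255          # alpha plane: fully opaque
frame[:, :160, 1] = 255      # red plane: left half of the image
frame[120:, :, 3] = 128      # blue plane: bottom half, half intensity

# Whole-matrix operations touch every cell at once, the way a Jitter
# object such as jit.op applies one operator per plane.
inverted_rgb = 255 - frame[..., 1:4]
```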

OpenGL is a cross-platform standard for drawing two- and three-dimensional computer graphics, designed to provide a common interface for different types of graphics hardware. It is used in a variety of applications from video games to the Mac OS X Window Manager. It consists of two interdependent parts: a state machine with a complex, fixed internal structure for processing graphical data, and an API (application programming interface) in the C programming language for interacting with the state machine. The state machine defines a sequence of steps by which image textures can be applied to the faces of geometric primitives and rendered as seen by a virtual camera to create a final image. Many parameters that control the final image are defined, including a complex lighting model and extra processing steps that can be applied to the final rendering.
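
To make the fixed-function state machine concrete, here is a minimal sketch using the PyOpenGL bindings (my own example, assuming PyOpenGL and GLUT are installed): enable some state, position a virtual camera, and submit a sphere primitive to be lit and rendered.

```python
from OpenGL.GL import *
from OpenGL.GLU import *
from OpenGL.GLUT import *

def display():
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    glLoadIdentity()
    gluLookAt(0, 0, 5,  0, 0, 0,  0, 1, 0)   # the virtual camera
    glutSolidSphere(1.0, 32, 32)             # a geometric primitive
    glutSwapBuffers()

glutInit()
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH)
glutInitWindowSize(640, 480)
glutCreateWindow(b"sphere")

# Configure the state machine: depth testing plus the fixed lighting model.
glEnable(GL_DEPTH_TEST)
glEnable(GL_LIGHTING)
glEnable(GL_LIGHT0)
glLightfv(GL_LIGHT0, GL_POSITION, (2.0, 2.0, 2.0, 1.0))

glMatrixMode(GL_PROJECTION)
gluPerspective(45.0, 640 / 480, 0.1, 50.0)
glMatrixMode(GL_MODELVIEW)

glutDisplayFunc(display)
glutMainLoop()
```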

Some or all of the operations defined by the OpenGL state machine may be implemented by hardware GPUs (graphics processing units). Owing to the high degree of parallelism that can be applied to the task of drawing graphics, over the past decade affordable GPUs have increased in speed dramatically faster than CPUs. This has prompted software developers to move more and more drawing tasks to the GPU, and even some non-drawing tasks such as audio processing.

Spectrum Analysis

By analyzing the sequence of samples and the data contained therein, information can be extracted and plotted on a graph as a spectrum. A general definition of spectrum is “a measure of the distribution of signal energy as a function of frequency”; more simply stated, a spectrum is “the combination of the frequencies and their amplitudes that are present in a sound”. Strategies for spectrum analysis fall into two basic categories: static, which is like a snapshot of a spectrum, and time-varying, which is akin to a motion-picture film of a spectrum over time. The most common way to display a time-varying spectrum is to plot a sonogram or spectrogram.
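
Here is a small numpy sketch of both categories (my own example, not from the project): a static FFT snapshot of a whole signal, and a time-varying spectrogram built from short overlapping FFT frames.

```python
import numpy as np

sr = 44100                                   # sample rate in Hz
t = np.arange(sr) / sr                       # one second of sample times
x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

# Static spectrum: one snapshot of the whole signal.
spectrum = np.abs(np.fft.rfft(x))            # magnitude per frequency bin
freqs = np.fft.rfftfreq(len(x), d=1 / sr)    # each bin's frequency in Hz
print(f"strongest component: {freqs[spectrum.argmax()]:.1f} Hz")   # 440.0 Hz

# Time-varying spectrum: a spectrogram, frame by frame.
frame, hop = 1024, 512
frames = [x[i:i + frame] * np.hanning(frame)
          for i in range(0, len(x) - frame, hop)]
spectrogram = np.abs(np.fft.rfft(frames, axis=1))   # shape: (time, frequency)
```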

Sampling

The most important part of the digital audio representation process is sampling. Samples are measurements of the original signal taken at equal intervals of time and stored as binary numbers. On playback, a digital-to-analog converter (DAC) and a smoothing filter ‘connect the dots’ between the discrete samples to produce a waveform that looks and sounds like the original signal.
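
As a quick sketch of the idea (my own example), here is sampling in numpy: measure a sine wave at equal time intervals, store each measurement as a 16-bit binary integer, and crudely ‘connect the dots’ the way a DAC and smoothing filter would.

```python
import numpy as np

sr = 8000                                  # sampling rate: 8000 samples/second
t = np.arange(int(sr * 0.01)) / sr         # equal time intervals over 10 ms

signal = np.sin(2 * np.pi * 440 * t)       # the "original" 440 Hz waveform
samples = np.round(signal * 32767).astype(np.int16)   # 16-bit binary numbers

# A DAC plus a smoothing filter reconstructs the waveform; a crude
# stand-in here is linear interpolation between the discrete samples.
fine_t = np.linspace(0, t[-1], 10 * len(t))
reconstructed = np.interp(fine_t, t, samples / 32767.0)
```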