GAMUT

Sound Signal Basics

Just as an image can be described as a mixture of colors, a sound object can be described as a blend of elementary acoustic vibrations. Physically, sound is defined as changes in air pressure transmitted as vibrations through the air. These vibrations travel as compression waves, and so are characterized by the common properties of waves, including frequency, period, wavelength, and amplitude.

If the pressure of the wave varies in a repeating pattern, the sound has a periodic waveform. Sounds with no intelligible pattern are called noise. Most sounds fall somewhere between these two extremes. A cycle is one repetition of a periodic waveform, and the fundamental frequency is the number of these cycles that occur per second. In acoustical terminology it is usually measured in Hertz (Hz), which is equivalent to ‘cycles per second’. Frequency corresponds to the perceived pitch of the waveform. The period is the duration of one cycle, while the wavelength is the distance the wave travels over one cycle; both grow as frequency falls. Therefore, sounds with longer wavelengths have a lower frequency (i.e. pitch) than sounds with shorter wavelengths.
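
To make these relationships concrete, here is a small sketch in plain Python with NumPy (my own illustration, not part of any of the software discussed later). It generates a pure sine tone and prints how the period and wavelength shrink as the frequency rises:

    import numpy as np

    SAMPLE_RATE = 44100        # samples per second
    SPEED_OF_SOUND = 343.0     # metres per second in air at about 20 °C

    def sine_wave(frequency_hz, duration_s, amplitude=1.0):
        # One periodic waveform: a pure sine tone at the given frequency.
        t = np.linspace(0.0, duration_s, int(SAMPLE_RATE * duration_s), endpoint=False)
        return amplitude * np.sin(2.0 * np.pi * frequency_hz * t)

    for f in (110.0, 440.0, 1760.0):
        tone = sine_wave(f, 0.01)              # 10 ms of signal
        period_ms = 1000.0 / f                 # period T = 1/f
        wavelength_m = SPEED_OF_SOUND / f      # wavelength = v/f
        print(f"{f:6.0f} Hz: {len(tone)} samples, "
              f"period {period_ms:5.2f} ms, wavelength {wavelength_m:5.2f} m")

For example, a 440 Hz tone has a period of about 2.27 ms and, in air, a wavelength of about 0.78 m.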

Digital Visualizers

Digital media unites music and the visual arts because both are created out of bits of electronic data. In the 1990s, interest in visual music reemerged among artists of the digital generation. The widespread availability of powerful and user-friendly personal computers led to the development and resulting popularity of music visualizers, which generate animated imagery based on music. The Visualizations feature introduced in Windows Media Player in 1999 created metamorphosing designs as visual representations of any music played through it. Such features are now common in other digital media players like Winamp and iTunes.

Music Visualization

Synesthesia, or synaesthesia, is a condition in which the brain mixes up the senses, so that stimulating one sense triggers an involuntary experience in another. People who have synesthesia are called synesthetes. There are many types of synesthesia, but the one I will focus on is sound synesthesia: hearing sounds in response to seeing motion.

The phenomenon of synesthesia has influenced and inspired numerous artists and musicians to create cross-modal works that attempt to illustrate correspondences between the senses. Examples of such visual music include abstract art, color organs, experimental abstract film, and music visualizers. Digital technology now makes it possible to break information down into discrete packets of numbers and/or electrical signals, which enables computers to automate mappings of analogous structural characteristics from one type of media to another. Many modern music visualizers rely on this technique.

In most examples of visual music, the visuals are informed by and created in response to the music. However, the majority of these works were not created in real time. Visual music artists listened to a piece of music, then created images based on their personal interpretations of and responses to its expression. They had the opportunity to scrutinize the overall emotion of the music and to mark musical events of importance such as beats, rhythms, and phrase changes, and they could then reference this data in the creation of images. Non-real-time generation of visual music allows for a more careful and nuanced analysis of the technical and expressive elements of the music, and it results in more carefully considered visuals. However, it lacks the performance aspect of real-time visualization. The problem is that real-time digital visualizers are supplied with insufficient information to effectively inform their visuals.
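
As a sketch of what such an automated mapping could look like, the snippet below (my own hypothetical example in Python; the feature and scaling choices are assumptions, not taken from any particular visualizer) maps two structural characteristics of an audio buffer, loudness and spectral centroid, onto two visual parameters, brightness and hue:

    import numpy as np

    SAMPLE_RATE = 44100

    def audio_to_visual(buffer):
        # Map one buffer of samples to (brightness, hue), both in [0, 1].
        rms = np.sqrt(np.mean(buffer ** 2))                 # overall loudness
        spectrum = np.abs(np.fft.rfft(buffer))
        freqs = np.fft.rfftfreq(len(buffer), d=1.0 / SAMPLE_RATE)
        centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
        brightness = min(1.0, rms * 4.0)                    # louder -> brighter
        hue = min(1.0, centroid / 8000.0)                   # brighter timbre -> higher hue
        return brightness, hue

    # Example: a quiet low tone versus a loud high tone.
    t = np.linspace(0.0, 0.05, int(SAMPLE_RATE * 0.05), endpoint=False)
    print(audio_to_visual(0.2 * np.sin(2 * np.pi * 220 * t)))
    print(audio_to_visual(0.9 * np.sin(2 * np.pi * 3500 * t)))

In a real-time visualizer, a function like this would run once per incoming audio buffer, and its outputs would drive the renderer.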

How I came up with the idea for this project

I am an amateur musician and enjoy music. As a designer, I also enjoy visuals; for me, combining the two doubles the enjoyment. This is why I have always liked the idea of audio visualizers. However, as I started to dig deeper into them, I realized most are not great at visualizing audio convincingly. From a technical point of view they visualize the audio signal exactly, but when you listen to the music and watch the visualized output, most of the time it does not feel like the two are in harmony. The idea for this project came from this problem.

VSXu

VSXu is an OpenGL-based, modular programming environment whose main purpose is to visualize music and create graphic effects in real time. It aims to bridge the gap between programmer and artist, providing a creative and inspiring environment for all parties involved.

http://www.vsxu.com

Decision on the software

I started looking for software I could use for this project. I realized Max/MSP is a great platform for analyzing and working with audio signals, so I could use it for the audio part of my project and another application to create the visuals. Since I have a little experience with Processing, it was one of the candidates; however, a node-based application is easier for me, because otherwise I would have to write lines of code. Another option was Quartz Composer: there are lots of tutorials for it, the interface is easy to use, and it is also node based.
In the end, I decided to use Max/MSP/Jitter, because I think using a single piece of software will make life easier for me. It is always easier to learn one application than two.

Clavilux 2000

I stumbled upon this project by Jonas Friedemann Heuer. Clavilux 2000 is “an interactive instrument for generative music visualization” as Heuer calls it.

The visual concept of Clavilux 2000 is quite simple. For every note played on the keyboard, a new visual element appears in the form of a stripe, whose dimensions, position, and colour follow the way the particular key was struck: the length and vertical position show the velocity, while the stripe’s width reflects the duration of each note.
By mapping the color wheel onto the circle of fifths, the colours give the viewer and listener an impression of the harmonic relations. Notes belonging to one specific tonality always get colors from one specific area of the color wheel, so each key gets its own color scheme and “wrong” notes stand out in contrasting colors. The more tonalities a piece moves through, the more colorful the visualization will be.
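
Here is a minimal sketch of how such a circle-of-fifths colour mapping could work (plain Python, my own reading of the idea rather than Heuer’s actual code): walking the twelve pitch classes in steps of a fifth places harmonically related notes on neighbouring hues of the color wheel.

    import colorsys

    PITCH_NAMES = ["C", "C#", "D", "D#", "E", "F",
                   "F#", "G", "G#", "A", "A#", "B"]

    def pitch_class_to_hue(pitch_class):
        # Position on the circle of fifths (7 semitones = a perfect fifth),
        # scaled to a hue in [0, 1).
        position = (pitch_class * 7) % 12
        return position / 12.0

    for pc, name in enumerate(PITCH_NAMES):
        hue = pitch_class_to_hue(pc)
        r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
        print(f"{name:>2}: hue {hue:.2f} -> RGB ({r:.2f}, {g:.2f}, {b:.2f})")

With this mapping, C, G, and D land on adjacent hues, while a note foreign to the current key jumps to a distant part of the wheel, which is what makes “wrong” notes stand out.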

Read more here…

Target Audience

In the era we live in, the music business is not just about the music; it is a combination of visuals and sound. Plenty of professional musicians pay huge amounts of money for special effects and stage shows in order to deliver a complete show and give spectators a great experience.

Professional musicians ranking high in the business can afford all of these elements in their shows, but that is usually not the case for musicians who play in small venues. Those people are the target group of this project: usually amateur or semi-professional musicians trying to keep up with the music business and grow their fan base.

This project is designed to make life easier for musicians who want to support their stage performances with quality visuals but cannot afford expensive stage gear.

Concept

My project is about transcribing live music into visuals. When the project is ready, we will see a visual interpretation of a song as it is being played live. In this interpretation, it will be possible to see the sound coming out of each instrument as abstract visuals. Just as each instrument plays something different yet together they make a song, the generated visuals of the individual instruments will combine to create another visual element. I have not come up with a name yet, and I am currently researching the topic and trying to find suitable software to work with.
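
As a first structural sketch of this idea (plain Python; every name and the simple additive blend are placeholder assumptions, since the final software and visual style are not decided yet), each instrument’s signal could be rendered as its own layer, and the layers blended into a single composite frame:

    import numpy as np

    def layer_for_instrument(samples, size=64):
        # Render one instrument's buffer as a simple intensity layer;
        # a stand-in for the real abstract visuals.
        loudness = float(np.sqrt(np.mean(samples ** 2)))
        return np.full((size, size), loudness)

    def composite(layers):
        # Additively blend the per-instrument layers into one frame,
        # just as the instruments blend into one song.
        return np.clip(np.sum(layers, axis=0), 0.0, 1.0)

    # Example with two mock instrument buffers.
    rng = np.random.default_rng(0)
    guitar = 0.3 * rng.standard_normal(2048)
    drums = 0.6 * rng.standard_normal(2048)
    frame = composite([layer_for_instrument(guitar), layer_for_instrument(drums)])
    print(frame.shape, frame.max())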