Since my last post, many improvements have happened in my patch. Week by week, step by step, I reorganized the patch and also added new things to it. Here is how it looks now:
Above is my general patch as it stands now. It took a little time to arrange it and clear out the unnecessary parts, but it is much more organized. I have already explained how I worked with the pngs and how I changed one animal into another. Now I have created a patcher to control which animal appears on the screen and which sound belongs to that animal. The week after I had done this, I felt the need for something to prevent the same animal from appearing on the screen again and again. Since my animals were not in a queue, they came to the screen randomly, and it is boring to see the same animal every time. So we created a matrix system to achieve this. It doesn't work very neatly at the moment, but I will have a look at it later. I did all of this in the patcher called 'pickanimal', and it looks like the following:
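Since the patch itself is visual, here is a rough Python sketch of what the picker is meant to do: a random choice that never repeats the previous animal. The animal names are just placeholders, not the actual ones in the patch.

```python
import random

def pick_animal(animals, last=None):
    """Pick a random animal, but never the same one twice in a row.

    This mimics the intent of the 'pickanimal' patcher: the random
    choice simply excludes whatever was shown last time.
    """
    choices = [a for a in animals if a != last]
    return random.choice(choices)

# Example: draw ten animals in a row (placeholder names).
animals = ["seacow", "seahorse", "jellyfish", "octopus"]
last = None
sequence = []
for _ in range(10):
    last = pick_animal(animals, last)
    sequence.append(last)
```

The same idea could also be done by drawing from a shuffled queue, which would guarantee every animal appears before any repeats.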
After choosing the animal in that patcher, I had to define what that animal would do, which meant arranging the positioning of the mesh and setting up the system that plays the sound. I defined all of this in the patcher 'animal', which looks like the following:
One of the most challenging parts was putting the background behind everything. For that I created the patcher 'background', and with the help of Ekmel hoca 🙂 the 'jit.gl.videoplane' object solved the problem. Instead of putting just one ocean image behind, I put a few images with slight differences, and with a counter object it imports all the images continuously. This way I achieved a lighting effect in the ocean. It looks like this:
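The counter idea is simple enough to sketch in Python: a wrapping counter steps through the ocean frames over and over. The file names here are hypothetical, not the ones in my folder.

```python
def next_frame(counter, num_frames):
    """Advance a wrap-around counter, like Max's [counter] object
    cycling through the background textures."""
    return (counter + 1) % num_frames

# Cycling through 4 hypothetical ocean images: ocean0.png ... ocean3.png
frames = [f"ocean{i}.png" for i in range(4)]
order = []
c = 0
for _ in range(8):
    order.append(frames[c])   # this frame would be sent to jit.gl.videoplane
    c = next_frame(c, len(frames))
```

Because the images differ only slightly, looping through them fast enough reads as light shimmering on the water rather than a slideshow.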
And of course there had to be something to show the children where exactly their hands are. For this, I created a ball that follows the hand, so they will try to put the ball on the animal. Later I may change the ball image into a hand illustration, because the ball doesn't look very nice. So, the 'ball' patcher looks like this:
The synapse patcher I have already explained in my previous posts. It just tracks the left-hand position and sends it as coordinate numbers. It looks like the following:
The render patcher is where the magic happens 🙂 All the positioning and texturing information is gathered here; it creates a plane, transfers the pngs onto it as textures, and renders. It looks like this:
So far I had only worked with one png sequence, but what I need is to be able to change to another png sequence when the hand's coordinates are on the png. I was already able to receive the hand coordinates through Synapse with the Kinect, and I was also able to compare the Kinect coordinates with the png coordinates. If the coordinates are the same, I receive a '1' value. So I should use this value to trigger the second png sequence and activate the sound that gives information about the animal. So far I have only managed the changing part; I will look at the sound part next week.
Below is how my pngreader patcher looked with one png sequence:
And here is how it looks with two png sequences:
To work with two png sequences, all of their frames and the Max/MSP file have to be in the same folder. The number object connected to the gate defines whether the first or the second sequence will play. With the 'receive compareStatus' object, I receive the comparison of the Kinect coordinates and the png sequence's coordinates; it can only be 0 or 1. If it is 0, the hand is not on the png, and adding 1 to it gives the value 1, which plays the first png sequence through the gate. If the hand is on the png, compareStatus gives a 1, and adding 1 to it gives the value 2, which plays the second png sequence through the gate.
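Written out as plain logic, the routing is just an increment, the same as the [+ 1] object feeding the gate's left inlet:

```python
def route_sequence(compare_status):
    """Turn compareStatus (0 = hand off the png, 1 = hand on the png)
    into a gate outlet number: 1 plays the first png sequence,
    2 plays the second."""
    return compare_status + 1

route_sequence(0)  # → 1 (first sequence)
route_sequence(1)  # → 2 (second sequence)
```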
The only problem with this system is that I can only display the second png sequence while the hand is on the png. However, I need it to stay on the second sequence after the hand has been on the png once, so I need some kind of 'lock' to help me.
A 'sel' object and another 'gate' helped me solve this problem. Below is how it looks:
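In code terms, the sel-plus-gate combination behaves like a one-way latch. This is only a sketch of the behaviour, not the actual patch:

```python
class SequenceLock:
    """One-way latch, like [sel 1] closing a second [gate]: once
    compareStatus has been 1 even once, stay on sequence 2 for good."""

    def __init__(self):
        self.locked = False

    def route(self, compare_status):
        if compare_status == 1:
            self.locked = True          # the 'lock' snaps shut
        return 2 if self.locked else 1  # sequence to play

# The hand touches the png once (the lone 1); afterwards the second
# sequence keeps playing even when the hand moves away.
lock = SequenceLock()
outs = [lock.route(v) for v in [0, 0, 1, 0, 0]]
```

Resetting the lock (e.g. when a new animal is picked) would just mean setting `locked` back to `False`, which in the patch would be a message to reopen the gate.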
Continuing with the Max/MSP part of my project, my next goal was creating a loop with specific positions and timings for my png sequence. I was already able to move the png sequence where I wanted on the plane, but this time I had to work in a more organized way, otherwise it was getting complicated. First, I decided on the steps:
– stay in the initial position (-0.8 , 0) for 3 seconds,
– then move to point(0,0) in 3 seconds,
– stay there for 2 seconds,
– move to point (0, 0.3) in 5 seconds,
– after 4 seconds, flip in 1 second,
– go to the beginning.
To create these steps, I split my events into patcher objects to organize them. This is how it looks:
In p event1, I just defined the initial position of the png. With the delay object, it stays at (-0.8, 0) for 3 seconds. 's trigger1' sends a trigger message without patch cords. Below is how it looks:
In p event2, 'r trigger1' receives the trigger message from the previous event and starts the second event, which moves the png to (0,0) in 3 seconds. The 'sel' object picks out the 0 value, so when the png reaches (0,0), it stays there for 2 seconds and then activates 's trigger2' to start the last event.
In p event3 below, the png moves from (0,0) to (0, 0.3) in 5 seconds. After a 4-second delay, it flips in 1 second: in the message I send to the line object and on to 'prepend scale', changing the second value to a negative value flips its direction. Finally, sending this to 's trigger3' finishes the cycle, and adding an 'r trigger3' at the beginning of the first event turns the whole thing into a loop.
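The whole timed cycle can be sketched as a single function mapping time to position. This is just the timing logic of the three events, with the [line] ramps written as linear interpolation; the flip via 'prepend scale' is left out here since it changes scale, not position.

```python
def lerp(a, b, t):
    """Linear interpolation, like Max's [line] object ramping a to b."""
    return a + (b - a) * t

def position_at(t):
    """Position of the png at time t (seconds) within one loop cycle.

    Mirrors the event patchers: hold at (-0.8, 0) for 3 s, ramp to
    (0, 0) over 3 s, hold 2 s, then ramp to (0, 0.3) over 5 s.
    """
    t = t % 13.0                        # total cycle: 3 + 3 + 2 + 5 s
    if t < 3.0:                         # event 1: initial hold
        return (-0.8, 0.0)
    if t < 6.0:                         # event 2: move to (0, 0)
        u = (t - 3.0) / 3.0
        return (lerp(-0.8, 0.0, u), 0.0)
    if t < 8.0:                         # event 2: hold at (0, 0)
        return (0.0, 0.0)
    u = (t - 8.0) / 5.0                 # event 3: move up to (0, 0.3)
    return (0.0, lerp(0.0, 0.3, u))
```

The modulo at the top plays the role of 'r trigger3' restarting event 1: once the 13-second cycle ends, the position wraps back to the start.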
With this exercise, which I tried with only one animation, I can now add more events and move my other sea animals the way I want by changing the coordinate values.
For the last two or three weeks I was working on my animations. After designing and coloring the characters, I had to study the movements of animals such as the sea cow, seahorse, jellyfish, etc. At first I started creating my animation by drawing frames in Photoshop, but later I moved on to Adobe Flash, because with the onion-skin feature it was easier to see and control what I draw. I finished 6 animations last week and started coloring. They don't have a finished, neat look at the moment, but I think I can clean up and correct some parts since I have their main keyframes. That's why I decided to move on with Max/MSP, because I think it is what will take the most time. This week I will try to get one of my png sequences working in Max/MSP and try to change it into another sequence using the coordinates from the Kinect camera.
So here are the animations:
This time what I tried to do is control the png sequence and move it where I want. For now this is a manual process; later I will try to turn it into a loop. I just saved some random files as pngs in Photoshop and used them as examples. I put them in the same folder as the Max file and named them properly, then went to the import movie part and picked one of them. In the PngReader part I needed to type how many frames it had. This part was from last week. It looks like this:
What I added is the position values of the pngs. In the positioning patcher (p positioning), I moved X from -1 to 0.8 and Y from -1 to 0.8, so my pngs traveled from the bottom left corner to the top right corner. To do this, I used the 'line' object, which generates ramps and line segments from one value to another within a specified amount of time. In other words, we can tell the line object 'move this from here to there in this amount of time'. In my patch I changed my X and Y coordinates over 1000 milliseconds. After the line come the X and Y coordinates in flonum objects. Then I had to use the 'pak' object to pack the X and Y coordinates together, and I connected this to a 'prepend position' object. As always, I had to connect this to the jit.gl.gridshape object so that it works.
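A rough Python sketch of what [line] does, under the assumption that it outputs an intermediate value at a fixed grain interval (20 ms is just an illustrative default, not necessarily what the patch uses):

```python
def line_ramp(start, end, duration_ms, grain_ms=20):
    """Emit a list of intermediate values from start to end over
    duration_ms, like Max's [line] object; grain_ms is the interval
    between output values."""
    steps = max(1, duration_ms // grain_ms)
    return [start + (end - start) * i / steps for i in range(steps + 1)]

# Ramp X from -1 to 0.8 in 1000 ms, as in the positioning patcher.
xs = line_ramp(-1.0, 0.8, 1000)
```

In the patch, each of these intermediate values would land in the flonum, get packed with Y by 'pak', prefixed by 'prepend position', and sent to jit.gl.gridshape.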
This process is only active when the toggle is on. My next goal is to learn how to create a loop for this so that it can trigger itself and continue forever.
Another crucial thing for my project is knowing how to show an animation in Max/MSP, because the children will be playing with and interacting with the character. The first thing I could do was save the frames of my animation as pngs and import them as a texture on the plane I created in Max. For this plane I used jit.pwindow, a tool to draw pixels or graphics within a patcher; it displays jit.matrix data as well as OpenGL 3D graphics, and OpenGL was what I needed. Then I needed to put all the pngs in the same folder where my Max patch is located. I also needed to give the pngs proper names, matching the name used in Max; this way they work as a sequence.
To turn on the animation, in the 'texture' part I need to click on 'import movie' and pick the first png of the animation. In the 'PngReader' part, I need to turn the animation on with the toggle there, and I also need to type how many frames it has into the 'number' object. I need to repeat this every time I open this Max file.
To show the pngs I have separately, I was thinking I could make the background of the plane white and project the whole thing on a white wall; this way my animation would not have any dark frames. For this, I need to go to the 'Render' part and change the background color values there.
Another thing was the positioning of the pngs, which is why I put a 'positioning' patcher there. With this patch I can change exactly where the pngs sit on the screen and even scale them. I don't know if I will use the rotation option, but depending on the script of my game I may, which is why I put a flonum connected to a 'prepend rotate' object.
This positioning part will also be where I move my animation, time it, make it a loop, and compare its coordinates with the coordinates I get from the Kinect (which are already in my previous patch).
Here you can see how all three parts look together: