The premise is that understanding the interactions which take place between musicians in a jam will help researchers design better synthetic music. (Science Daily: Get down with the digital)
Ed: this will destroy whatever is left of music after AutoTune has demolished singing as a credible art form.
Your call. Check out the article.
To industrial designer Gustavo Ostos Rios, music improvisation is all about the emotion, but he and his two supervisors in the Department of Industrial Design at Eindhoven University of Technology in The Netherlands have now found a way to understand the complex interactions that take place between instrumentalists and singers during a jam, with the aim of using those insights to add greater emotional expression to a performance involving digital instruments.
- SD
Ed: what will humans do in a world with emotional synthetic music?
Something else, I imagine.
"In human-computer-interaction (HCI), we are more and more moving away from designing interactions for single users, towards designing interactions for networked groups of single users; we move from a 'one user-one technology' paradigm towards a 'multiple users-multiple technologies' paradigm," Mathias Funk, one of the co-authors explains in the International Journal of Arts and Technology. Examples of this shift include the familiar social media and social networking sites, like Twitter and Facebook, many people use on a daily basis as well as the likes of Wikipedia and other collaborative ventures, such as citizen science projects.
- SD
Ed: that rubbish about paradigms doesn't go one millimeter toward the fact that every innovative piece of music ever written came to us from a single person. It may take an orchestra to play it, but one person wrote it.
No point in arguing that.
In the Department of Industrial Design, Funk and his colleague Bart Hengeveld research novel musical instruments that translate this idea towards musical performance. Much emphasis is placed on the solo performance or endeavour in some of the creative arts, such as painting and sculpture, and the audience is usually detached from the art, viewing it and interacting with the "product" some time after the creative process has ended. Live music is different: there is usually more than one person involved in creating a performance, and the audience is present the whole time. As such, there is a shared emotional response that can, in the case of improvisational performance, take the music in new directions. More commonly, the changes in direction are driven by the musicians and how they interact with each other, but audience response can nudge them too.
- SD
In effect, they are trying to analyze 'liveness' but the only possible purpose for that is to program it into something else ... which isn't live at all but they want it to seem that way.
What do you know, they're selling something.
The team has developed a three-layer model that illuminates the relationship between band members and the audience as a system, in which emotions, expressivity and the generation of sound give shape to improvisation. The team has used this model to focus specifically on how individual emotional arousal can be used as group input to control their digital musical instrument, EMjam. The system, the team says, "builds on the construct that when paying attention at a concert it is possible to see performers' expressions; a guitarist playing a solo and reaching a peak at a certain point of it; a bass guitar player following with his face the lines played; a drummer making accents with the whole body; and in addition to this, the audience responding to the performance."
- SD
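The article gives no formalism for that three-layer model, so here is a loose, purely illustrative sketch of how the emotion, expressivity and sound-generation layers might feed into one another. Every class, field and weighting below is our own assumption, not anything published by the team.

from dataclasses import dataclass

@dataclass
class PerformerState:
    # Layer 1: emotion, e.g. a normalised arousal reading from a wearable sensor (0..1)
    arousal: float

    def expressivity(self) -> float:
        # Layer 2: expressivity as a simple, assumed function of emotional arousal
        return min(1.0, 1.2 * self.arousal)

def group_sound_level(band, audience_energy):
    # Layer 3: sound generation driven by the band as a group, nudged by the audience;
    # the 80/20 weighting is an arbitrary illustration of "musicians lead, audience nudges"
    band_level = sum(p.expressivity() for p in band) / len(band)
    return 0.8 * band_level + 0.2 * audience_energy

# Example: two performers at different arousal levels, plus a lively audience
print(group_sound_level([PerformerState(0.7), PerformerState(0.4)], audience_energy=0.9))  # ~0.71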
I bet you didn't see that coming (larfs).
Here's a little on how to use it.
The instrumentalists each receive a wristband with skin conductance sensors that can, in a sense, measure the musician's emotional state. The percussionist's wristband controls the rhythm generated by EMjam, the bass guitarist's controls harmony, and the guitarist's or keyboard player's controls melody. EMjam then uses the music software Ableton Live to add a parallel second layer of sound as a result of every individual input. The team adds that the same approach might be used to add expression to a light show or other visuals to accompany the music.
- SD
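For the curious, here is a rough sketch of what that sensor-to-layer mapping could look like in practice. The baseline and ceiling calibration values, the MIDI-style 0-127 range and the layer names are all our own illustrative assumptions; the article does not describe EMjam's actual implementation.

def arousal_to_cc(conductance_microsiemens, baseline=2.0, ceiling=20.0):
    # Normalise a raw skin-conductance reading to a 0-127 control value;
    # baseline and ceiling are made-up calibration points, not EMjam's.
    normalised = (conductance_microsiemens - baseline) / (ceiling - baseline)
    normalised = max(0.0, min(1.0, normalised))  # clamp to [0, 1]
    return round(normalised * 127)

# One wristband per musician, each steering a different musical dimension
# (the layer names are our labels for the "parallel second layer" idea).
PERFORMER_TARGETS = {
    "percussionist": "rhythm_layer",
    "bass_guitarist": "harmony_layer",
    "guitarist_or_keys": "melody_layer",
}

def build_control_frame(readings):
    # Turn a dict of raw readings (microsiemens) into per-layer control values
    return {PERFORMER_TARGETS[who]: arousal_to_cc(value)
            for who, value in readings.items()}

# Fake sensor snapshot; real values would stream from the wristbands.
snapshot = {"percussionist": 14.5, "bass_guitarist": 6.2, "guitarist_or_keys": 18.9}
print(build_control_frame(snapshot))
# -> {'rhythm_layer': 88, 'harmony_layer': 30, 'melody_layer': 119}

In a real rig the resulting values would presumably go out to Ableton Live as MIDI control changes or OSC messages rather than being printed, but the mapping idea is the same.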
We have one tiny question: when live human musicians are playing anyway, why should we need yet another piece of kit with blinking LEDs and nothing to say for itself, when this device only breaks even with what we do already? The only circumstance in which that has any value is if they're shitty musicians who shouldn't be trying to play in the first place.