NIME 2011: Oslo, Norway
- 11th International Conference on New Interfaces for Musical Expression, NIME 2011, Oslo, Norway, May 30 - June 1, 2011. nime.org 2011
- Matthew Montag, Stefan Sullivan, Scott Dickey, Colby Leider:
A Low-Cost, Low-Latency Multi-Touch Table with Haptic Feedback for Musical Applications. 8-13 - Greg Shear, Matthew Wright:
The Electromagnetically Sustained Rhodes Piano. 14-17 - Laurel Pardue, Andrew Boch, Matt Boch, Christine Southworth, Alex Rigopulos:
Gamelan Elektrika: An Electronic Balinese Gamelan. 18-23 - Jeong-Seob Lee, Woon Seung Yeo:
Sonicstrument: A Musical Interface with Stereotypical Acoustic Transducers. 24-27 - Scott Smallwood:
Solar Sound Arts: Creating Instruments and Devices Powered by Photovoltaic Technologies. 28-31 - Dan Overholt:
The Overtone Fiddle: an Actuated Acoustic Instrument. 30-33 - Niklas Klügel, Marc René Frieß, Georg Groh, Florian Echtler:
An Approach to Collaborative Music Composition. 32-35 - Nicolas E. Gold, Roger B. Dannenberg:
A Reference Architecture and Score Representation for Popular Music Human-Computer Music Performance Systems. 36-39 - Mark A. Bokowiec:
V'OCT (Ritual): An Interactive Vocal Work for Bodycoder System and 8 Channel Spatialization. 40-43 - Florent Berthaut, Haruhiro Katayose, Hironori Wakama, Naoyuki Totani, Yuichi Sato:
First Person Shooters as Collaborative Multiprocess Instruments. 44-47 - Tilo Hähnel, Axel Berndt:
Studying Interdependencies in Music Performance: An Interactive Tool. 48-51 - Sinan Bökesoy, Patrick Adler:
1city1001vibrations: Development of an Interactive Sound Installation with Robotic Instrument Performance. 52-55 - Tim Murray Browne, Di Mainstone, Nick Bryan-Kinns, Mark D. Plumbley:
The Medium is the Message: Composing Instruments and Performing Mappings. 56-59 - Seunghun Kim, Luke K. Kim, Songhee Jeong, Woon Seung Yeo:
Clothesline as a Metaphor for a Musical Interface. 60-63 - Pietro Polotti, Maurizio Goina:
EGGS in Action. 64-67 - Berit Janssen:
A Reverberation Instrument Based on Perceptual Mapping. 68-71 - Lauren Hayes:
Vibrotactile Feedback-Assisted Performance. 72-75 - Daichi Ando:
Improving User-Interface of Interactive EC for Composition-Aid by means of Shopping Basket Procedure. 76-79 - Ryan McGee, Yuan-Yi Fan, Reza Ali:
BioRhythm: a Biologically-inspired Audio-Visual Installation. 80-83 - Jon Pigott:
Vibration, Volts and Sonic Art: A Practice and Theory of Electromechanical Sound. 84-87 - George Sioros, Carlos Guedes:
Automatic Rhythmic Performance in Max/MSP: the kin.rhythmicator. 88-91 - André Gonçalves:
Towards a Voltage-Controlled Computer Control and Interaction Beyond an Embedded System. 92-95 - Tae Hun Kim, Satoru Fukayama, Takuya Nishimoto, Shigeki Sagayama:
Polyhymnia: An Automatic Piano Performance System with Statistical Modeling of Polyphonic Expression and Musical Symbol Interpretation. 96-99 - Juan Pablo Carrascal, Sergi Jordà:
Multitouch Interface for Audio Mixing. 100-103 - Nate Derbinsky, Georg Essl:
Cognitive Architecture in Mobile Music Interactions. 104-107 - Benjamin D. Smith, Guy E. Garnett:
The Self-Supervising Machine. 108-111 - Aaron Albin, Sertan Sentürk, Akito van Troyer, Brian Blosser, Oliver Jan, Gil Weinberg:
Beatscape, a Mixed Virtual-Physical Environment for Musical Ensembles. 112-115 - Marco Fabiani, Gaël Dubus, Roberto Bresin:
MoodifierLive: Interactive and Collaborative Expressive Music Performance on Mobile Devices. 116-119 - Benjamin Schroeder, Marc Ainger, Richard E. Parent:
A Physically Based Sound Space for Procedural Agents. 120-123 - Francisco García, Leny Vinceslas, Josep Tubau, Esteban Maestre:
Acquisition and Study of Blowing Pressure Profiles in Recorder Playing. 124-127 - Anders Friberg, Anna Källblad:
Experiences from Video-Controlled Sound Installations. 128-131 - Nicolas D'Alessandro, Roberto Calderon, Stefanie Müller:
ROOM #81 - Agent-Based Instrument for Experiencing Architectural and Vocal Cues. 132-135 - Yasuo Kuhara, Daiki Kobayashi:
Kinetic Particles Synthesizer Using Multi-Touch Screen Interface of Mobile Devices. 136-137 - Chris Carlson, Eli Marschner, Hunter McCurry:
The Sound Flinger: A Haptic Spatializer. 138-139 - Ravi Kondapalli, Ben-Zhen Sung:
Daft Datum - An Interface for Producing Music Through Foot-based Interaction. 140-141 - Charles Martin, Chi-Hsia Lai:
Strike on Stage: a Percussion and Media Performance. 142-143 - Baptiste Caramiaux, Patrick Susini, Tommaso Bianco, Frédéric Bevilacqua, Olivier Houix, Norbert Schnell, Nicolas Misdariis:
Gestural Embodiment of Environmental Sounds: an Experimental Study. 144-148 - Sebastián Mealla C., Aleksander Väljamäe, Mathieu Bosi, Sergi Jordà:
Listening to Your Brain: Implicit Interaction in Collaborative Music Performances. 149-154 - Dan Newton, Mark T. Marshall:
Examining How Musicians Create Augmented Musical Instruments. 155-160 - Zachary Seldess, Toshiro Yamada:
Tahakum: A Multi-Purpose Audio Control Framework. 161-166 - Dawen Liang, Guangyu Xia, Roger B. Dannenberg:
A Framework for Coordination and Synchronization of Media. 167-172 - Edgar Berdahl, Wendy Ju:
Satellite CCRMA: A Musical Interaction and Sound Synthesis Platform. 173-178 - Nicholas J. Bryan, Ge Wang:
Two Turntables and a Mobile Phone. 179-184 - Nick Kruge, Ge Wang:
MadPad: A Crowdsourcing System for Audiovisual Sampling. 185-190 - Patrick O. Keefe, Georg Essl:
The Visual in Mobile Music Performance. 191-196 - Ge Wang, Jieun Oh, Tom Lieber:
Designing for the iPad: Magic Fiddle. 197-202 - Benjamin Knapp, Brennon Bortz:
MobileMuse: Integral Music Control Goes Mobile. 203-206 - Stephen David Beck, Chris Branton, Sharath Maddineni:
Tangible Performance Management of Grid-based Laptop Orchestras. 207-210 - Smilen Dimitrov, Stefania Serafin:
Audio Arduino - an ALSA (Advanced Linux Sound Architecture) Audio Driver for FTDI-based Arduinos. 211-216 - Seunghun Kim, Woon Seung Yeo:
Musical Control of a Pipe Based on Acoustic Resonance. 217-219 - Anne-Marie Skriver Hansen, Hans J. Anderson, Pirkko Raudaskoski:
Play Fluency in Music Improvisation Games for Novices. 220-223 - Izzi Ramkissoon:
The Bass Sleeve: A Real-time Multimedia Gestural Controller for Augmented Electric Bass Performance. 224-227 - Ajay Kapur, Michael Darling, Jim W. Murphy, Jordan Hochenbaum, Dimitri Diakopoulos, Trimpin:
The KarmetiK NotomotoN: A New Breed of Musical Robot for Teaching and Performance. 228-231 - Adrián Barenca, Giuseppe Torre:
The Manipuller: Strings Manipulation and Multi-Dimensional Force Sensing. 232-235 - Alain Crevoisier, Cécile Picard-Limpens:
Mapping Objects with the Surface Editor. 236-239 - Jordan Hochenbaum, Ajay Kapur:
Adding Z-Depth and Pressure Expressivity to Tangible Tabletop Surfaces. 240-243 - Andrew J. Milne, Anna Xambó, Robin C. Laney, David B. Sharp, Anthony Prechtl, Simon Holland:
Hex Player - A Virtual Musical Controller. 244-247 - Carl Haakon Waadeland:
Rhythm Performance from a Spectral Point of View. 248-251 - Josep M. Comajuncosas, Alex Barrachina, John O'Connell, Enric Guaus:
Nuvolet: 3D Gesture-driven Collaborative Audio Mosaicing. 252-255 - Erwin Schoonderwaldt, Alexander Refsum Jensenius:
Effective and Expressive Movements in a French-Canadian fiddler's Performance. 256-259 - Daniel Bisig, Jan C. Schacher, Martin Neukom:
Flowspace - A Hybrid Ecosystem. 260-263 - Marc H. Sosnick, William T. Hsu:
Implementing a Finite Difference-Based Real-time Sound Synthesizer using GPUs. 264-267 - Axel Tidemann:
An Artificial Intelligence Architecture for Musical Expressiveness that Learns by Imitation. 268-271 - Luke Dahl, Jorge Herrera, Carr Wilkerson:
TweetDreams: Making Music with the Audience and the World using Real-time Twitter Data. 272-275 - Lawrence Fyfe, Adam R. Tindale, Sheelagh Carpendale:
JunctionBox: A Toolkit for Creating Multi-touch Sound Control Interfaces. 276-279 - Andrew Johnston:
Beyond Evaluation: Linking Practice and Theory in New Musical Interface Design. 280-283 - Phillip Popp, Matthew Wright:
Intuitive Real-Time Control of Spectral Model Synthesis. 284-287 - Pablo Molina, Martín Haro, Sergi Jordà:
BeatJockey: A New Tool for Enhancing DJ Skills. 288-291 - Jan C. Schacher, Angela Stoecklin:
Traces - Body, Motion and Sound. 292-295 - Grace Leslie, Tim R. Mullen:
MoodMixer: EEG-based Collaborative Sonification. 296-299 - Ståle Andreas Skogstad, Yago de Quay, Alexander Refsum Jensenius:
OSC Implementation and Evaluation of the Xsens MVN Suit. 300-303 - Lonce Wyse, Norikazu Mitani, Suranga Nanayakkara:
The Effect of Visualizing Audio Targets in a Musical Listening and Performance Task. 304-307 - Adrian Freed, John MacCallum, Andrew Schmeder:
Composability for Musical Gesture Signal Processing using new OSC-based Object and Functional Programming Extensions to Max/MSP. 308-311 - Kristian Nymoen, Ståle Andreas Skogstad, Alexander Refsum Jensenius:
SoundSaber - A Motion Capture Instrument. 312-315 - Øyvind Brandtsegg, Sigurd Saue, Thom Johansen:
A Modulation Matrix for Complex Parameter Sets. 316-319 - Yu-Chung Tseng, Che-Wei Liu, Tzu-Heng Chi, Hui-Yu Wang:
Sound Low Fun. 320-321 - Edgar Berdahl, Chris Chafe:
Autonomous New Media Artefacts (AutoNMA). 322-323 - Min-Joon Yoo, Jin-Wook Beak, In-Kwon Lee:
Creating Musical Expression using Kinect. 324-325 - Staas de Jong:
Making Grains Tangible: Microtouch for Microsound. 326-328 - Baptiste Caramiaux, Frédéric Bevilacqua, Norbert Schnell:
Sound Selection by Gestures. 329-330 - Hernán Kerlleñevich, Manuel Camilo Eguia, Pablo Ernesto Riera:
An Open Source Interface based on Biological Neural Networks for Interactive Music Performance. 331-336 - Nicholas Gillian, Benjamin Knapp, Sile O'Modhrain:
Recognition Of Multivariate Temporal Musical Gestures Using N-Dimensional Dynamic Time Warping. 337-342 - Nicholas Gillian, Benjamin Knapp, Sile O'Modhrain:
A Machine Learning Toolbox For Musician Computer Interaction. 343-348 - Elena Naomi Jessop, Peter Alexander Torpey, Benjamin Bloomberg:
Music and Technology in Death and the Powers. 349-354 - Victor Zappi, Dario Mazzanti, Andrea Brogni, Darwin G. Caldwell:
Design and Evaluation of a Hybrid Reality Performance. 355-360 - Jérémie Garcia, Theophanis Tsandilas, Carlos Agón, Wendy E. Mackay:
InkSplorer: Exploring Musical Ideas on Paper and Computer. 361-366 - Pedro Lopez, Alfredo Ferreira, João Madeiras Pereira:
Battle of the DJs: an HCI Perspective of Traditional, Virtual, Hybrid and Multitouch DJing. 367-372 - Adnan Marquez-Borbon, Michael Gurevich, A. Cavan Fyans, Paul Stapleton:
Designing Digital Musical Interactions in Experimental Contexts. 373-376 - Jonathan Reus:
Crackle: A Dynamic Mobile Multitouch Topology for Exploratory Sound Interaction. 377-380 - Samuel Aaron, Alan F. Blackwell, Richard Hoadley, Tim Regan:
A Principled Approach to Developing New Languages for Live Coding. 381-386 - Jamie Bullock, Daniel Beattie, Jerome Turner:
Integra Live: a New Graphical User Interface for Live Electronic Music. 387-392 - Jung-Sim Roh, Yotam Mann, Adrian Freed, David Wessel:
Robust and Reliable Fabric, Piezoresistive Multitouch Sensing Surfaces for Musical Controllers. 393-398 - Mark T. Marshall, Marcelo M. Wanderley:
Examining the Effects of Embedded Vibrotactile Feedback on the Feel of a Digital Musical Instrument. 399-404 - Dimitri Diakopoulos, Ajay Kapur:
HIDUINO: A firmware for building driverless USB-MIDI devices using the Arduino microcontroller. 405-408 - Emmanuel Fléty, Come Maestracci:
Latency Improvement in Sensor Wireless Transmission Using IEEE 802.15.4. 409-412 - Jeff Snyder:
Snyderphonics Manta Controller, a Novel USB Touch-Controller. 413-416 - William T. Hsu:
On Movement, Structure and Abstraction in Generative Audiovisual Improvisation. 417-420 - Claudia R. Angel:
Creating Interactive Multimedia Works with Bio-data. 421-424 - Paula Ustarroz:
TresnaNet Musical Generation based on Network Protocols. 425-428 - Matti Luhtala, Tiina Kymäläinen, Johan Plomp:
Designing a Music Performance Space for Persons with Intellectual Learning Disabilities. 429-432 - Tom Ahola, Koray Tahiroglu, Teemu Ahmaniemi, Fabio Belloni, Ville Ranki:
Raja - A Multidisciplinary Artistic Performance. 433-436 - Emmanuelle Gallin, Marc Sirguy:
Eobody3: a Ready-to-use Pre-mapped & Multi-protocol Sensor Interface. 437-440 - Rasmus Bååth, Thomas Strandberg, Christian Balkenius:
Eye Tapping: How to Beat Out an Accurate Rhythm using Eye Movements. 441-444 - Eric Rosenbaum:
MelodyMorph: A Reconfigurable Musical Instrument. 445-447 - Karmen Franinovic:
The Flo)(ps: Negotiating Between Habitual and Explorative Gestures. 448-452 - Margaret Schedel, Phoenix Perry, Rebecca Fiebrink:
Wekinating 000000Swan: Using Machine Learning to Create and Control Complex Artistic Systems. 453-456 - Carles Fernandes Julià, Daniel Gallardo, Sergi Jordà:
MTCF: A Framework for Designing and Coding Musical Tabletop Applications Directly in Pure Data. 457-460 - David Pirrò, Gerhard Eckel:
Physical Modelling Enabling Enaction: an Example. 461-464 - Thomas Mitchell, Imogen Heap:
SoundGrasp: A Gestural Interface for the Performance of Live Music. 465-468 - Tim R. Mullen, Richard Warp, Adam Jansch:
Minding the (Transatlantic) Gap: An Internet-Enabled Acoustic Brain-Computer Music Interface. 469-472 - Stefano Papetti, Marco Civolani, Federico Fontana:
Rhythm'n'Shoes: a Wearable Foot Tapping Interface with Audio-Tactile Feedback. 473-476 - Cumhur Erkut, Antti Jylhä, Reha Discioglu:
A Structured Design and Evaluation Model with Application to Rhythmic Interaction Displays. 477-480 - Marco Marchini, Panagiotis Papiotis, Alfonso Pérez, Esteban Maestre:
A Hair Ribbon Deflection Model for Low-intrusiveness Measurement of Bow Force in Violin Performance. 481-486 - Jon Forsyth, Aron P. Glennon, Juan Pablo Bello:
Random Access Remixing on the iPad. 487-490 - Erika Donald, Ben Duinker, Eliot Britton:
Designing the EP Trio: Instrument Identities, Control and Performance Practice in an Electronic Chamber Music Ensemble. 491-494 - A. Cavan Fyans, Michael Gurevich:
Perceptions of Skill in Performances with Acoustic and Electronic Instruments. 495-498 - Hiroki Nishino:
Cognitive Issues in Computer Music Programming. 499-502 - Roland Lamb, Andrew Robertson:
Seaboard: a New Piano Keyboard-related Interface Combining Discrete and Continuous Control. 503-506 - Gilbert Beyer, Max Meier:
Music Interfaces for Novice Users: Composing Music on a Public Display with Hand Gestures. 507-510 - Birgitta Cappelen, Anders-Petter Andersson:
Expanding the Role of the Instrument. 511-514 - Todor Todoroff:
Wireless Digital/Analog Sensors for Music and Dance Performances. 515-518 - Trond Engum:
Real-time Control and Creative Convolution. 519-522 - Andreas Bergsland:
phrases from Paul Lansky's Six Fantasies. 523-526 - Jan T. von Falkenstein:
Gliss: An Intuitive Sequencer for the iPhone and iPad. 527-528 - Jiffer Harriman, Locky Casey, Linden Melvin:
Quadrofeelia - A New Instrument for Sliding into Notes. 529-530 - Johnty Wang, Nicolas D'Alessandro, Sidney S. Fels, Bob Pritchard:
SQUEEZY: Extending a Multi-touch Screen with Force Sensing Objects for Controlling Articulatory Synthesis. 531-532 - Souhwan Choe, Kyogu Lee:
SWAF: Towards a Web Application Framework for Composition and Documentation of Soundscape. 533-534 - Norbert Schnell, Frédéric Bevilacqua, Nicolas H. Rasamimanana, Julien Blois, Fabrice Guédy, Emmanuel Fléty:
Playing the "MO" - Gestural Control and Re-Embodiment of Recorded Sound and Music. 535-536 - Bruno Zamborlin, Giorgio Partesana, Marco Liuni:
(LAND)MOVES. 537-538 - Bill Verplank, Francesco Georg:
Can Haptics Make New Music? - Fader and Plank Demos. 539-540