Extending the Musical Experience: From the Digital to the Physical and Back

Gil Weinberg
1. Introduction

It is widely perceived that the computer has enriched and advanced the art form of music. Digital technology brought new palettes of sounds, composition techniques, and production methods; innovations in digital compression and distribution changed music consumption and listening practices; for performers, novel musical instruments and controllers have been developed based on a variety of sensing, interaction, and mapping approaches. But after more than two decades of research in computer music, a fundamental question must be asked – has digital technology truly innovated and enriched the expressive, emotional, and creative core of the musical experience? It is not clear that the answer to this question is as positive as we music technologists would like to think.

During the last ten years, inspired and motivated by the prospect of innovating the core of the musical experience, I have explored a number of research directions in which meaningful use of digital technology bears the promise of revolutionising the medium. The research directions identified – gestural expression, collaborative networks, and constructionist learning – can lead to musical experiences that cannot be facilitated by traditional means. The first direction builds on the notion that through novel sensing and mapping techniques, new expressive musical gestures can be discovered that are not supported by current acoustic instruments.
Such gestures, unconstrained by the physical limitations of acoustic sound production, can provide infinite possibilities for expressive and creative musical experiences for novices as well as trained musicians. The second research direction utilises the digital network in an effort to create new collaborative experiences, allowing players to take an active role in determining and influencing not only their own musical output but also that of their co-performers. By using the network to interdependently share and control musical materials in a group, musicians can combine
their musical ideas into a constantly evolving collaborative musical activity that is novel and inspiring. The third research direction utilises constructionist learning, which bears the promise of revolutionising music education by providing hands-on access to programmable music making. Through interaction with physical computational objects, learners can construct personally meaningful musical artifacts that enhance and deepen their learning. While facilitating novel musical experiences that cannot be achieved by traditional means, the digital nature of these research directions often leads to flat and inanimate
speaker-generated sound, hampering the physical richness and visual expression of acoustic music. In my most recent work, therefore, I attempt to combine the benefits of digital computation and acoustic richness by exploring the concept of “robotic musicianship”. I define this concept as a combination of musical, perceptual, and social skills with the capacity to produce rich acoustic responses in a physical and visual manner. The robotic musicianship project aims to combine human creativity, emotion, and aesthetic judgment with computational capabilities, allowing human and robotic players to cooperate and build off one another’s ideas. A perceptual and improvisatory robot can best facilitate such interactions by bringing the computer into the physical world both acoustically and visually.

In this paper I will describe my projects, portraying a musical journey that was initiated by my interest in extending acoustic music with digital technology and reached its most recent period by investigating the enhancement of digital music through physical-acoustical means. Each station in this journey presents a different set of novel expressive and creative possibilities, along with a set of limitations and constraints imposed by technology.

2. Related Work, Goals, and Challenges

The field of New Interfaces for Musical Expression1 has received significant interest in recent years as researchers and musicians explore new sensing techniques, design approaches, mapping schemes, and sound generation methods to enhance and enrich musical expression. Research in this area can be categorised into two main areas – Imitated and Augmented Instruments, and Alternate Controllers. Building on the vast repertoire of familiar musical gestures, researchers have created imitated and augmented versions of traditional instruments such as percussion, strings, and woodwinds, among others. Alternative ways to play music have also been explored using various sensing and mapping techniques, such as non-contact instruments, wearable music, and alternate tangible controllers.
Most of these instruments, however, have been created for particular compositions (usually by the inventor) and have been effective only within specific aesthetic boundaries. Only a few controllers have shown durability and adaptability to multiple compositions in a variety of musical styles. Inspired by the tradition of great versatile acoustic instruments such as the piano, one of the main goals of my work was to develop controllers that are durable, versatile, and adaptable to multiple compositions, styles, and playing techniques.

The second area of related work is in the field of Interconnected Musical Networks (IMNs) – live performance systems that allow players to influence, share, and shape each other’s music in real-time. Such systems, whether they operate in one physical space or over a wide-area network, provide an interdependent framework that can lead to rich social and musical experiences that are not supported by traditional group play. The development of IMNs since the 1950s has been connected to the development of technological innovations – from John Cage’s early experimentations with interconnected transistor radios, through the use of networked PCs by groups like the League of Automatic Music Composers and the Hub, to the current proliferation in collaborative Internet music. These experiments, however, usually require advanced musical skills and understanding by players and audiences, and often lead to inaccessible “high art” musical outcomes. More recent collaborative musical installations, on the other hand, tend to simplify the musical experience for novices and are not geared to interdependently connect novices and professionals. To address this gap, my work attempts to explore novel interdependent musical interactions that would provide both novices and experts with rich and inspiring, yet intuitive and easy to follow, collaborative musical experiences.

The educational goal of my research is informed by related work in the field of constructionist learning. The constructionist approach emphasises the unique ability of digital technology to provide personal and configurable learning experiences to a wide variety of learners. The approach was conceived by Seymour Papert, who demonstrated how learning is most effective when students construct personally meaningful technological artifacts. Other researchers have elaborated on Papert’s ideas, showing how interaction with digital physical objects enhances children’s and adults’ learning. In music, however, little has been done to develop constructionist systems that attempt to connect figural, expressive musical experiences with formal aspects of theory and technique. In conventional music education systems, when music students are introduced to formal theory, certain important expressive aspects that came naturally in the early figural mode are temporarily hidden when learners try to superimpose analytical knowledge upon felt intuitions. My work attempts to utilise constructionist-learning methods to bridge the gap between the figural and formal learning modes through hands-on interaction with programmable musical controllers.

And lastly, I introduce the concept of robotic musicianship, taking up Rowe’s concept of machine musicianship. In this research area, scholars develop interactive systems that analyse, perform, and compose music with computers, based on theoretical foundations in fields such as music theory, computer music, music cognition, and artificial intelligence.
Several effective approaches for the design of such interactive musical systems have been explored over the years by researchers and musicians such as Dannenberg2, Cope3, Lewis4, Pachet5, and others. Such digital interactive systems, however, are limited by the inanimate and flat nature of their digital musical reproduction. Current research directions in musical robotics, on the other hand, focus mostly on sound production and rarely address social aspects such as listening, analysis, group improvisation, or collaboration. Both “robotic instruments”6 – mechanically automated devices that can be played by live musicians or triggered by pre-recorded sequences – and “anthropomorphic robots”7 – humanoid robots that attempt to imitate the actions of human musicians – function mostly as mechanical apparatuses that follow deterministic rules. The motivation for establishing the field of robotic musicianship is to develop robots that can produce rich acoustic sound and visual cues, while utilising computational power and techniques of machine musicianship that are not possible with traditional acoustic instruments.

2 Dannenberg 1984
3 Cope 1996
4 Lewis 2000
5 Pachet 2002
6 For example, see Dannenberg et al. 2005, Jordà 2002, Singer et al. 2004.
7 For example, see Takanishi et al. 1998, Toyota 2004.

3. The Projects

3.1 The Musical Playpen (1997-1998)

The Musical Playpen was the framework for my preliminary experimentation with gestural musical interaction in a constructionist-learning environment. The instrument was designed for toddlers and infants in an effort to explore whether very young children can participate in a meaningful, active musical experience. The environment allows young children to control two high-level musical aspects – contour and rhythmic stability – in an environment which is both familiar and fun: a 1.5-x-1.5-m playpen filled with 400 colourful plastic balls (Fig. 1). The playpen was designed to generate musical responses in correlation to children’s activity.

Fig. 1. A child playing in the Musical Playpen

Players’ movements around the playpen propagated from ball to ball and triggered four piezo-electric sensors that were hidden inside four balls, one in each corner of the playpen. The balls’ ability to transmit hits to neighboring balls, combined with the sensors’ high sensitivity, allowed almost any delicate movement around the playpen to be captured by at least one sensor. The analog signal was then digitised and sent to a Macintosh computer running Max/MSP, where it was mapped to musical output played from speakers below the playpen. Two opposite corners were mapped to control the melodic contour of an Indian raga, so that the more energetic the players’ movements in these corners were, the higher the played Indian raga pitches became. Children could therefore create melodic phrases and manipulate their curves by changing the intensity of their body movements in these corners. Players’ physical activity in the other two corners was mapped to an algorithm that controlled the tempo, rhythmic variation, and timbre of percussive sequences, in an effort to provide access to controlling rhythmic stability. The more energetic the players were near these corners, the more versatile and uneven the rhythmic values became. The tempo curve also fluctuated more sharply, as did the rate of timbral change. A number of observation sessions were conducted with the playpen at MIT and at the Boston Children’s Museum from 1998 to 1999.
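To make these corner mappings concrete, the following is a minimal sketch of the two schemes described above – melodic corners selecting raga pitches by movement energy, and rhythmic corners destabilising tempo and rhythm. It is an illustration only: the function names, the pitch set, and the parameter ranges are my assumptions, not the original Max/MSP patch.

    # Illustrative sketch of the Playpen's corner-to-music mappings.
    # Sensor energy is assumed normalised to 0..1; the pitch set and
    # parameter ranges below are assumptions, not the original patch.

    RAGA_PITCHES = [60, 61, 64, 66, 67, 68, 71, 72]  # ascending raga-like scale (MIDI notes)

    def melodic_pitch(corner_energy: float) -> int:
        """More energetic movement near a melodic corner -> higher raga pitch."""
        index = min(int(corner_energy * len(RAGA_PITCHES)), len(RAGA_PITCHES) - 1)
        return RAGA_PITCHES[index]

    def rhythmic_params(corner_energy: float) -> dict:
        """More energetic movement near a rhythmic corner -> less rhythmic
        stability: sharper tempo fluctuation, more uneven note values, and a
        higher rate of timbral change."""
        return {
            "tempo_jitter": 0.05 + 0.45 * corner_energy,  # sharper tempo curves
            "rhythmic_evenness": 1.0 - corner_energy,     # 1 = steady, 0 = uneven
            "timbre_change_rate": corner_energy,          # rate of timbral change
        }

    assert melodic_pitch(0.9) == 72  # a vigorous gesture yields a high pitch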
These observation sessions have shown a wide range of responses to the environment and the high-level musical control that it offered. For example, a 1-year-old infant started her session by triggering a sequence of notes as she was placed near one of the melodic curve corners. The infant looked in the direction of the sound source and tried to move her hand towards that corner, seemingly trying to repeat the music she heard. When she succeeded and another melodic phrase was played, she smiled, took one ball and tried to shake it, obviously without audible results. Frustrated, she then threw the ball towards a rhythmic corner, generating a short percussive sequence. She approached this corner while moving her torso back and forth, laughing when discovering that her movements controlled the music. After a short break the infant started to move her body again back and forth, gradually accelerating her movements, generating less and less stable percussive sequences. Only after repeating this behaviour in another corner did the infant seem to be ready to use more expressive, less restricted gestures all over the playpen.

These responses can indicate that with the right instruments and controls, young children can have access to spontaneous, expressive music-making as well as to more serious and thoughtful musical explorations. These findings encouraged me to develop a new set of instruments, which I entitled “The Squeezables”, in an effort to continue to develop models for high-level musical control, and to explore novel methods for networked group collaboration with older players, who can express and discuss their impressions of the experience.

3.2 The Squeezables (1998-1999)

In the Squeezables project, I attempted to add the concept of musical networks to my initial interest in gestural controllers and constructionist education. The goal of the project was to allow a group of players, novices and proficient musicians, to interdependently collaborate in constructing a meaningful musical composition using unconventional expressive gestures. The instrument consisted of six squeezable and retractable gel balls mounted on a small podium, which players could simultaneously squeeze and pull to manipulate a set of low- and high-level musical percepts. The combination of pulling and squeezing allowed players to utilise familiar and expressive gestures and to control multiple synchronous and continuous musical parameters. Several materials were tested, and for the final prototype soft gel balls were chosen, which proved to be robust and responsive, providing a sense of force-feedback control that derived from the elastic qualities of the gel. Buried inside each ball was a 0.5-x-2.0-cm plastic block covered with five pressure sensors, protected from the gel by an elastic membrane. The analog pressure values from these sensors were transmitted to a digitiser and converted to MIDI. Pulling gestures were sensed by six variable resistors installed under the table. An elastic band connected to each ball added opposing force to the pulling gesture, helping to retract the balls back onto the tabletop (Fig. 2).

Fig. 2. Three networked players play The Squeezables

In an effort to evaluate the high-level algorithms in the instrument, a number of straightforward mappings were designed to control relatively low-level musical parameters.
For example, one of the balls formed a one-to-one connection between squeezing and pulling gestures and the modulation rate and range of two low-frequency oscillators, respectively. For other balls, higher-level algorithms were developed to control percepts such as contour and stability. For example, pulling and squeezing gestures of the “Arpeggiator” ball controlled a combination of musical parameters including tempo, pitch commonality, dissonance, and rhythmic variation, so that the more the ball was squeezed and pulled, the more unstable an arpeggiated sequence became. To facilitate a coherent hierarchical interconnected interaction, the balls were divided into five accompaniment balls and one melody soloist. The five accompaniment balls provided players with autonomous control – no input from the other balls influenced their output. However, these balls’ output was mapped not only to the accompaniment parameters but also to transform the sound of the “melody” ball. Pulling the “melody” ball manipulated its own contour, so that the higher it was pulled, the higher the melodic curve became. The actual pitches, as well as the MIDI velocity, duration, and pan values, were determined by the level of pulling and squeezing of the accompaniment balls. This allowed the accompaniment balls to “shape” the character of the melody while maintaining a comprehensive scheme of interaction among themselves.

To experiment with these mappings I composed a short piece for three players. The piece, which was featured in Ars Electronica 2000, starts with a high level of instability and builds gradually towards a repetitive rhythmic peak. Special notation was created for the piece – two continuous graphs were assigned to each one of the six balls. One graph indicated the level of squeezing over time and the other indicated the level of pulling. The process of writing and performing the piece served as a useful tool for evaluating the mapping and sensing techniques used. In addition, discussions were held with novices and professionals who played the instrument. In general, children and novices were more inclined to prefer playing the balls that provided high-level control such as contour and stability. They often stated that these balls allowed them to be more expressive and less analytical. Proficient musicians, on the other hand, often found the high-level control somewhat frustrating, because it did not provide them with direct and precise access to specific desired parameters. Some experts complained that their personal interpretation of the high-level controllers for stability differed from the one implemented in designing the instrument. Both novices and professional players found the multiple-channel synchronous control expressive and challenging, and the pulling and squeezing gestures comfortable and intuitive. These gestures allowed delicate and easily learned control of many simultaneous parameters, which was especially compelling for children and novices. The organic and responsive nature of the balls was one of the features mentioned as contributing to this expressive experience. When asked about the interdependent networked connections, one melody ball player described her experience as a constant state of trying to expect the unexpected. To another player, the experience felt like controlling an entity with a life of its own.
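The hierarchical network described above can be sketched in a few lines of code. This is a schematic reconstruction under stated assumptions – the scale, value ranges, and function names are mine, not the original implementation; it only shows how the melody ball’s contour comes from its own pulling, while its pitches, velocity, duration, and pan are shaped by the accompaniment levels.

    # Schematic sketch of the Squeezables' melody/accompaniment network.
    # All names and ranges are assumptions; levels are normalised to 0..1.

    SCALE = [60, 62, 64, 65, 67, 69, 71, 72]  # illustrative pitch set (MIDI notes)

    def melody_note(melody_pull, accomp_squeeze, accomp_pull):
        """The melody ball's own pulling sets the contour register; the five
        accompaniment balls' levels pick the actual pitch, velocity,
        duration, and pan, 'shaping' the character of the melody."""
        register = int(melody_pull * (len(SCALE) - 1))   # contour from pulling
        avg_squeeze = sum(accomp_squeeze) / len(accomp_squeeze)
        avg_pull = sum(accomp_pull) / len(accomp_pull)
        offset = round(2 * avg_pull) - 1                 # accompaniment nudge: -1, 0, or +1
        pitch = SCALE[max(0, min(len(SCALE) - 1, register + offset))]
        return {
            "pitch": pitch,
            "velocity": int(40 + 87 * avg_squeeze),      # MIDI velocity 40..127
            "duration": 0.1 + 0.9 * (1.0 - avg_pull),    # seconds
            "pan": 2 * avg_squeeze - 1,                  # -1 (left) .. +1 (right)
        }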
In a manner similar to chamber music group interaction, body and facial gestures served an important role in coordinating the accompaniment players’ gestures and establishing an effective outcome. Such collaborations turned out to be especially compelling for children, who found the accompaniment balls conducive to social interaction, intuitive, and easy to play with. Some complaints were made, however, regarding the difficulty for individual accompaniment players to create their own musical phrases without being constantly subjected to interdependent transformation from the group. Other criticism addressed the lack of discrete input, which prevented players from generating and controlling specific musical events in detail.

3.3 The Musical Fireflies (1999-2000)

The Musical Fireflies project was designed to address some of the weaknesses in the Squeezables. In particular, it aimed to facilitate a more discrete and autonomous interaction that would allow for clearer interaction schemes and more focused constructionist-learning goals. The project attempted to provide players with expressive hands-on experiences that can be easily transformed into an analytical and formal exploration of music and mathematics. Through simple tapping gestures players could input rhythmic patterns and embellish them in real-time by adding multiple rhythmic layers. This functionality provided players with figural and formal familiarisation with musical concepts such as accents, beats, patterns, and timbre. During the multi-player interaction, a wireless network was formed between Fireflies, which allowed players to synchronise patterns and trade instrument sounds. This interactive group experience was designed to lead to deeper internalisation of advanced musical concepts such as the correlation between monorhythmic and polyrhythmic structures. Access to and manipulation of LOGO code for customising the controllers provided an introduction to MIDI programming and electronic sound. Advanced players could, therefore, deepen their learning experience by reprogramming the controllers and adjusting their functionality to match personal musical interests and abilities.

The 3D-printed Musical Firefly’s case was designed to be held by two hands while thumb-tapping two top-mounted buttons. Signals from the buttons were sent to an embedded “Cricket” Microchip PIC microprocessor. An infrared communication port allowed for communication with other Fireflies as well as for downloading LOGO-based application programs. The played rhythmic patterns were converted into musical messages using Cricket LOGO general MIDI commands and sent through the Cricket’s serial bus port to the MidiBoat – a small General MIDI circuit that supported up to 16 polyphonic channels, 128 melodic timbres, and 128 percussive timbres. The audio from the MidiBoat was then sent to the top-mounted speaker.

Fig. 3. Two players interact with each other with the Musical Fireflies

Interaction with the Musical Fireflies occurred in two distinct and sequential modes – the Single Player Mode, where players converted numerical patterns into rhythmical structures, and the Multi Player Mode, where collaboration with other players enhanced the basic rhythmic structures into polyrhythmic compositions (Fig. 3). In Single Player Mode, players could trigger and play with two default percussive sounds. The left button triggered accented notes and the right button triggered non-accented notes.
The patterns of accented and non-accented notes were recorded and, after two seconds of inactivity, played back in a loop using an adjustable default tempo. This activity provided players with a tangible manner of entering and listening to the rhythmical output of any numerical pattern they envisioned, leading to an immediate conceptualisation of the mathematical-rhythmical correlation. For example, Figure 4 depicts the playing of the numerical pattern 4 3 5 2 2:

Fig. 4. A pattern of accented and non-accented notes as played by the Musical Fireflies. • = accented note played by the left button; ° = non-accented note played by the right button

During playback, players could enter a second layer of accented and non-accented notes in real-time, using a different timbre. Each tap on a button triggered a note aloud and recorded its quantised position so that the pattern became part of the rhythmic loop. Pressing both buttons simultaneously at any point stopped the playback and allowed the player to enter a different pattern. In Multi Player Mode, when two loop-playing Fireflies “saw” each other (i.e., when their infrared signals were exchanged), they automatically synchronised their rhythmic patterns. (A similar interaction occurs when firefly insects synchronise their light pulses to communicate in the dark.) This activity provided participants with a richer, more complex rhythmical composition and allowed for an interactive introduction to polyrhythm. Figure 5 depicts how a 7-beat pattern played by one Firefly and a 4-beat pattern played by another diverge and converge as the patterns go in and out of phase every 28 beats, the least common multiple of the two pattern lengths:

Fig. 5. Two patterns (7/4 and 4/4) played by two Fireflies diverge and converge as they go in and out of phase every 28 beats

While the two Fireflies were synchronised, players could also initiate a “Timbre Trade” in which instrument sounds were exchanged between the devices. Pressing either the left or right button traded both layers of the accented or non-accented timbres, respectively. Each Firefly continued to play its original pattern using the newly received timbre. This interaction provided players with a higher level of musical abstraction as they separated the rhythmical aspect of the beat from the timbre in which it was played. Because the Fireflies network became richer after the interaction (i.e., each instrument contained four different timbres), the system encouraged collaborative play, where players were motivated to trade, collect, and play games by sending and receiving different timbres from their peers.

Observations of play sessions with the Musical Fireflies were conducted, followed by discussions with the players. Participants were asked about the expressive and the educational aspects of the session, as well as for their suggestions for improvements. A software version of the application was also prepared and tested. Both novices and experienced users found the concrete aspects of playing with a physical object compelling in comparison with the graphical user interface of the software version, mentioning the unmediated connection that was formed with the instrument as contributing to a personal connection with the music they created. Listening to the music from distinct physical sources also helped players to follow the interaction in a more coherent manner in comparison to listening to computer speakers.
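Two of the mechanics described above lend themselves to a compact sketch: converting a numerical pattern into an accent pattern, and computing when two synchronised loops realign. The notation symbols and function names are illustrative assumptions, but the arithmetic matches the 28-beat convergence of the 7- and 4-beat patterns.

    from math import lcm

    def render_pattern(groups):
        """Convert a numerical pattern (e.g. 4 3 5 2 2) into rhythm notation:
        each group opens with an accented beat ('X', left button) followed by
        non-accented beats ('o', right button)."""
        return " ".join("X" + "o" * (n - 1) for n in groups)

    def convergence_period(beats_a, beats_b):
        """Two synchronised loops drift in and out of phase, realigning after
        the least common multiple of their lengths."""
        return lcm(beats_a, beats_b)

    print(render_pattern([4, 3, 5, 2, 2]))  # Xooo Xoo Xoooo Xo Xo
    print(convergence_period(7, 4))         # 28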
The observations and interviews also led to the identification of points for improvement and future work. For example, it was clear that the focus on a specific constructionist learning activity hampered the open-ended expressive gestural interaction goal of the project. Moreover, the simple interaction using only two discrete buttons and the low-quality MIDI sounds led to a disappointing musical outcome, consisting mostly of monotonous interlocking clicks with no pitch, time-based rhythmic values, rests, or continuous transformation. The network interaction in multi-user mode, while effective for learning, did not provide a satisfactory collaborative experience. The restricted interconnectivity of the system, where discrete timbre-trading was the only interpersonal act, did not provide long-lasting rich play value and led players to lose interest in the interaction after a few trades. In addition, due to the limitations imposed by the line-of-sight infrared communication, the application only allowed for synchronisation and timbre trading between two players at a time. Many interviewees expressed their wish to interact and collaborate in larger groups comprising several simultaneous players.

3.4 The Beatbugs / “Nerve” (2001-2003)

For the Beatbug project, new hardware and software applications were developed in an effort to address the weaknesses identified in the Musical Fireflies. The binary buttons were replaced with a piezoelectric sensor that could sense hit strength, providing more expressive physical interaction through large full-arm drumming gestures. The single-user application was enhanced to record rhythmic values, rests, pitches, and amplitudes, allowing for more versatile and expressive musical input. Two new bend sensors were added to the design, allowing players to continuously modify and transform the recorded musical phrases using low- and high-level transformation algorithms (Fig. 6). In addition, the embedded MidiBoat was replaced with a high-quality software synthesiser, which significantly enhanced sound quality and versatility. Several important enhancements were also made to improve the multi-user collaborative interaction. The network was enhanced to support up to eight simultaneous Beatbugs, while coloured LEDs were installed in each Beatbug to help convey complex multi-user interactions in a visual manner. The interpersonal application was improved to provide longer-lasting collaborative interactions, allowing players to continuously develop each other’s music by bending and manipulating the Beatbug antennae. In order to support these improvements, the new Beatbugs communicated with each other through wires via a central computer system, which was titled the “Nerve Center”. To showcase the improved system, a musical composition was composed, titled “Nerve”, which was presented in workshops and concerts as part of Tod Machover’s Toy Symphony project.

In an effort to provide a familiar and fun interface for children and novices, the “Nerve” Beatbug was designed as a bug, having a speaker for a mouth, two bend sensors for antennae, and a velocity-sensitive piezoelectric sensor on its back. White and coloured LEDs mounted in its translucent shell provided visual feedback when it was hit or played through. An embedded Microchip PIC microcontroller was responsible for reading input from the sensors, controlling the LEDs, and communicating with the central system via a tail-like cable that carried MIDI, trigger, audio, and power.
The piezoelectric sensor measured when and how hard the bug was hit, while the two antennae allowed for subtle control over different aspects of the sound. Bending the antennae caused a proportional change in the colour of three LED clusters, and a ring of white LEDs flashed each time the bug was hit, providing additional visual feedback to the player and audience.

Fig. 6. Manipulating the Beatbug antennae

The embedded processor was responsible for operating the sensors and LEDs, while the central computer system controlled the actual musical interactions and behaviours. The “brain” of the system was written in the Max/MSP environment. Controlling all of the behaviour from the central computer made it easy to quickly experiment with a broad range of interaction schemes. Similarly, sound synthesis occurred on the central computer and was played through the corresponding Beatbug’s speaker, providing high-quality sound with an embedded, self-contained feel. For the software synthesiser, ‘Reason’ by Propellerhead was chosen, providing a broad palette of timbres and continuous control over multiple sound parameters. Up to eight Beatbugs could be connected to one central rack, which consisted mostly of standard off-the-shelf equipment including an audio interface, a MIDI interface, an 8-channel amplifier, and a mixer. The only non-standard device in the system was a custom patch box, which provided power to the bugs and converted the 10-pin connector in each cable to MIDI in, MIDI out, trigger, and audio in (Fig. 7).

Fig. 7. The Nerve Beatbug system’s schematics

Similarly to the Musical Fireflies, players interacted with the “Nerve” Beatbug in two distinct modes – Single Player Mode and Multi Player Mode. In Single Player Mode, each player could enter a short rhythmic pattern over a predefined metronome beat. The system automatically played back the recorded pattern in a loop through the corresponding Beatbug’s speaker. A quantisation algorithm pushed the notes towards the closest quarter, eighth, or triplet note. While the entered pattern was playing back, the player could manipulate the pattern by bending the two antennae. The left antenna continuously transformed the pitch and timbre using a variety of predefined scales and audio effects. The right antenna added rhythmic ornamentation to the pattern by controlling the values, length, accentuation, and feedback level of a delay line. The goal of these transformation algorithms was to allow players to modify the pattern but keep the feel of the original motif, supporting the “motif-and-variation” nature of the interaction. In Multi Player Mode players could form large-scale collaborative compositions by interdependently sharing and continuously developing each other’s motifs. Each Beatbug player could play a rhythmic motif that was then automatically sent through the stochastic computerised “Nerve Center” to another player in the group. The receiving player could decide whether to further develop the received motif (by continuously manipulating pitch, timbre, and rhythmic elements with the two bend-sensor antennae) or to keep the motif in his or her personal bug (by entering and sending a newly generated motif to a different random player in the group). The antennae transformations were recorded and layered in each cycle until a new pattern was entered.
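The quantisation step mentioned in Single Player Mode can be sketched as follows. This is an illustrative assumption, not the original Max/MSP patch: onset times measured in beats snap to the nearer of the eighth-note grid (which includes quarters) and the eighth-triplet grid.

    def quantise(onset_beats):
        """Push a recorded onset (in beats; 1.0 = one quarter note) to the
        closest position on the eighth grid (multiples of 1/2, which also
        covers quarters) or the eighth-triplet grid (multiples of 1/3)."""
        candidates = [round(onset_beats / step) * step for step in (1 / 2, 1 / 3)]
        return min(candidates, key=lambda point: abs(point - onset_beats))

    print(quantise(0.45))  # 0.5       -> snaps to the nearest eighth
    print(quantise(0.70))  # 0.666...  -> snaps to the nearest triplet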
The tension between the system’s stochastic routing scheme and the players’ improvised real-time decisions led to an interdependent, dynamic, and constantly evolving musical outcome. In a different section of Multi Player Mode, after all players entered their patterns, the system awaited a series of simultaneous hits by all players that led to random segmentation of the participants into sub-groups, allowing players to interdependently collaborate with a gradually growing number of co-players.9

Fig. 8. A Beatbug workshop at MIT Media Lab

During 2002-2003 the “Nerve” Beatbugs were featured in workshops and concerts in Berlin, Dublin, Glasgow, Boston, and New York, in collaboration with local symphonies and educational programs (Fig. 8). During each week-long workshop, children and orchestra members were introduced to the Beatbugs, explored the system, and rehearsed towards a public concert. The workshops also featured a new constructionist pedagogy developed in collaboration with Kevin Jennings. The pedagogy was designed to allow players to physically create and phrase rhythmic patterns and transform them by employing melodic, timbral, and rhythmic contours. The balance among aural, kinesthetic, and social modalities provided the children with a rich and highly immersive musical environment. A report by Project Zero from Harvard’s Education School said that “[the project] provided an overwhelmingly positive experience either from the musical, social and personal standpoint… the experience provided a good foundation on which to build one’s musicianship, social skills, self-confidence, and general learning dispositions focusing, listening, and practicing.”

9 See a video clip of the interaction as performed in concert at .

Several problems and areas for improvement became apparent as well. The musical mappings in Single Player Mode, although more versatile than in the Musical Fireflies, were still limited and unsatisfactory for many proficient musicians, who expressed their interest in creating and manipulating more advanced and non-quantised melodic and harmonic musical content. Novices too showed interest in controlling more sophisticated musical material even if they could not create it themselves. In multiplayer interactions, the velocity-sensing piezoelectric sensor and the large scale of the system encouraged players to use wide playing gestures and to point expressively to indicate their actions to each other and to the audiences (Fig. 9). However, while these large gestures brought elements of visual expression and excitement to the performance, they were not sensed by the central system and therefore did not have audible consequences.

Fig. 9. Large play gestures in a “Nerve” concert, Cambridge, MA

In terms of hardware, it was clear that the central system was too large and complex, and that the 18-unit rack was not easily portable. An additional hardware weakness was the durability of the bend-sensor antennae, which proved to be fragile, especially when large groups of energetic children experimented with the system during week-long workshops.

3.5 iltur (2003-2005)

The iltur project utilised an improved version of the Beatbug controllers, which were enhanced both in hardware and software in an effort to address the weaknesses observed in Nerve.
Hardware improvements included replacing the unreliable bend sensors with robust Hall effect sensors, installing 2D accelerometers to sense larger and more expressive arm gestures, and reducing the size and complexity of the system. The software was rewritten to address users’ requests to control and manipulate advanced melodic and harmonic content in a more expressive and gestural manner. The new application supported interaction between Beatbug players and proficient musicians, allowing Beatbug players to record live input from MIDI and acoustic instruments and to respond by transforming the recorded material gesturally, creating motif-and-variation call-and-response routines on the fly. The central computer host was programmed to analyse MIDI and audio signals and to allow Beatbug players to personalise the analysed material using a variety of transformation algorithms. Capturing and personalising richer musical content through expressive gestures gave Beatbug players the opportunity to create a more sophisticated musical outcome, while forming elaborate musical dialogs with their peers.

The main hardware improvement in the iltur Beatbugs was the addition of the 2D accelerometers. The accelerometers were used to sense tilting and shaking gestures, providing the central system with information regarding players’ large arm movements. Hardware improvements were also made in an effort to make the antennae more robust, utilising Hall effect sensors and magnets mounted under the antennae. This electromagnetic sensing method proved to be robust and effective, although it provided lower bending resolution in comparison with the original resistance-based bend sensors. Other hardware improvements addressed the system’s size and portability. As opposed to the complex 18-unit rack Nerve system, the new iltur system utilised a laptop instead of a desktop, a software mixer instead of a physical one, and no MIDI drum controller, as audio from the piezoelectric sensors was captured directly through an audio interface. The system, therefore, was housed in a small 6-unit rack (Fig. 10).

Fig. 10. The iltur Beatbug system’s schematics

Play gestures and interaction in iltur were modified to allow for recording, triggering, and manipulation of MIDI and audio in real-time. Recording was conducted by simultaneously bending both antennae while tapping the Beatbug. The system then segmented the recorded phrases, looking for sections of silence in the MIDI and/or audio buffers. The audio Beatbugs were programmed to detect note onsets, pitches, and amplitudes in real-time. The analysis algorithm was optimised for brass instruments and was used successfully with instruments such as trumpet, trombone, and saxophone. Onset identification and segmentation of MIDI was trivial due to the discrete nature of the MIDI protocol. After the system recorded and segmented the captured musical input, players could immediately trigger the recorded phrase by tapping the Beatbug again. Hit velocities were mapped to different segments in the phrase, allowing players to rearrange the recorded motifs. Two synthesis methods – Wavetable Synthesis and Granular Synthesis – were used for re-triggering audio. The Wavetable technique provided close resemblance to the sound of the original recording but suffered from noise artifacts during continuous transformations. Granular Synthesis, on the other hand, provided harsher sounds in comparison to the original recording but allowed for smoother continuous transformation.
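The silence-based segmentation described above might look roughly like the following sketch. The threshold, gap length, and names are hypothetical; the point is only the mechanism – splitting the recorded buffer wherever its amplitude envelope stays quiet long enough, so that hit velocities can later re-trigger the resulting segments.

    # Illustrative sketch (assumed thresholds and names) of iltur's
    # phrase segmentation over an amplitude envelope.

    def segment_phrases(amplitudes, frame_seconds,
                        silence_thresh=0.02, min_gap_seconds=0.3):
        """Return (start, end) frame indices of non-silent phrases. A phrase
        boundary is declared wherever the envelope stays below
        silence_thresh for at least min_gap_seconds."""
        min_gap = int(min_gap_seconds / frame_seconds)
        phrases, start, quiet = [], None, 0
        for i, amp in enumerate(amplitudes):
            if amp >= silence_thresh:
                if start is None:
                    start = i
                quiet = 0
            elif start is not None:
                quiet += 1
                if quiet >= min_gap:          # silence long enough: close the phrase
                    phrases.append((start, i - quiet + 1))
                    start, quiet = None, 0
        if start is not None:                 # close a trailing phrase
            phrases.append((start, len(amplitudes)))
        return phrases

    frames = [0.5] * 10 + [0.0] * 8 + [0.6] * 6   # toy envelope at 0.05 s/frame
    print(segment_phrases(frames, frame_seconds=0.05))  # [(0, 10), (18, 24)]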
A number of different mapping schemes were experimented with for antennae bending and accelerometer-based gestures. Some of these algorithms utilised direct mappings between continuous gestures and fundamental musical aspects such as pitch, volume and tempo. Other mapping approaches allowed for the manipulation of higher-level musical percepts such as melodic similarity or rhythmic density. Shaking gestures were most successful when mapped to control vibrato and tremolo effects, while antennae manipulations were effective in controlling pitch. When interacting with a MIDI instrument, Beatbug players could also trigger the recorded motif in inversion and retrograde by tapping the Beatbug while bending the left or right antennae, respectively (see the sketch at the end of this section). The audio Beatbugs allowed players to control transformations such as pitch bending, speed alteration, and filtration through a combination of bending, tilting, and hitting gestures. During group interaction, players could trade their motifs by simultaneously hitting the Beatbug while bending one of the antennae. Receiving players could then further transform the phrase and send it back to their peers. In comparison to the random involuntary routing scheme in Nerve, iltur players could trade their motifs only when simultaneously agreeing to synchronise their gestures.

Three jazz compositions were written for the iltur system and performed in cities such as Atlanta, San Diego, Miami, Vancouver, and Jerusalem. iltur 1 featured MIDI interaction, iltur 2 focused on audio transformation and manipulation, and iltur 3 introduced group interaction and motif trading. Voice manipulation experiments were also conducted, allowing Beatbug players to interact with a hip-hop vocalist.10

10 See videos at .

Fig. 11. iltur 3 audio Beatbug players interact with a brass section (left) and a hip-hop vocalist (right) in Jerusalem, Israel

Observations of and discussion with iltur players led to a number of findings regarding the improved Beatbug functionalities. For example, it was clear that the iltur Beatbugs were more effective than the Nerve Beatbugs in providing richer musical experiences for individuals through a larger set of expressive gestures and more complex melodic and harmonic transformations. The new application also led to more meaningful and versatile collaborations between novices and professional musicians. Both players and audiences perceived the new accelerometer-based gestures as intuitive, expressive, and visually compelling. However, the introduction of gesture combinations (such as hitting the Beatbug while bending the antenna) was problematic for novices and children, who found it physically and mentally challenging. Novices and children also found the higher-level transformation algorithms (such as musical density and stability) less intuitive to control and preferred the simple and predictable one-to-one mappings between gestures and low-level musical aspects. More proficient musicians, on the other hand, preferred to interact with the high-level musical operations, stating that these encouraged them to concentrate on the correlation between their actions and the musical output.

In general, the effectiveness of the experience was closely related to the musical and harmonic context of the compositions. Due to segmentation and audio stretching, in a harmonically structured composition it was difficult for players to improvise while following the harmonic progression. Many players, therefore, preferred free musical structures, stating that the open-ended experience posed fewer boundaries and allowed more creativity and expression.

Fig. 12. Interaction between two iltur 3 MIDI Beatbug players
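As referenced above, a minimal sketch of the two MIDI motif transformations follows; the (pitch, duration) phrase representation is an assumption made for illustration, not the system's internal format.

```python
# Sketch of the motif transformations iltur players triggered by tapping
# while bending the left (inversion) or right (retrograde) antenna.

def retrograde(motif):
    """Play the recorded motif backwards in time."""
    return list(reversed(motif))

def invert(motif):
    """Mirror every pitch around the motif's first note."""
    if not motif:
        return []
    axis = motif[0][0]
    return [(axis - (pitch - axis), dur) for pitch, dur in motif]

# A motif as (MIDI pitch, duration in beats) pairs: C4 E4 G4 E4.
motif = [(60, 1.0), (64, 0.5), (67, 0.5), (64, 1.0)]
print(retrograde(motif))  # -> [(64, 1.0), (67, 0.5), (64, 0.5), (60, 1.0)]
print(invert(motif))      # -> [(60, 1.0), (56, 0.5), (53, 0.5), (56, 1.0)]
```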
3.6 Haile (2004-2007)

The instruments and controllers discussed above explored different ways in which meaningful embodiment of technology can enhance the musical experience by facilitating new expressive gestures, networked group collaborations and constructionist learning. Although these projects provided satisfying results, the instruments were limited by the electronic reproduction and amplification of sound through speakers, which did not capture the richness of acoustic sound. My most recent project – an interactive robotic percussionist named Haile – addressed this limitation by utilising a mechanical apparatus that converts digital musical instructions into acoustic and physical generation of sound. Haile was developed in an effort to bring together the advantages of computational power with the expression and richness of creating acoustic sound using physical and visual gestures. The project aimed to combine computational capabilities that are not possible for humans with rich sound and visual gestures that cannot be reproduced by speakers, in an effort to facilitate new musical experiences, and new music, that cannot be conceived by acoustic or electronic means alone.

As part of the project, we developed a robotic percussionist that listened to and analysed live musical input in real-time and reacted by generating relevant, but at times surprising, acoustic responses. The project posed challenges in areas such as perception modeling, mechanics, and interaction design. In perception, the main challenge was to implement models for low- and high-level musical percepts, allowing the robot to develop a meaningful representation of the music it listened to. In mechanics, the challenge was to develop a dexterous robotic apparatus that would translate perceptually based performance algorithms into a rich acoustic and visually informative performance. In interaction design, our aim was to develop performance algorithms that would enable the robot to collaborate with human players in a meaningful and intuitive manner, using transformative and generative methods both sequentially and synchronously.

In order to support familiar interactions with human players, Haile's design is anthropomorphic, utilising two percussive arms that can move to different locations and strike with varying velocities (Fig. 13). The first prototype was designed to play a Native American Pow Wow drum – a multi-player instrument that supported the collaborative nature of the project. For pitch-oriented applications, the robot was later adjusted to play a one-octave xylophone. In order to match the aesthetics of these musical instruments, Haile was constructed from wood using a CNC cutting machine. Metal joints were designed to allow shoulder and elbow movement as well as leg adjustability for different instrument heights. While attempting to create an organic look for the robot, it was also important that the technology was not completely hidden, so that co-players could see and understand the robot's operation. The mechanical apparatus was therefore left uncovered, and LEDs were embedded on Haile's body, providing an additional representation of the mechanical actions.
Haile's right arm was designed to play fast notes, while the left arm was designed to produce larger and more visible motions that produce louder sounds. Both arms could adjust the sound of their strikes in two manners: different pitches were achieved by striking the instruments in different locations, and volume was adjusted by hitting with varying velocities. To move to different vertical positions, each arm employed a linear slide, a belt, a pulley system, and a potentiometer to provide feedback. Unlike robotic drumming systems that allow hits at only a few discrete locations, Haile's arms moved continuously over a distance of 10 inches (end-to-end movement time was 250 ms). The right arm's striking mechanism was loosely based on a piano hammer action and consisted of a solenoid-driven device and a return spring. The right arm struck at a maximum rate of 15 Hz, faster than the left arm's maximum rate of 11 Hz. However, the right arm did not generate a wide dynamic range or provide easily noticeable visual cues, which limited Haile's expression and interaction potential. The left arm was designed to address these shortcomings, using larger visual movements and a more powerful and sophisticated hitting mechanism.

Fig. 13. Haile, the perceptual robotic percussionist, listens to and interacts with a human player

The first phase of the project aimed at facilitating rhythmic collaboration between human drummers and Haile, addressing aspects such as rhythmic perception, improvisation, and interaction design. Perceptual models were developed for low- and high-level rhythmic percepts, from beat and density analysis to rhythmic stability and similarity perception. Some relatively low-level perceptual modules included beat analysis, where complex-domain onset detection was followed by autocorrelation to estimate tempo and phase, and density analysis, where we looked at the number of note onsets per time unit to represent the density of the rhythmic structure. Higher-level rhythmic analysis modules were also developed for percepts such as rhythmic stability, based on research by Desain et al.11, and rhythmic similarity, based on Tanguiane's survey12. The stability model calculated the relationship between pairs of adjacent note durations, rated according to their perceptual expectancy based on three main criteria: perfect integer relationships were favoured, ratios had inherent expectancies (e.g., 1:2 was favoured over 1:3, and 3:1 was favoured over 1:3), and durations of 0.6 seconds were preferred. The similarity rating was derived from Tanguiane's binary representation, where two rhythms are first quantised and then given a score based on the number of note onset overlaps and near-overlaps.

11 Desain et al. 2002
12 Tanguiane 1993
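To clarify how such ratings can work, the sketch below reconstructs the two higher-level percepts from the criteria given above. The criteria themselves (integer-ratio preference, asymmetric ratio expectancies, the 0.6-second ideal duration, and overlap counting on a quantised grid) come from the text; the specific weights and formulas are my own assumptions, not Haile's actual implementation.

```python
# Illustrative reconstruction of Haile's two higher-level rhythm percepts.
# The criteria follow the text; the numeric weights are assumptions.
from fractions import Fraction

IDEAL_DURATION = 0.6  # seconds, the preferred note duration (from the text)

def ratio_expectancy(a, b):
    """Score a pair of adjacent durations: simple integer ratios score
    high, and a long-short pair (3:1) scores above short-long (1:3)."""
    r = Fraction(a / b).limit_denominator(8)
    complexity = r.numerator + r.denominator  # 1:2 scores above 1:3
    order_bonus = 0.1 if a > b else 0.0       # favour long-short ordering
    return 1.0 / complexity + order_bonus

def stability(durations):
    """Mean pair expectancy plus a reward for durations near 0.6 s."""
    pair_score = sum(ratio_expectancy(a, b)
                     for a, b in zip(durations, durations[1:]))
    duration_score = sum(1.0 / (1.0 + abs(d - IDEAL_DURATION))
                         for d in durations)
    return (pair_score / max(len(durations) - 1, 1)
            + duration_score / len(durations))

def similarity(onsets_a, onsets_b, grid=0.125):
    """Tanguiane-style binary comparison: quantise both rhythms to a
    grid and count exact and near (one-step) onset overlaps."""
    qa = {round(t / grid) for t in onsets_a}
    qb = {round(t / grid) for t in onsets_b}
    exact = len(qa & qb)
    near = len({q for q in qa if q + 1 in qb or q - 1 in qb} - qb)
    return exact + 0.5 * near

# A metrically simple rhythm rates as more stable than an irregular one.
print(stability([0.6, 0.6, 0.3, 0.3]) > stability([0.61, 0.23, 0.4, 0.17]))  # True
```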
The main challenge in designing the rhythmic interaction with Haile was to implement the perceptual modules in a manner that would lead to an inspiring human-machine collaboration. The approach taken was based on a theory of interdependent group interaction in interconnected musical networks. At the core of this theory is a categorisation of collaborative musical interactions in networks of artificial and live musicians based on sequential and synchronous operations with centralised and decentralised control schemes. Based on this framework, six interaction modes were developed: Imitation, Stochastic Transformation, Perceptual Transformation, Beat Detection, Simple Accompaniment, and Perceptual Accompaniment. These interaction modes utilised different perceptual modules and were embedded in different combinations in interactive compositions and educational activities.

In the first mode, Imitation, Haile merely repeated what it heard based on its low-level onset, pitch, and amplitude perception modules. Players could play a rhythm, and after a couple of seconds of inactivity Haile imitated it in a sequential call-and-response manner. Haile used one of the arms to play lower pitches close to the drumhead centre and the other arm to play higher pitches close to the rim. In the second mode, Stochastic Transformation, Haile improvised in a call-and-response manner based on players' input. Here, the robot stochastically divided, multiplied, or skipped certain beats in the input rhythm, creating variations of users' rhythmic motifs while keeping their original feel. Different transformation coefficients were adjusted manually or automatically to control the level of similarity between humans' motifs and Haile's responses (sketched at the end of this section). In the Perceptual Transformation mode, Haile analysed the stability level of users' rhythms and responded by choosing and playing other rhythms that had similar levels of stability to the original input. In this mode Haile automatically responded after a specified phrase length.

Imitation, Stochastic Transformation, and Perceptual Transformation were all sequential interaction modes that formed decentralised call-and-response routines between human players and the robot. Beat Detection and Simple Accompaniment modes, on the other hand, allowed synchronous interaction where humans played simultaneously with Haile. In Beat Detection mode, Haile tracked the tempo and beat of the input rhythm using a complex-domain detection function and autocorrelation, which led to continuously refined estimates of tempo and phase. A simpler, yet effective, synchronous interaction mode was Simple Accompaniment, where Haile played pre-recorded MIDI files so that players could interact with it by playing their own rhythms or by modifying elements such as drumhead pressure to modulate and transform Haile's timbres in real-time. This synchronous centralised mode allowed composers to feature their structured compositions in a manner that was not susceptible to algorithmic transformation or significant user input. The Simple Accompaniment mode was also useful for sections of synchronised unisons where human players and Haile played together.

Perhaps the most advanced mode of interaction was the Perceptual Accompaniment mode, which combined synchronous, sequential, centralised and decentralised operations. Here, Haile played simultaneously with human players while listening to and analysing their input. It then created local call-and-response interactions with different players, based on its perceptual analysis. In this mode amplitude and density perceptual modules were utilised – while Haile played short looped sequences (captured during the Imitation and Stochastic Transformation modes), it also listened to and analysed the amplitude and density curves of human playing. It then modified its looped sequence based on the amplitude and density coefficients of the human players. When the rhythmic input from human players was dense, Haile played sparsely, providing only the strong beats and allowing humans to perform denser solos. When humans played sparsely, on the other hand, Haile improvised using dense rhythms that were based on stochastic and perceptual transformations. Haile also responded in direct relationship to the amplitude of human players, so that the louder humans played, the stronger Haile played to accommodate the human dynamics, and vice versa.13

13 See video excerpts of some of the interaction modes at .
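As promised above, here is a minimal sketch of the Stochastic Transformation mode, assuming rhythms are represented as lists of inter-onset intervals in beats; the probability weights behind the transformation coefficient are illustrative, not Haile's actual values.

```python
# Sketch of the Stochastic Transformation mode: each note of the input
# rhythm is randomly divided, multiplied, or skipped. The probability
# weights (the "transformation coefficient") are assumptions.
import random

def stochastic_transform(durations, coefficient=0.3, rng=random):
    """Return a variation of `durations` (inter-onset intervals in beats).
    `coefficient` sets how far the output may drift from the input:
    0.0 returns the rhythm unchanged, higher values transform more notes."""
    out = []
    for d in durations:
        if rng.random() >= coefficient:  # leave this note untouched
            out.append(d)
            continue
        op = rng.choice(("divide", "multiply", "skip"))
        if op == "divide":               # split one note into two
            out.extend([d / 2, d / 2])
        elif op == "multiply":           # double the note's length
            out.append(d * 2)
        elif out:                        # "skip": drop the onset, folding
            out[-1] += d                 # its time into the previous note
        else:
            out.append(d)                # nothing to fold into: keep it
    return out

random.seed(4)
print(stochastic_transform([1.0, 0.5, 0.5, 1.0], coefficient=0.5))
```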
As a creative outcome of these interactive applications, two compositions were written for the system, each utilising a different set of perceptual and interaction modules. The first composition, titled Pow, was written for one or two human players and a one-armed robotic percussionist. It served as a test case for Haile's early mechanical, perceptual, and interaction modules. The second composition, titled Jam'aa ("gathering" in Arabic), built on the unique communal nature of the Middle Eastern percussion ensemble, attempting to enrich its improvisational nature, call-and-response routines, and virtuoso solos with algorithmic transformation and human-robotic interactions (Fig. 14). Jam'aa was commissioned by Hamaabada Art Centre in Jerusalem, and later performed in invited and juried concerts in France, Germany, Denmark, and the United States.14

Fig. 14. A performance of Jam'aa in Odense, Denmark

14 See video excerpts from Jam'aa at .

As part of our effort to expand the exploration of robotic musicianship into pitch and melody, Haile was later adapted to play a pitch-based mallet instrument. A one-octave xylophone was built for this purpose to accommodate Haile's mechanical design – the left arm covered a range of 5 keys, while the right arm, whose vertical range was extendable, covered a range of 7 keys (Fig. 15). Following the idiom "listen like a human, improvise like a machine", computational models for melodic similarity were developed ("listen like a human") as the fitness function of a genetic-algorithm-based improvisation engine ("improvise like a machine"). The algorithmic responses were based on the analysed input as well as on internalised knowledge of contextually relevant material. The algorithm fragmented MIDI and audio input into short phrases. It then attempted to find a "fit" response by evolving a pre-stored, human-generated population of phrases using a variety of mutation and crossover functions over a variable number of generations. At each generation, the evolved phrases were evaluated by a fitness function that measured similarity to the input phrase, and the least fit phrases in the database were replaced by members of the next generation. A unique aspect of this design was the reliance on a pre-recorded human-generated phrase set that evolved over a limited number of generations. This allowed musical elements from the original phrases to mix with elements of real-time input to create hybrid, and at times unpredictable, responses for each given input melody (see the sketch below).

Fig. 15. Haile's adaptation for xylophone

Two compositions were written for the system and performed in concerts in Atlanta and Copenhagen. In the first piece, titled "Svobod", a piano and a saxophone player freely improvised with a semi-autonomous robot (Fig. 16). The second piece, titled "iltur for Haile", involved a tonal musical structure utilising genetically driven and non-genetically driven interaction schemes, as the robot performed autonomously with a jazz quartet.15

Fig. 16. A performance of Svobod in Copenhagen, Denmark

15 See a video clip of iltur for Haile at .
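The genetic improvisation engine described above can be summarised in a short sketch. The phrase encoding (lists of MIDI pitches), the contour-based similarity measure, and all rates and population sizes are illustrative assumptions; only the overall scheme (a pre-stored human phrase set, mutation and crossover, a similarity-based fitness function, and replacement of the least fit phrases) follows the text.

```python
# Minimal sketch of the genetic improvisation engine: a pre-stored,
# human-generated population of phrases is evolved for a few generations
# toward the live input phrase. All parameters are assumptions.
import random

def fitness(phrase, target):
    """"Listen like a human": higher when pitch contours match."""
    return (-sum(abs(a - b) for a, b in zip(phrase, target))
            - abs(len(phrase) - len(target)))

def mutate(phrase, rate=0.2):
    """Randomly nudge some pitches by a small interval."""
    return [p + random.choice((-2, -1, 1, 2)) if random.random() < rate else p
            for p in phrase]

def crossover(a, b):
    """Splice the head of one phrase onto the tail of another."""
    cut = random.randint(1, min(len(a), len(b)) - 1)
    return a[:cut] + b[cut:]

def respond(population, target, generations=5, keep=0.5):
    """"Improvise like a machine": evolve the stored phrases toward the
    input and return the fittest hybrid as the robot's response."""
    pop = list(population)
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, target), reverse=True)
        survivors = pop[: max(2, int(len(pop) * keep))]
        children = [mutate(crossover(*random.sample(survivors, 2)))
                    for _ in range(len(pop) - len(survivors))]
        pop = survivors + children  # the least fit phrases are replaced
    return max(pop, key=lambda p: fitness(p, target))

# Pre-recorded human phrases (MIDI pitches) and a live input phrase.
stored = [[60, 62, 64, 65], [67, 65, 64, 62], [60, 64, 67, 72]]
print(respond(stored, target=[62, 64, 65, 67]))
```

Because the stored phrases evolve for only a few generations, the response retains material from the human-generated set while drifting toward the live input, which is what produced the hybrid, at times unpredictable responses described above.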
References

Aimi, Roberto (2002): New Expressive Percussion Instruments, M.S. Thesis, MIT Media Laboratory.

Aimi, Roberto/Young, Diana (2004): »A New Beatbug: Revisions, Simplifications, and New Directions«. In: Proceedings of the 2004 International Computer Music Conference, San Francisco: International Computer Music Association.

Bamberger, Jeanne (2000): Developing Musical Intuitions, New York: Oxford University Press.

Barbosa, Alvaro (2003): »Displaced Soundscapes: A Survey of Network Systems for Music and Sonic Art Creation«. Leonardo Music Journal 13, 53-59.

Bischoff, John/Gold, Rich/Horton, Jim (1978): »Microcomputer Network Music«. Computer Music Journal 2(3), 24-29.

Buchla, Don (2005): »A History of Buchla's Musical Instruments«. In: Sidney Fels/Tina Blaine (Eds.), New Interfaces for Musical Expression, NIME-05, Proceedings, Vancouver: University of British Columbia, Media and Graphics Interdisciplinary Center (MAGIC), 1.

Cope, David (1996): Experiments in Musical Intelligence, Madison, WI: A-R Editions.

Cycling74 (2005): Max/MSP, (last access: October 2007).

Dannenberg, Roger B. (1985): »An On-Line Algorithm for Real-Time Accompaniment«. In: William Buxton (Ed.), Proceedings of the 1984 International Computer Music Conference, San Francisco: International Computer Music Association, 193-198.

Dannenberg, Roger B./Brown, Ben/Zeglin, Garth/Lupish, Ron (2005): »McBlare: A Robotic Bagpipe Player«. In: Sidney Fels/Tina Blaine (Eds.), New Interfaces for Musical Expression, NIME-05, Proceedings, Vancouver: University of British Columbia, Media and Graphics Interdisciplinary Center (MAGIC), 80-84.

Desain, Peter/Honing, Henkjan (2002): »Rhythmic Stability as Explanation of Category Size«. In: Proceedings of the International Conference on Music Perception and Cognition, Sydney: UNSW, CD-ROM.

Feldmeier, Mark C./Malinowski, Mateusz/Paradiso, Joseph A. (2002): »Large Group Musical Interaction Using Disposable Wireless Motion Sensors«. In: Proceedings of the 2002 International Computer Music Conference, San Francisco: International Computer Music Association, 83-87.

Gardner, Howard (1983): Frames of Mind: The Theory of Multiple Intelligences, New York: Basic Books.

Gresham-Lancaster, Scot (1998): »The Aesthetics and History of the Hub: The Effects of Changing Technology on Network Computer Music«. Leonardo Music Journal 8, 39-44.

Jennings, Kevin (2003): »Toy Symphony: An International Music Technology Project for Children«. Music Education International 2, 3-21.

Jordà, Sergi (2002): »Afasia: The Ultimate Homeric One-Man Multimedia-Band«. In: Eoin Brazil (Ed.), New Interfaces for Musical Expression, NIME-02, Proceedings, Dublin: Media Lab Europe, 132-137.

Levin, Golan (2001): »Dialtone: A Telesymphony«. Online available: (last access: October 2007).

Lewis, George E. (2000): »Too Many Notes: Computers, Complexity and Culture in Voyager«. Leonardo Music Journal 10, 33-39.

Machover, Tod (1992): Hyperinstruments: A Progress Report 1987-1991, Cambridge: MIT Media Laboratory.

Machover, Tod (2004): »Shaping Minds Musically«. BT Technology Journal 22(4), 171-179.

Martin, Fred/Mikhak, Bakhtiar/Silverman, Brian (2000): »MetaCricket: A Designer's Kit for Making Computational Devices«. IBM Systems Journal 39(3-4).

Mathews, Max V. (1991): »The Radio Baton and Conductor Program, or: Pitch, the Most Important and Least Expressive Part of Music«. Computer Music Journal 15(4), 37-46.

Nikitina, Svetlana (2004): »Toy Symphony Report«, Cambridge: Harvard University School of Education.

Pachet, Francois (2002a): »The Continuator: Musical Interaction with Style«. In: Proceedings of the 2002 International Computer Music Conference, San Francisco: International Computer Music Association.
Papert, Seymour (1980): Mindstorms: Children, Computers, and Powerful Ideas, New York: Basic Books.

Paradiso, Joseph A. (2004): »Wearable Wireless Sensing for Interactive Media«. In: Proceedings of the 1st International Workshop on Wearable and Implantable Body Sensor Networks, 30-31.

Patten, James/Recht, Benjamin/Ishii, Hiroshi (2002): »Audiopad: A Tag-Based Interface for Musical Performance«. In: Eoin Brazil (Ed.), New Interfaces for Musical Expression, NIME-02, Proceedings, Dublin: Media Lab Europe, 11-16.

Proceedings of International Conferences on New Interfaces for Musical Expression (NIME), (last access: October 2007).

Resnick, Mitchel/Martin, Fred/Sargent, Randy/Silverman, Brian (1996): »Programmable Bricks: Toys to Think With«. IBM Systems Journal 35(3-4), 443-452.

Rowe, Robert (2004): Machine Musicianship, Cambridge: MIT Press.

Scheirer, Eric D. (1998): »Tempo and Beat Analysis of Acoustic Musical Signals«. Journal of the Acoustical Society of America 103(1), 588-601.

Singer, Eric/Feddersen, Jeff/Redmon, Chad/Bowen, Bil (2004): »LEMUR's Musical Robots«. In: Yoichi Nagashima/Michael J. Lyons (Eds.), New Interfaces for Musical Expression, NIME-04, Proceedings, Hamamatsu: Shizuoka University of Art and Culture, 181-184.

Smith, Joshua R./Strickon, Josh (1998): »The MIDIBoat«. Online available: (last access: October 2007).

Steiner, Nyle A. (2004): »The Electronic Valve Instrument (EVI), an Electronic Musical Wind Controller for Playing Synthesizers«. Journal of the Acoustical Society of America 115(5), 2451.

Takanishi, Atsuo/Maeda, Maki (1998): »Development of Anthropomorphic Flutist Robot WF-3RIV«. In: Proceedings of the 1998 International Computer Music Conference, San Francisco: International Computer Music Association, 328-331.

Tanguiane, Andranick S. (1993): Artificial Perception and Music Recognition, New York: Springer.

Theremin, Leon (1996): »The Design of a Musical Instrument Based on Cathode Relays«. Leonardo Music Journal 6, 49-50.

Toyota (2004): »Trumpet Robot«. (last access: October 2007).

Waisvisz, Michel (1985): »The Hands: A Set of Remote MIDI Controllers«. In: Proceedings of the 1985 International Computer Music Conference, San Francisco: International Computer Music Association, 313-318.

Weinberg, Gil (2001): »The Squeezables: Toward an Expressive and Interdependent Multi-player Musical Instrument«. Computer Music Journal 25(2), 37-45.

Weinberg, Gil (2005): »Interconnected Musical Networks – Toward a Theoretical Framework«. Computer Music Journal 29(2), 23-39.

Weinberg, Gil/Lackner, Tamara M./Jay, Jason (2000): »The Musical Fireflies – Learning About Mathematical Patterns in Music Through Expression and Play«. In: Proceedings of the 2000 Colloquium on Musical Informatics, L'Aquila: Istituto GRAMMA, 146-149.

Weinberg, Gil/Aimi, Roberto/Jennings, Kevin (2002): »The Beatbug Network: A Rhythmic System for Interdependent Group Collaboration«. In: Eoin Brazil (Ed.), New Interfaces for Musical Expression, NIME-02, Proceedings, Dublin: Media Lab Europe, 1-6.

Weinberg, Gil/Driscoll, Scott (2005): »›iltur‹: Connecting Novices and Experts Through Collaborative Improvisation«. In: Sidney Fels/Tina Blaine (Eds.), New Interfaces for Musical Expression, NIME-05, Proceedings, Vancouver: University of British Columbia, Media and Graphics Interdisciplinary Center (MAGIC), 17-22.
Acknowledgements

The projects described in this paper would not have been possible without the valuable contributions of colleagues and students at the MIT Media Lab and the Georgia Tech Music Department. In particular, I would like to thank Seum Lim Gan, Roberto Aimi, Tamara Lackner, Jason Jay, Scott Driscoll, Travis Thatcher, Mark Godfrey, Alex Rae, and John Rhoads for their indispensable contributions to the development of the musical instruments and applications. I would also like to thank Tod Machover, director of the Hyperinstrument group at the MIT Media Lab, and Frank Clark, director of the Music Department at Georgia Tech, for their support.