A book about Australian experimental music, Experimental Music: Audio Explorations in Australia (UNSW Press), has just appeared. It is edited by Gail Priest and has chapters by quite a few well-known names in the Australian experimental music scene. On the whole the book focuses on the last 10-15 years, though events from as early as the 1970s are mentioned. The accompanying CD has tracks from as far back as 1971 and as recent as 2007. My piece Peer Pressure (2001) is included.
Gail has also started up www.experimentalmusicaustralia.net, a website related to the book. It contains a growing list of Australian experimental musicians. People are invited to submit themselves for consideration.
Tuesday, December 30, 2008
Sunday, December 21, 2008
Show and Tell at Monash
On 10th and 11th December the Fine Art Department at Monash University organised a postgraduate colloquium, a sort of big show-and-tell. It wasn't a public exhibition, but each student had some space to set up work and we each got a slot to talk about what we had done, and answer questions in a discussion led by a staff member. About 50 students were involved, so there were typically three or four parallel sessions.
We also had two lectures by Professor Andrew Benjamin, who has the title of Professor of Critical Theory and Philosophical Aesthetics at Monash. Professor Benjamin then attended quite a few of the student presentations, including mine, and asked probing questions.
The whole event was a great success, and an excellent way both to get feedback on one's own work and to find out what other students were up to over a wide range of practices: painting, drawing, sculpture, digital imagery, video and various types of installation. This is the first time the colloquium has been held; I hope it becomes a regular event.
Thursday, November 27, 2008
Thursday, November 20, 2008
Exhibition: 30x30x30
I have a piece called Exiguous Cube in the 30x30x30 Exhibition at the Faculty Gallery, Monash University, Caulfield, Melbourne.
When: Opening Fri 21st November, 5-7pm.
Exhibition 21st Nov - 5th Dec
Where: Faculty Gallery, ground floor Building G, Monash University Caulfield Campus (opposite Caulfield Station)
What: The exhibition is for works that are no bigger than 30x30 cm or 30x30x30 cm, from staff and students in the Faculty of Art and Design, Monash University.
My piece is made out of Lego bricks.
To make an Exiguous Cube in two steps:
Step 1: Add bricks. Place 2 x 4 Lego bricks at random in a 29cm x 29cm x 29cm cube, until no more will fit.
Step 2: Remove bricks. Choose a brick at random. Apply a test: the brick passes the test if it can be removed while leaving the eight corner bricks connected to one another. If the brick passes the test, remove it. Continue choosing bricks at random and applying the test until no more bricks can be removed.
The result is an Exiguous Cube: all eight corner bricks are connected to one another, but if any brick is removed, the corner bricks will no longer all be connected by continuous chains of Lego bricks.
I wrote a computer program to carry out steps 1 and 2, and then built the structure produced by my program.
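The two steps can be sketched in simplified form. Here is a hypothetical Python sketch (not my actual program): it fills a small 3D grid with unit cells standing in for the 2 x 4 bricks, then removes cells at random so long as the eight corners stay connected. Real brick shapes and interlocking are not modelled.

```python
import random
from collections import deque

NEIGHBOURS = ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1))

def corners_connected(occupied, corners):
    """True if every corner cell can reach every other corner through occupied cells."""
    start = next(iter(corners))
    seen, queue = {start}, deque([start])
    while queue:
        x, y, z = queue.popleft()
        for dx, dy, dz in NEIGHBOURS:
            nb = (x + dx, y + dy, z + dz)
            if nb in occupied and nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return corners <= seen

def exiguous_cube(n=4, seed=1):
    # Step 1 (simplified): fill the whole n x n x n grid with unit cells.
    rng = random.Random(seed)
    cells = {(x, y, z) for x in range(n) for y in range(n) for z in range(n)}
    corners = {(x, y, z) for x in (0, n - 1) for y in (0, n - 1) for z in (0, n - 1)}
    # Step 2: visit non-corner cells in random order; remove a cell only if
    # the eight corners stay connected without it.  Removals never restore
    # connectivity, so if a removal fails the test once it would fail again;
    # a single pass over a random order therefore reaches the fixed point.
    order = list(cells - corners)
    rng.shuffle(order)
    for c in order:
        if corners_connected(cells - {c}, corners):
            cells.remove(c)
    return cells, corners
```

By construction, the result has the "exiguous" property in this simplified model: the corners are connected, and removing any remaining non-corner cell disconnects them.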
Monday, November 17, 2008
A censored Internet
The Australian Federal Government is planning to censor the whole of the Internet. This is not just about providing a "child-safe" version of the Internet; that is only part of the plan.
The other, and really objectionable, part is to censor the Internet for everyone, by requiring all Australian Internet Service Providers to block sites on a secret Government blacklist. The Government is also trialling "dynamic" filtering, which attempts to block sites on-the-fly on the basis of content.
As has been pointed out, this will do nothing to block pornographers, who have plenty of ways of evading filters. What it will do is:
- Slow down the Internet and make it more expensive for everyone. (Of course it is already slow and expensive compared to what is available in other advanced countries.)
- Block at least 1% of sites that have nothing objectionable, because the filtering software got it wrong. (1% is the lowest figure in the trial referred to below.)
- Give the Government extraordinary power to control what we can view.
- Give the Government the ability to read our bank information and the like, as the https protocol can be read by filtering software.
- Possibly give the Government power to censor email as well as websites, as one of the filters trialled by the Government has this ability.
For more information:
- http://nocleanfeed.com/ - lots of links to further information, and suggestions on action to take.
- http://www.acma.gov.au/webwr/_assets/main/lib310554/isp-level_internet_content_filtering_trial-report.pdf - the Government report on the filtering trial in Tasmania.
Monday, October 27, 2008
A sonic history of computing
I have been reading the book The Language of New Media by Lev Manovich (MIT Press, 2001). In the early part of the book, Manovich has some scattered comments about the history of computing, and he tries very hard to find connections with visual media.
Of course Manovich refers to the punched cards used to control the Jacquard loom, which is certainly a connection with visual media. But he goes on to discuss Turing's abstract machines, and to say: "[a Turing machine's] diagram looks suspiciously like a film projector. Is this a coincidence?" Manovich then discusses Konrad Zuse's use of old cinema film to make punched tape to control his machine. Manovich's two examples of computing machines in these comments are Babbage's Analytical Engine, which was never built, and Zuse's machines, which as far as I know had minimal influence on subsequent developments. Von Neumann misses out completely.
(And, by the way, I don't think Turing's original paper had a diagram. The paper is "On computable numbers, with an application to the Entscheidungsproblem", Proceedings of the London Mathematical Society, Ser. 2, Vol. 42, 1937. There is what I believe to be a photographic reproduction of it in the book The Undecidable, ed. Martin Davis, Raven Press, Hewlett, NY, 1965. There is no diagram in this reproduction.)
Let me make a case along Manovich’s lines for the importance of sound culture in the history of computing.
In functionality the tape part of Turing’s abstract machine is like an audio tape recorder; is this a coincidence? The BBC began using audio recorders in 1932; they used a steel tape. Turing submitted his paper in 1936.
The modern stored-program digital computer originated from several lines of investigation in the period 1937–1950. The first design that fully implemented the features of modern machines was EDVAC (initial design 1945, came into operation in 1951); this machine was far more important for subsequent developments than any of the examples Manovich refers to. The design used magnetic wire for input and output, a system based on audio magnetic wire recorders. For memory it used a mercury acoustic delay line, where data was stored in the form of sound pulses circulating in a tube of mercury.
Thus sound culture could be said to play an important part in the history of computing. Of course this has no more validity than Manovich’s statements about visual culture and computing.
Tuesday, September 16, 2008
"Undue Noise" in Castlemaine
This Saturday (20th September) I will be playing audio-visual pieces (mostly live) at a gig called Undue Noise in Castlemaine.
Where: ICU, 1 Halford St, Castlemaine, Victoria.
When: Saturday 20th September, 8pm.
How much: Sorry - I don't know. But it won't be a lot.
Undue Noise is a series organised on behalf of experimental sound and video artists in central Victoria. It takes place at venues in Bendigo and Castlemaine.
More info: http://undue.cajid.com/ or http://www.myspace.com/bendigounduenoise.
Tuesday, August 26, 2008
Trocadero - Followup
There is to be a massive group exhibition, "Wallpaper 08", at Trocadero Art Space. For me this is a follow-on from my exhibition there in August, as I will be showing some new high-resolution prints generated by the program I wrote for Cloud Drum.
The exhibition dates are 10-27 September, and the address is Level 1, 119 Hopkins St, Footscray, Melbourne (near Footscray station).
Later: My contribution is six new images from Cloud Drum. There are something like 50 artists involved in this show. When I put my works up, about half of the artists had installed their contributions. It should be a pretty amazing show. The opening is on Saturday 13th September, 4-6pm.
Friday, July 25, 2008
Exhibition at Trocadero Art Space
My installation Cloud Drum will be at Trocadero Art Space in Footscray, Melbourne, 30th July to 16th August. My piece is in Gallery 2.
Address: Level 1, 119 Hopkins St, Footscray.
It is a short walk from Footscray railway station.
Gallery hours: The gallery is open Wed - Sat, 11am - 5pm.
I will be there on Thursdays during the exhibition.
The opening is on Saturday 2nd August, 4pm - 6pm. All welcome!
This opening is an event on Facebook.
Click for Map
Monday, July 21, 2008
ACMC 2008
Recently I attended the 2008 Australasian Computer Music Conference in Sydney, 10-12 July. I think this is still the only musical event in the region that combines refereed academic papers, artist talks and a festival. A real benefit of the conference is hearing talks from people whose work is then performed in the concerts. This isn't meant to be a detailed review, just a mention of some highlights for me.
The conference was held at the Sydney Conservatorium of Music, which was a great venue, with the talks, concerts and so on held in rooms adjacent to the central atrium of the new part of the Conservatorium. It was smoothly organised by Anthony Hood, Robert Sazdov, Ivan Zavada, Sonia Wilkie and a team of helpers. I couldn't get to everything, but I still attended about 20 talks and 6 concerts in three days. Additionally, the conference linked in with Liquid Architecture's Sydney leg, the Liquid Architecture gigs providing late-night events for ACMC. Such late-night events have become a tradition for ACMC, but this is the first time Liquid Architecture has been involved.
The keynote speakers were Robert Normandeau from Montreal and Roger Dean from Sydney. Robert is a very well-known "acousmatic" composer. He devoted a lot of his keynote speech to a detailed discussion of his piece StrinGDberg. This piece was the last one in the concerts, and was certainly a highlight of the conference.
I had to miss part of Roger Dean's talk, but the part I did hear was fascinating. It was concerned with empirical psychoacoustic studies of various musical attributes, using electro-acoustic music as test materials. It seems that almost all such work is conducted using instrumental music, and Roger saw advantages in using music less familiar to many participants.
A few other talks that struck me: Warren Burt's comparing electro-acoustic composition to Sufism (given by video, as Warren couldn't attend); Ros Bandt's on her installation Sea Lament, based on sounds associated with Japanese women abalone divers, especially the whistling noise they make when they reach the surface; Toby Gifford and Andrew Brown's talk describing a simple but effective method for very quickly detecting percussive attacks buried in other sounds; and the talk by Andrew Sorensen and Andrew Brown entitled "A compositional model for the generation of orchestral music in the German symphonic tradition". Why would one want to do this? It turns out that the generation is in real time, with obvious application to games. Rather than completely solving any one problem, such as chord progression, they have so far built rough-and-ready versions of all the main musical components. The results weren't at all bad.
From the concerts I have already mentioned Robert Normandeau's StrinGDberg. Of the other tape pieces I would mention the long and compelling Ombres, Espaces, Silences by Gilles Gobeil, composed in stereo and ably diffused over the conference's 16-channel system by Conservatorium student Henrique Dib; remarkably, this was Henrique's first public diffusion. Also memorable was Continuum (1969) by the pioneer Tristram Cary, known for his work on Doctor Who and a long-time resident of Adelaide, who died recently. This work held up extremely well.
Among the live performances I would mention the engaging Hands on Stage by Chi-Hsia Lai, using a small table with a translucent top, a webcam underneath, and microphones attached. The webcam interpreted shadows cast on the table as control information for sound manipulation, and additionally we saw projected a modified version of what the webcam was seeing. Another novel interface was the electronic sitar of Ajay Kapur, coupled with a head-mounted controller. The work, Anjuna's Digital Raga, was very enjoyable. Unfortunately I had to miss the workshop that Ajay gave about his work.
My favourite audio-visual work was Brigid Burke's Strings, involving complex projected images, live bass clarinet (played by Brigid), and electronically transformed sound. I also mention the strange work Po[or Symm]etry [Dra]in[s] [E]motion[s], by Mark Havriliv and Josh Dubrau. It consisted of an electronic chat session between the two performers, projected up on a screen and accompanied by sounds. But the chat messages were being transformed by a computer program, resulting in received messages looking like the title of the work.
I only engaged with two installations apart from my own Cloud Drum: Colin Black's "extended environmental guitar", documentation of a 15-metre-long construction installed in remote locations associated with the explorer Ludwig Leichhardt, and Ros Bandt's Sea Lament, mentioned above.
Of course there was a lot more. It was good to meet with old friends, but also good to see quite a few new faces. This is the sixteenth ACMC, and there seems to be plenty of energy in the community.
Wednesday, July 9, 2008
Installation at ACMC
I have an installation called Cloud Drum at this year's Australasian Computer Music Conference in Sydney, July 10-12. It is based on the vibrations of an idealised drum, as is Triangular Vibrations, but Cloud Drum is real-time, interactive, black-and-white, and a much gentler piece.
Triangular Vibrations again
Well, it won't be played at the International Computer Music Conference, but Triangular Vibrations is having a good run. As noted in an earlier post, Ivan Zavada took it on tour; also it was included in the art program that was part of the 2008 Computational Aesthetics conference in Lisbon, Portugal. Now it is being included in the Liquid Architecture Screening Reel, part of the Sydney leg of the Liquid Architecture festival, 11 and 12 July 2008.
Later: Triangular Vibrations will be screened as part of Abstracta Cinema in Rome, Italy on 23rd September, 2008.
Friday, June 27, 2008
ICMC acceptance, or not
I submitted my piece Triangular Vibrations to the 2008 International Computer Music Conference, in Belfast this year. I received first a provisional acceptance, and then a confirmed acceptance. I was very pleased about this, as I thought there would be a lot of competition.
But then it turned out that one has to attend the conference in order to have the work played. Unfortunately there was no mention of this in the call for pieces. I don't object to a policy of priority for those who can attend, but I would have liked an indication of this at the time I submitted the piece. I contacted the organisers, and was told "we could not have anticipated the volume of submissions that were made". So, it was very competitive.
I don't know what to conclude from this. Despite the Internet, Australia is still a long way away from Europe, in time and in dollars. For a little while I was making frequent overseas trips, but I couldn't sustain it, and I haven't been further than New Zealand for a while.
Tuesday, June 3, 2008
"Triangular Vibrations" on tour
Ivan Zavada from the Sydney Conservatorium of Music recently took a program of electro-acoustic music by Conservatorium composers to Sweden, and also presented a concert in Florence (Firenze) at the Conservatorio Cherubini. My piece Triangular Vibrations was played in Gävle and Stockholm in Sweden, and at the concert in Florence.
The concert in Stockholm was at Fylkingen, a long-established new and experimental music association and venue in Stockholm, which has a special emphasis on electronic music. Over the years they have presented works by many well-known composers, including Berio, Stockhausen, Xenakis, Cage, La Monte Young, David Tudor, and many more. I'm pleased to be in this company!
Saturday, May 24, 2008
Master of Music
I submitted my portfolio and essay for the Master of Music (Composition) at the Sydney Conservatorium of Music last year ('07). It is now all corrected, bound, and sent off to the University library. Under the rules in force when I started, the essay was not to be about my own work (so not an exegesis), but was instead to be about a topic related in some way to my work. I wrote on "The Concept of Emergence in Generative Art". The essay is here.
Monday, April 14, 2008
Lua: Yet another scripting language?
At various times I have dabbled in a variety of programming languages, from Fortran on. When I started to write programs for computer music I settled on C; eventually, when the benefits of encapsulation became compelling, I moved to C++. I sometimes used Python for quick-and-dirty manipulation of data, notably in my brainwave sonification piece. All of this was done at the command line, since my programs generally didn't have graphical user interfaces. I occasionally used Tk/Tcl for graphics.
Quite recently I moved to the Macintosh OS X platform, for reasons discussed here. I am grappling with Cocoa for constructing user interfaces and handling graphics, and this has meant dealing with Objective-C. After some hesitation I started using the hybrid Objective-C++, which is a weird mixture, but does work. So I have been simultaneously trying to learn about user interfaces in general, learn about Cocoa, and learn about Objective-C++.
Why Lua?
Recently I have become aware of Lua, though it has been around for a while. Why should I care about another scripting language, when I have already made some use of Python, and when I am already dealing with too many new things? The answer is that it seems Lua will solve a specific problem, namely how to read and write configuration files.
In the past I have spent some effort writing routines to read in and parse configuration files. I am now experimenting with using Lua instead, using as a test bed a small program I am writing that generates images. The program has about a dozen parameters that control the image, and as the program develops there may be more. Although the program does have a GUI, there are too many parameters to set conveniently through the GUI. Also, I would like to be able to recreate an image just by loading a configuration file via the GUI.
The biggest claimed advantage of Lua is that it is very easy to embed. I agree: it is easy to add the whole of the Lua source into a project, if I don't want to make any assumptions about libraries. Getting the whole thing to work was no harder than writing a simple parser, and now I know how to do it, so it will be even easier next time.
Another advantage, as a configuration file handler, is unusual flexibility of file format. The procedure for reading in a configuration file is to fire up a Lua interpreter, run the file as a Lua script, and then read into C/C++ the values of Lua global variables set by the script. The script could just consist of statements like
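A hypothetical example (these parameter names are illustrative, not my program's actual settings):

```lua
width = 800
height = 600
iterations = 12000
```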
but it can also contain arithmetic expressions, conditionals, loops, and of course comments. All that matters is the values of the global variables of interest after the script has been run.
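The pattern itself is worth spelling out. Here is an analogous sketch in Python rather than the actual Lua/C++ setup: run the configuration text as a script in a fresh environment, then harvest whatever global values it left behind. All names here are hypothetical.

```python
# Sketch of the read-config-as-script pattern.  In the real setup this
# role is played by an embedded Lua interpreter and its C API; here a
# Python exec() stands in for "run the file as a script".
config_src = """
width = 800
height = width * 3 // 4   # expressions are fine: only the final values matter
fancy = width > 500       # so are conditionals, loops, and comments
"""

env = {}
exec(config_src, env)                      # run the "configuration file"
params = {k: v for k, v in env.items()     # read back the globals it set,
          if not k.startswith("__")}       # ignoring interpreter machinery
```

After this runs, `params` holds `width`, `height` and `fancy`, exactly as if they had been listed as plain assignments; the host program never needs to parse the expressions itself.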
Two other claimed advantages are small size (the executable isn't bloated too much by adding Lua) and fast running for an interpreted language. Neither has been an issue for me. Finally, Lua is free in every sense (as of course is Python).
More extensive use of Lua
It seems that in the game industry Lua has been used for things like level design, so that a game level can be specified as a Lua script. Further, it is easy to write additional functions for Lua in C/C++, and these additional functions can enable Lua to reach into the C++ part of the program. (I tried a small example.) Following this path would mean a different use of Lua: instead of firing up an interpreter when I need to read in a file and then dismissing it, it would be necessary to keep one interpreter around for the whole program, and a fair amount of the program state would reside in Lua. At present I have no plans to do this.
Alternatives
The game industry discussion mentioned XML and also Python. For small jobs Lua seems to be easier. In the past I thought briefly about embedding Python, but it appeared to be complicated, and I didn't pursue it. Maybe I just didn't have a clear description of what to do. For Lua, instructions are here.
Conclusion
My experiment with Lua was a success, so I will go on using it for configuration files and the like. This means I will routinely be dealing with three languages in a single program: C++, Objective C and Lua. Since I only use fairly basic parts of each of these languages, I think this is manageable.
Quite recently I moved to the Macintosh OS X platform, for reasons discussed here. I am grappling with Cocoa for constructing user interfaces and handling graphics, and this has meant dealing with Objective-C. After some hesitation I started using the hybrid Objective-C++, which is a weird mixture, but does work. So I have been simultaneously trying to learn about user interfaces in general, learn about Cocoa, and learn about Objective-C++.
Why Lua?
Recently I have become aware of Lua, though it has been around for a while. Why should I care about another scripting language, when I have already made some use of Python, and when I am already dealing with too many new things? The answer is that it seems Lua will solve a specific problem, namely how to read and write configuration files.
In the past I have spent some effort writing routines to read in and parse configuration files. I am now experimenting with using Lua instead, using as a test bed a small program I am writing that generates images. The program has about a dozen parameters that control the image, and as the program develops there may be more. Although the program does have a GUI, there are too many parameters to set conveniently through the GUI. Also, I would like to be able to recreate an image just by loading a configuration file via the GUI.
The biggest claimed advantage of Lua is that it is very easy to embed. I agree: it is easy to add the whole of the Lua source into a project, if I don't want to make any assumptions about libraries. Getting the whole thing to work was no harder than writing a simple parser, and now I know how to do it, so it will be even easier next time.
Another advantage, as a configuration file handler, is unusual flexibility of file format. The procedure for reading in a configuration file is to fire up a Lua interpreter, run the file as a Lua script, and then read into C/C++ the values of Lua global variables set by the script. The script could just consist of statements like
max_life_value = 5
but it can also contain arithmetic expressions, conditionals, loops, and of course comments. All that matters is the values of the global variables of interest after the script has been run.
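For instance, a configuration script exercising this flexibility might look like the following sketch. Only max_life_value comes from the example above; the other parameter names are invented for illustration.

```lua
-- Hypothetical configuration script: max_life_value is from the example
-- above, the other names are invented for illustration.
image_width    = 800
image_height   = image_width * 3 / 4   -- arithmetic expressions
max_life_value = 5
if image_width > 1000 then             -- conditionals
    max_life_value = 8
end
palette = {}
for i = 1, 16 do                       -- loops
    palette[i] = (i - 1) / 15
end
```

After the script runs, the host program simply reads off whatever ended up in the globals; it never sees the arithmetic or the loop.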
Two other claimed advantages are small size (the executable isn't bloated too much by adding Lua) and fast running for an interpreted language. Neither has been an issue for me. Finally, Lua is free in every sense (as of course is Python).
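On the C side, the fire-up-run-read procedure might be sketched roughly as follows, assuming the Lua 5.1 C API. The file name "config.lua" and the minimal error handling are placeholders, not a description of my actual code.

```c
/* Sketch of the procedure described above: fire up an interpreter,
   run the config file as a Lua script, read a global, dismiss it. */
#include <stdio.h>
#include <lua.h>
#include <lualib.h>
#include <lauxlib.h>

int read_config(const char *path, double *max_life_value) {
    lua_State *L = luaL_newstate();     /* fire up an interpreter */
    luaL_openlibs(L);                   /* standard libraries, so loops etc. work */
    if (luaL_dofile(L, path) != 0) {    /* run the file as a Lua script */
        fprintf(stderr, "config error: %s\n", lua_tostring(L, -1));
        lua_close(L);
        return -1;
    }
    lua_getglobal(L, "max_life_value"); /* push a global set by the script */
    *max_life_value = lua_tonumber(L, -1);
    lua_pop(L, 1);
    lua_close(L);                       /* dismiss the interpreter */
    return 0;
}
```

One call per parameter is a little repetitive, but for a dozen parameters it stays manageable, and a small helper function removes most of the repetition.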
More extensive use of Lua
It seems that in the game industry Lua has been used for things like level design, so that a game level can be specified as a Lua script. Further, it is easy to write additional functions for Lua in C/C++, and these additional functions can enable Lua to reach into the C++ part of the program. (I tried a small example.) Following this path would mean a different use of Lua: instead of firing up an interpreter when I need to read in a file and then dismissing it, it would be necessary to keep one interpreter around for the whole program, and a fair amount of the program state would reside in Lua. At present I have no plans to do this.
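As a sketch of what such a small example might look like, again assuming the Lua 5.1 C API: the function name set_pixel and its arguments are invented for illustration, not taken from my program.

```c
/* Sketch of extending Lua with a C function. The interpreter is created
   once and kept around, so scripts can reach back into the host program. */
#include <lua.h>
#include <lualib.h>
#include <lauxlib.h>

/* Callable from Lua as set_pixel(x, y, value). */
static int l_set_pixel(lua_State *L) {
    int x    = (int)luaL_checknumber(L, 1);
    int y    = (int)luaL_checknumber(L, 2);
    double v = luaL_checknumber(L, 3);
    /* ... update the C/C++ side of the program here ... */
    (void)x; (void)y; (void)v;
    return 0;   /* number of results pushed back to Lua */
}

int main(void) {
    lua_State *L = luaL_newstate();   /* one interpreter for the whole run */
    luaL_openlibs(L);
    lua_register(L, "set_pixel", l_set_pixel);  /* expose the C function */
    luaL_dofile(L, "script.lua");     /* scripts can now call set_pixel */
    lua_close(L);
    return 0;
}
```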
Alternatives
The game industry discussion mentioned XML and also Python. For small jobs Lua seems to be easier. In the past I thought briefly about embedding Python, but it appeared to be complicated, and I didn't pursue it. Maybe I just didn't have a clear description of what to do. For Lua, instructions are here.
Conclusion
My experiment with Lua was a success, so I will go on using it for configuration files and the like. This means I will routinely be dealing with three languages in a single program: C++, Objective-C and Lua. Since I only use fairly basic parts of each of these languages, I think this is manageable.
Tuesday, March 11, 2008
Studying at Monash
I have just started a PhD (part-time) in the Faculty of Art and Design at Monash University, Melbourne. My PhD will be practice-based, i.e. largely by portfolio; I will be making generative artworks. Art and Design describe the desired outcome as a contribution that has "substantial cultural significance", which is a much better approach than trying to fit a practice-based research degree into the usual framework of a "substantial contribution to knowledge".
I knew that moving from a conservatorium of music to an art and design school would be a substantial cultural shift, but it is an even bigger change than I expected, and I'm not confident that I have really grasped the differences yet.
There are a huge number of higher degree students around in Art and Design, I think upwards of 150. There is a lot going on, and the Faculty are making an effort to overcome the isolation often felt by research students. Monash is a multi-campus university these days; I will largely be at the Caulfield campus, where most of the Art and Design people and facilities are.
So far I feel confused but exhilarated!
Tuesday, March 4, 2008
Music.Sound.Design Symposium at UTS
Warning: Long post!
Recently I attended the Music.Sound.Design symposium held on February 13-15 at the University of Technology, Sydney (UTS). To quote from the symposium booklet:
The Faculties of Design, Architecture and Building and Humanities and Social Sciences at UTS are together embarking on a project to develop a new undergraduate program emphasizing cross-disciplinary practice across the areas of music, sound and design and as part of that process are holding the UTS Music.Sound.Design Symposium 2008.
The main senior academic figure present was the Dean of the Faculty of Design, Architecture and Building, Theo van Leeuwen. The nitty-gritty organisation was done (I think) mostly by Ben Byrne. Apparently there was only a short lead time for organising the event, yet it ran pretty well.
There were four one-hour keynote addresses, twenty 20-minute presentations, two concerts, a showing of an installation, and three so-called workshops. All of these except the workshops were open to the general public; the workshops were about curriculum and pedagogy, and were restricted to educators and practitioners. In fact everyone who showed up at the Symposium fitted into these categories, so the workshops were also opened to everyone. Since I am no longer teaching, I only attended one of the workshops; I got to most of the other sessions.
Usual disclaimer: what follows is a personal view of a complex event. Apologies to anyone I have misrepresented!
The keynote addresses
The opening address was by Kees Tazelaar, head of the Institute of Sonology in The Hague. The Institute is a unique institution, devoted to sound, with a central focus on electronic and computer music; the name "sonology" was coined when the Institute was set up. Kees talked about several topics related to "classical" electro-acoustic music, including his own compositional methods, his work on the reconstruction of Varèse's Poème électronique, created for the Philips pavilion at the Brussels World Fair in 1958, and the compositional work of Gottfried-Michael Koenig, a computer music pioneer who was a predecessor of Kees's as head of the Institute. As I understand it, the approaches of both Kees and Koenig involve treating processes on an equal footing with source material; the output consists of many layers of processing, where the processes themselves are organised in a manner inspired by serialism.
The second keynote address was by Ernest Edmonds, who runs the Creativity and Cognition Studios at UTS. He is interested in what he called "Art Systems" or "Art Processes", and in particular works where a single abstract underlying process gives rise to both sounds and visuals. He presented an example of an audio-visual work inspired by the colour-field paintings of the 1960s, and presented at a celebration of these paintings. The visuals consisted of vertical bars of colour, accompanied at first by "chuffs" of sound passed through resonant filters. Initially there seemed to be a very simple correlation: each chuff caused a change in the image, but after a while, as the density of chuffs increased, it became clear that something more complicated was happening. I asked Ernest about this, and he said that the piece was in four sections, with a different process in each section. He also said that it went down very well with an audience of colour-field painting buffs.
The Japanese artist Yasunao Tone was the third keynote speaker. His presentation was difficult to follow because of language problems, but nonetheless very interesting. Yasunao was a member of the Fluxus group and has a long history of engagement with experimental art in many media. He talked about some early pieces, including one done as part of a presentation for Volkswagen, where a VW Beetle was wired up with proximity sensors and so on, so that it made various sounds when people approached it, opened the doors, etc. Some of the sounds were very short snippets of the German national anthem. According to Yasunao, the VW executives weren't impressed. Yasunao then talked about a number of works he has done based on text, and in particular on Chinese characters, which of course make up the main part of Japanese writing. He described one work, based on a Chinese translation of a work of Ezra Pound (I think), where each character was represented in several ways: in its modern form, in an older quasi-pictorial form, by an actual picture, by its sound as read by Yasunao (not an expert Chinese speaker, he says), and by the original English word. His aim was to supply the aural and visual elements missing from a normal written translation. He said that the older form of one of the characters was supposed to represent a baby being placed in a river as part of some sort of ritual, and he actually found a picture of a baby being placed in a river. Yasunao is well-known for his "wounded CDs", where he played CDs damaged in various ways, to upset the then much-hyped "purity of digital sound", but he didn't talk particularly about this work. It is clear that he has enormous energy, and he opened his talk by reciting/singing a sound-poem (no words) by (I think) Nam June Paik. He said it was to wake himself up, and it woke up the rest of us too!
I discuss the fourth keynote address, by Julian Knowles, at the end of this post.
The twenty-minute-talks
The twenty-minute talks covered a very wide range of topics. The biggest single group was formed by artists talking about the way they use sound in their art, including audio-visual work, installations with an audio component, sound sculpture, virtual musical instruments (realised on a computer), and so on. Jim Denley talked about recordings he had made of his own improvising in some extraordinary natural spaces in the Buddawang Mountains. There were two talks by builders of physical (as opposed to virtual) instruments: Danielle Wilde described her "hipDisk", which requires the wearer to make dancer-like movements with the body in order to play tunes, and Donna Hewitt described her "eMic", a microphone stand with various controls attached to it, allowing singers to control the processing of their sound.
A couple of the twenty-minute talks were about pedagogy: John Bassett spoke on teaching sound engineering and Densil Cabrera on teaching acoustics. Damien Castaldi discussed the way that radio is mutating into podcasting and webcasting. Stephen Barrass talked about the work his group is doing in data representation, with an emphasis on sonification (representing data as sound; analogous to visualisation).
Another group of talks could be described as historical and critical. There were three talks on sound from a cinematic point of view, in part devoted to historical changes in the way sound has been treated in film. Peter Blamey talked about La Monte Young's idea of listening "inside a sound", and the progressive changes in the sort of sounds that La Monte Young used. Caleb Kelly talked about "cracked media", a movement where a closed medium, for example an LP, was cracked open by melting part of the disc, sawing it up and gluing the pieces back together in a different arrangement, and so on. Computer technology has now made all media open, despite the Digital Rights Management bully-boys.
Finally (in logical order, if not in time order), Mitchell Whitelaw gave a very wide-ranging talk about new media, starting from the distinction made by Hans Ulrich Gumbrecht between "meaning culture" (trying to understand the world) and "presence culture" (body-centric living in the world), and leading on to a dialectic between the immaterial (abstract patterns, bits, data) and the material (the embodiment of these patterns as things we can hear, see, feel). These ideas seemed to be in danger of becoming a theory of everything, but this is work in progress, and it will be very interesting to see what it develops into.
The installation and the concerts - the power of orthodoxy
Robin Fox had an installation which represents a development of his work with oscilloscopes. In the earlier work Robin fed carefully calculated audio signals into an oscilloscope, generating amazing rapidly-changing shapes and patterns. Later Robin used a laser with a green beam in performance, flicking it rapidly all over the room and the audience, again under the control of sound. The installation at UTS was in a blacked-out room, with two of these sonically controlled green lasers, some mirrors, fog from a fog machine, and quite intense sound. The result was pretty impressive, but didn't quite have the impact for me of the laser performance I saw him do a year or two ago.
There were two concerts. Given the wide range of practices discussed during the Symposium, they had a surprisingly narrow scope. The organisers were not really to blame; rather it seems to be the power of orthodoxy. In fact two orthodoxies were represented at the concerts, where by "orthodoxy" I mean not just a genre, but a genre that becomes a normative force: things ought to be done this way; it becomes difficult to break away. The word "orthodoxy" was suggested to me in discussion.
The three pieces presented by Kees Tazelaar come from the (now) academic electronic/computer music tradition, which belongs squarely to art music. The first piece was the short Concret PH by Xenakis, created for the 1958 Philips pavilion along with Varèse's Poème électronique. Then we heard the reconstruction of the Poème électronique worked on by Kees, and a piece by Kees himself, whose title I unfortunately didn't catch, but which was inspired by the phenomenon seen in very cold climates of one's breath crystallising into a cloud of ice particles.
The remaining pieces in both concerts, with the partial exception of Yasunao Tone's, fell under the "laptop performance" orthodoxy. It is a part of this orthodoxy that the only information given to the audience is the names of the performers. The pieces, or sets, don't even have names, and there is nothing resembling program notes. (Kees's piece fell victim to this orthodoxy; there was no information about it in the program.) Of course there are different sub-practices within laptop performance: Donna Hewitt made visible gestures during her engaging performance with the eMic instrumented mike stand; Philip Samartzis played very quiet sounds after a raucous beginning, while Robin Fox's piece was uncomfortably loud; Peter Blamey didn't use a laptop at all, just a mis-wired mixing desk. Nonetheless the pieces did all belong to one relatively narrow practice.
Yasunao Tone's performance was the only one with a visual component. Yasunao had a drawing tablet connected to his laptop and drew a sequence of Chinese characters. I think these constituted a Chinese translation of a poem by Ezra Pound (again, there were no program notes). There was a base sound-track of fairly confused-sounding and harsh noises from many sources. The stylus of the drawing tablet acted to puncture through this base layer and release even louder and harsher sounds. Although the overt (and literary) structure of the piece set it apart from the usual laptop performance, in other respects, especially in the sound world used and the semi-improvised performance, it fitted into the laptop performance orthodoxy very well.
The organisers invited various people to participate in the concerts, and some of those invited work across a range of genres and practices. But it seems that they all automatically went into laptop performance mode, succumbing indeed to the power of the orthodoxy.
The conservatoire model and its inversion
The final keynote address was by Julian Knowles from the Faculty of Creative Industries, Queensland University of Technology. Julian's intentionally provocative talk was about aspects of music education. Among other things, he put up a list of various composition appointments at conservatoria around Australia. The appointees all came from the Western classical tradition, most had studied in England, and the dominant influence was European modernism; indeed some of the people had studied with key figures in this movement. The only real exception was the now defunct Music Department at La Trobe University, which had strong links with the University of California at San Diego.
In this context, Julian put up quotations from conservative figures in Australian music asserting that the only sort of music education worthy of the name was education in the Western classical tradition.
Julian went on to list various features of what he called the "conservatoire model". I didn't catch them all (there were a lot), but the starting point was that composition and performance are distinct activities carried out by different people, and recording is a third distinct activity, carried out by technicians rather than creative people. Julian then systematically inverted all the features of the conservatoire model, so after this inversion the same person is both composer and performer, recording is a creative activity carried out as often as not by the performer/composer, and so on. Julian argued that this inverted model is the reality of today's practitioners.
When Julian was at the University of Western Sydney he was involved in the Electronic Arts program there (now closed, along with much of the rest of the art program at UWS). The program attempted to address this new reality in its course structure. For example, traditional music notation was not a prerequisite. Julian put up a collage of alternative notations, such as waveform displays, track layouts in ProTools, a Max patch, and so on. Julian argued that if a student needs traditional notation, it should be available, but not everyone needs it.
Julian also made the point that thanks to the wide spread of music-making technology, the institutions are no longer the gate-keepers for innovation in electronic or computer music. The institutions can certainly act as creative centres, but they are no longer the only source.
I did have the feeling that there was a certain amount of stigmatisation of conservatoria during the symposium, and after Julian's talk I was tempted to leap up and say that my MMus portfolio at the Sydney Conservatorium exemplifies several of the practices discussed at the symposium, and contains no traditionally notated pieces. But of course Julian is largely correct. The core mission of the Sydney Conservatorium is to train the next generation of classical music performers, and the Conservatorium has close industry links with the Sydney Symphony Orchestra and other such organisations. The other multifarious activities of the Conservatorium—composition, musicology, music education, music technology, research into music pedagogy and performance—are all seen as ancillary. Of course the Conservatorium orchestra must play Tchaikovsky, as the students must be able to perform Tchaikovsky as part of their professional training, to summarise a conversation I overheard. It doesn't matter that access to an orchestra is essentially impossible for a student composer. Playing Tchaikovsky is the reality of the industry.
Julian's talk was the last activity in the pedagogical strand of the conference. I didn't really engage with this, as I am not now involved in teaching, but I was aware of some of the undercurrents. Remarkably, although three of the four keynote speakers describe themselves as composers, there was not very much discussion of music (and I include jazz, pop, rock, world music, electronica, hip-hop,...). Also I heard no overt discussion of design at all, and I don't know what the word means in this context. It was suggested to me that design in some sense underlies all of the symposium topics, though surely any discipline worthy of the name has a systematic methodology.
Thus there was an impression that the symposium was really about sound, and that music and design were secondary. Another topic was whether computer programming should be taught, and if so in what form. Tom Ellard (rock muso, electronic music pioneer, audio-visual artist) put up a page from Schoenberg's harmony textbook and said that this should be taught before programming. But then Tom made an ambit claim for "music" to include all art forms, including painting and architecture.
Finally the question arose as to whether the proposed course will just be a collection of unrelated units, or whether there is a coherent disciplinary core. It was suggested to me that historical and critical studies might provide such a core. The course is at a very early stage of construction, and the symposium was not expected to provide final answers to such questions. It will be interesting to see what is taught in 2010, when the course is planned to start.
For me the value of the symposium was what I hoped it would be: encountering a wide range of views from a collection of very interesting people!
In this context, Julian put up quotations from conservative figures in Australian music asserting that the only sort of music education worthy of the name was education in the Western classical tradition.
Julian went on to list various features of what he called the "conservatoire model". I didn't catch them all (there were a lot), but the starting point was that composition and performance are distinct activities carried out by different people, and recording is a third distinct activity, carried out by technicians rather than creative people. Julian then systematically inverted all the features of the conservatoire model, so after this inversion the same person is both composer and performer, recording is a creative activity carried out as often as not by the performer/composer, and so on. Julian argued that this inverted model is the reality of today's practitioners.
When Julian was at the University of Western Sydney he was involved in the Electronic Arts program there (now closed, along with much of the rest of the art program at UWS). The program attempted to address this new reality in its course structure. For example, traditional music notation was not a prerequisite. Julian put up a collage of alternative notations, such as waveform displays, track layouts in ProTools, a Max patch, and so on. Julian argued that if a student needs traditional notation, it should be available, but not everyone needs it.
Julian also made the point that thanks to the wide spread of music-making technology, the institutions are no longer the gate-keepers for innovation in electronic or computer music. The institutions can certainly act as creative centres, but they are no longer the only source.
I did have the feeling that there was a certain amount of stigmatisation of conservatoria during the symposium, and after Julian's talk I was tempted to leap up and say that my MMus portfolio at the Sydney Conservatorium exemplifies several of the practices discussed at the symposium, and contains no traditionally notated pieces. But of course Julian is largely correct. The core mission of the Sydney Conservatorium is to train the next generation of classical music performers, and the Conservatorium has close industry links with the Sydney Symphony Orchestra and other such organisations. The other multifarious activities of the Conservatorium—composition, musicology, music education, music technology, research into music pedagogy and performance—are all seen as ancillary. To summarise a conversation I overheard: of course the Conservatorium orchestra must play Tchaikovsky, as the students must be able to perform Tchaikovsky as part of their professional training. It doesn't matter that access to an orchestra is essentially impossible for a student composer. Playing Tchaikovsky is the reality of the industry.
Julian's talk was the last activity in the pedagogical strand of the conference. I didn't really engage with this, as I am not now involved in teaching, but I was aware of some of the undercurrents. Remarkably, although three of the four keynote speakers describe themselves as composers, there was not very much discussion of music (and I include jazz, pop, rock, world music, electronica, hip-hop,...). Also I heard no overt discussion of design at all, and I don't know what the word means in this context. It was suggested to me that design in some sense underlies all of the symposium topics, though surely any discipline worthy of the name has a systematic methodology.
Thus there was an impression that the symposium was really about sound, and that music and design were secondary. Another topic was whether computer programming should be taught, and if so in what form. Tom Ellard (rock muso, electronic music pioneer, audio-visual artist) put up a page from Schoenberg's harmony textbook and said that this should be taught before programming. But then Tom made an ambit claim for "music" to include all art forms, including painting and architecture.
Finally the question arose as to whether the proposed course will just be a collection of unrelated units, or whether there is a coherent disciplinary core. It was suggested to me that historical and critical studies might provide such a core. The course is at a very early stage of construction, and the symposium was not expected to provide final answers to such questions. It will be interesting to see what is taught in 2010, when the course is planned to start.
For me the value of the symposium was what I hoped it would be: encountering a wide range of views from a collection of very interesting people!
Tuesday, February 19, 2008
Musicophilia
My father, I believe, was tone deaf. He certainly showed no interest at all in music, and once, when I played him a major and a minor chord, he said he could not hear any difference. I have had people tell me that there is no such thing as tone deafness, but now I read in the latest book by Oliver Sacks that perhaps five percent of the population is tone deaf. The book is entitled Musicophilia: Tales of Music and the Brain (Picador, 2007), and covers an extraordinary range of conditions and phenomena.
Some of the conditions Sacks describes are very rare or unique, such as the extraordinary case of a composer, who, after being seriously injured in a car crash, lost her ability to hear harmony. She describes listening to a Beethoven string quartet: "I heard four separate voices, four thin, sharp laser beams, beaming in four different directions". Other conditions discussed in the book are commoner, such as tone deafness. My father had a good sense of verbal rhythm, enjoyed poetry and wrote it himself. According to Sacks this is certainly neurologically compatible with tone deafness, as rhythm is "represented widely in the brain". Another condition that increasing numbers of us can look forward to as we age is musical hallucinations: hearing music that apparently comes from an outside source. At the onset, people who get this think that someone is playing a radio or CD nearby; the experience is quite different from that of mentally singing a tune. Musical hallucinations are associated with going deaf. The explanation is apparently that those parts of the brain receiving aural signals expect a continual stream of input, and if they don't get it, they produce activity anyway. With normal hearing, silence doesn't produce these hallucinations, as the auditory system actively reports silence. It is only if the communication is broken that the hallucination-generating mechanism kicks in.
Sacks doesn't only consider defects in musical perception or appreciation. He also describes unusual positive abilities, including perfect pitch (something I don't have a trace of). Sacks reports studies by Diana Deutsch showing that native speakers of a tonal language will pronounce words with close to absolute pitch, and that Chinese music students are much more likely to have perfect pitch than U.S. students. Sacks also describes cases of musical savantism, where people who are seriously disabled in many ways display some extraordinary ability, such as the man who knew by heart more than 2000 operas and all the Bach cantatas. These savant abilities usually come at the expense of abstract thought. Sacks mentions the intriguing work by Allan Snyder and others in artificially producing temporary savantism by magnetic stimulation of the brain: inhibiting the activity of the part of the brain responsible for abstract thought can release savant-like abilities in at least some people.
There is a lot more in Sacks's book. It isn't a neurological treatise; as the subtitle Tales of Music and the Brain indicates, it is largely anecdotal. It raises questions rather than answering them, and I think that in general there aren't any answers yet as to why we as a species have such an extraordinary sensitivity to music, though Sacks mentions speculation about the intertwined origins of music and language. Anyone interested in music is likely to be fascinated by this book.
Sunday, February 3, 2008
2007 Asian Art Biennial
Recently I received a catalogue from the 2007 Asian Art Biennial, which opened in October 2007 at the National Taiwan Museum of Fine Arts. It closes on February 24. My connection with the event is through my piece Dissonant Particles; it is on the DVD Video by Numbers from the Melbourne-based group Tape Projects, and this DVD was shown at the Biennial. Unfortunately I didn’t get to visit the Biennial.
I found the catalogue (in Chinese and English) impressive, and I imagine that actually being there would have been overwhelming. Judging from the catalogue, there was a mixture of old and new media and also a mixture of old and new influences. Some of the works refer to traditional Chinese or Korean art (and one to Da Vinci's Last Supper), others to globalisation, the proliferation of digital technology, and in particular urbanisation. Australia has long had only a minority of its population in rural areas (41% in 1901, dropping to just 12% in 1996, according to one set of figures). For Asia the big movement off the land is happening now: the milestone at which more than half the world's population lives in cities has either just been reached or is just about to be, according to various estimates.
According to the catalogue, most of the artists in the Biennial were from Taiwan, mainland China or Korea, with a sprinkling from other countries, including a small group from Australia. Apart from the Video by Numbers DVD, there was an interactive installation entitled Split Reel by Jason Bond, Benjamin Ducroz, Michael Prior and Tarwin Stroh-Spijer, and a performance (I think, not just a DVD playing) by Robin Fox. Robin is the man of the moment in Australian sound-and-image art, and appears on the Synchresis DVD commented on here. Australian ambivalence as to how much we are part of Asia will no doubt continue, but links like those through the Biennial can only be good.
Monday, January 28, 2008
The "Synchresis" DVD
The Australian Network for Art and Technology (ANAT) has produced a DVD entitled "Synchresis", distributed with the Summer 2007 edition of ANAT’s magazine Filter. The DVD contains ten recent pieces from Australian audiovisual artists, including my piece Triangular Vibrations. The associated issue of Filter includes an essay "Monsters and Maps" by Mitchell Whitelaw, who curated the DVD, and articles by three of the artists represented on the DVD, namely Robin Fox, Jean Poole and Botborg.
"Synchresis" was not a word I knew; Mitchell's essay informs me that it was coined by the French theorist Michel Chion, and defined by him to mean "the spontaneous and irresistible weld produced between a particular phenomenon and visual phenomenon when they occur at the same time". The word only makes sense in a technological culture, as before the invention of the telephone, the phonograph, and ultimately the movie soundtrack, there was no way of separating a sound from its physical cause. Now we can associate any sound with any image, producing hybrid experiences that have no counterpart in reality, the "monsters" of Mitchell's title.
Here I wish to make a few comments on the relationships between the sounds and the images in the works on the DVD. Inevitably these comments recapitulate Mitchell's to some extent.
Most of the pieces lie at one of two extremes of a possible spectrum: either the sounds and images are very tightly coupled or they are apparently totally disconnected. From this point of view Peter Newman's Rosebud is the most conventional piece: crackly sounds to go with (slowly revealed) fiery imagery, a background of synthesiser sounds for ambience; so a loose coupling based partly on convention. I should say that it is only in the relation of sound and image that this work is conventional; in other ways it is extraordinary.
In three of the "tightly coupled" pieces the connection between sound and image is literally mechanical, via hardware. Robin Fox’s piece Immaculate Infection is created by feeding carefully crafted audio signals into an oscilloscope; an ordinary diagnostic technique is elevated to an artform. The other two "mechanical" pieces involve perversions or unintended uses of equipment: Andrew Gadow’s Technè-Auxons uses video signals from a Fairlight CVI (Computer Video Instrument) as audio; Botborg feed video signals into audio inputs and vice versa, creating complex and unstable results.
In the two remaining pieces with tightly coupled sound and image the connection is more abstract. Here we encounter the "maps" of Mitchell's title. There has been much discussion about processes creating images from sounds (and the other way round); an elaborate example is here. At the simplest level any such process requires the specification of a map (function, correspondence, mapping) that takes some aspect of sound as input and produces a colour or other visual component as output. Maps need not be from one perceivable object (sound or image) to another; the fields of sonification and visualisation take abstract data in many forms (stockmarket prices, brainwave signals, rainfall) and map the data to either sounds or images. The choice of map is crucial. In 2004 I made a sonification of brainwave data for a concert of sonifications, and much of the creative work went into developing the mappings.
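To make the idea of a "map" concrete, here is a minimal sketch in Python. It is purely illustrative and not taken from any of the works discussed: the function names and the choice of parameters (a logarithmic frequency scale, an 8-bit grey level) are my own assumptions. The point is simply that one abstract data stream can be mapped in parallel to a sound parameter and to a visual parameter.

```python
def map_to_frequency(x, lo=110.0, hi=880.0):
    """Map a normalised data value in [0, 1] to a frequency in Hz.
    The spacing is logarithmic, so equal data steps are heard as
    equal musical intervals (here spanning three octaves, A2 to A5)."""
    return lo * (hi / lo) ** x

def map_to_grey(x):
    """Map the same normalised value to an 8-bit grey level
    (0 = black, 255 = white)."""
    return round(255 * x)

# One stream of abstract data (it could be brainwave samples,
# stockmarket prices, rainfall...), two parallel mappings:
data = [0.0, 0.25, 0.5, 0.75, 1.0]
freqs = [map_to_frequency(x) for x in data]
greys = [map_to_grey(x) for x in data]
```

The creative decisions live entirely in the choice of mappings: a linear rather than logarithmic pitch scale, or a colour rather than a grey level, would give the same data a quite different audiovisual character.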
My piece on the "Synchresis" DVD has an underlying abstract process which generates both the sound and the image. So two maps are required, one from the process to the sound and the other from the process to the image. My process was a mathematical model representing an idealised drum, and my mappings followed the physics closely, though not slavishly. The other work that depends on mappings is Julian Oliver's and Steven Pickles's Fijuu2, a piece of software allowing a user to build up electronic music tracks in layers using a video game controller. Each sound has a different visual counterpart that reflects changes in the sound by becoming more or less elaborate. It appears that each sound is controlled by a few numbers and has an associated pair of mappings, one from the numbers to the actual sound we hear, and another from the same numbers to the visual associated with the sound. The maps appear more arbitrary than mine in Triangular Vibrations. The DVD track for Fijuu2 is a demo of the software.
It is hard to comment on the four pieces on the DVD with no close relationship between sound and image. The connections are made in the viewer’s brain, and the pieces need to be experienced rather than described.
The style or mood of the various pieces was quite unrelated to the degree of coupling of sound and image. Several pieces had what I think of as an epileptic aesthetic, very flickery and jerky, including the pieces by Robin Fox and Ian Andrews, and especially Peter Newman's, which is extraordinarily intense; I found it physically difficult to watch. These three pieces span the spectrum from tight coupling to no apparent coupling. Again, I would call my own piece slow but intense; Wade Marynowski's piece is also slow, but laid back; it appears to be a record of a live performance, and the slow tempo may have been dictated by the demands of real-time video processing. [Edit: I was wrong about this. Wade's piece is basically a recording of a live performance, but Wade tells me that he could have made it frenetic; the relaxed pace is an artistic decision.] My piece has tight coupling; Wade's has very loose coupling. The decision of the creator about the degree of audio-visual coupling is a meta-aesthetic decision, with indirect effect on the surface sensory qualities of the artwork.
What the pieces on the DVD do have in common is that they are audiovisual: they take seriously the relationship between sound and image. As Mitchell points out, the DVD explores territory nearer to sound and music than to video art. I found the DVD and Mitchell’s essay illuminating both in bringing together the actual works and in providing a framework to understand them and their context.