Friday, November 2, 2012
I am one of six authors of a paper with the title "Ten Questions Concerning Generative Computer Art", which has been accepted by the journal Leonardo, though it will be a while before it appears in print. A PDF version is available here.
Wednesday, October 3, 2012
Extraordinary exhibition of Australian botanical art
(I am interested in this because I live in Ballarat and am involved with the gallery.)
The Art Gallery of Ballarat has mounted an extraordinary exhibition of Australian botanical illustration, the most comprehensive ever in Australia. There are over 400 works, going back as far as William Dampier's visit to the West Australian coast in 1699 and forward to contemporary work. I found particularly interesting a sequence of works by Celia Rosser showing the stages of a drawing of a Banksia (Banksia aculeata), from the initial field sketch right through to the completed work. The delicate drawings of microscopic details of mosses by Lauren Black are also very attractive. As the exhibition points out, certainly from the mid-19th century women artists played a very important role in botanical illustration, first as amateurs and then as professionals.
The English and French exploratory voyages at the end of the 18th century led to Australian plants being cultivated in England and elsewhere in Europe. I was startled to see an account of the Gymea Lily (which grows wild around Sydney) in the botanical gardens of St Petersburg. Eventually the Australian colonies became well-established enough to have their own scientific establishments and produce their own botanical prints, and here Ferdinand von Mueller, scientist and first Director of the Melbourne Botanical Gardens, led the way in promoting botanical illustration. The exhibition has works representing all of the stages of interest in Australian plants, and also shows how the various techniques used for reproducing the images changed over time.
In conjunction with the exhibition there is a lavishly illustrated book with six substantial essays.
The exhibition and the book have been prepared by the Art Gallery of Ballarat, and the majority of the works come from the Gallery's own collection - altogether a remarkable achievement.
Where: Art Gallery of Ballarat, 40 Lydiard Street (North), Ballarat.
When: Until December 2, 2012.
Hours: 9am - 5pm every day.
Price: Full $12, concession $8, members of the Art Gallery of Ballarat Association have free entry.
(Entry to the permanent collection of the Gallery is free.)
Sunday, August 26, 2012
J.W. Power's Book on the Mathematics of Pictorial Construction
J.W. Power was an Australian artist who, while at one time very significant, seems to have fallen out of the history books. Born in 1881, he studied medicine at the University of Sydney, and then moved to London in 1907 for further study. He was a military surgeon during World War I, and after the war decided to become a full-time artist, spending much time in Paris. He became a member of the avant-garde group Abstraction-Création in company with other leading artists in Paris, and was involved with both cubism and surrealism.
Power left a large sum of money to the University of Sydney for the purpose of making the latest ideas and theories in art available in Australia through lectures and through the purchase of artworks. The bequest eventually led to the establishment of the Museum of Contemporary Art in Sydney as well as the Power Institute within the University of Sydney. Power as an artist is currently attracting attention, with a major Power exhibition, in the form of a recreation of Power's solo exhibition in Paris in 1934, coming up at the University of Sydney's art gallery.
My interest in Power comes about because in 1932 he published a geometrically-based book (in Paris; both French and English versions appeared). The English version has the title The Elements of Pictorial Construction: A Study of the Methods of Old and Modern Masters. I recently had a chance to examine copies of both versions at the University of Sydney, thanks to Anthony Green, Senior Librarian of the Schaeffer Library and Ann Stephen, Senior Curator of the University of Sydney Art Collection.
I also found that the whole work (in French and English) is available online through the National Library of Australia (go to http://catalogue.nla.gov.au/ and search for "elements of pictorial construction"). However, the physical book has a unique feature: six pockets at the back, each of which contains a photograph of an artwork and one or more transparent sheets with lines drawn on them, which are to be placed over the photograph according to instructions in Power's text. These reveal features of the construction of the work, according to Power's analysis.
What is the basis of Power's work?
Power considers that he is recovering a method of construction used by the Old Masters. He starts from the idea that all the significant points in a masterpiece of painting are carefully placed. He draws horizontal and vertical lines from each such point to the edges of the painting, and considers in what proportion the edges are divided, expecting this to be significant. Apart from the midpoint of the edge, the Golden Section point (approximately 0.618) is a natural choice, but Power introduces others, including a point on the long side which is √2, or about 1.414, times the length of the short side, this being the diagonal of the square on the short side. Then Power has a method of "transfer": for example, take the distance between the √2 point and the end of the long side, and mark off this distance on the short side. He even makes a second transfer of the remaining distance on the short side back to the long side. The result is a large number of horizontal and vertical lines (27 horizontal and 16 vertical in the case of Raphael's "Mond" Crucifixion, the subject of Power's first detailed analysis). Power also identifies an equal-sided nonagon (nine-sided figure) connecting significant points within the painting, and a hexagon surrounding the painting.
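To make the arithmetic concrete, here is a minimal sketch of how these edge divisions might be computed. It is my own reconstruction for illustration, not Power's notation, and the canvas proportions are invented.

```python
import math

# Sketch of Power-style edge divisions (my reconstruction for illustration,
# not Power's own notation); the canvas proportions below are invented.
def edge_divisions(long_side, short_side):
    points = {}
    points["midpoint"] = long_side / 2
    # Golden-section point: divides the long side roughly 0.618 : 0.382.
    phi = (math.sqrt(5) - 1) / 2
    points["golden"] = phi * long_side
    # Power's sqrt(2) point: the diagonal of the square on the short side,
    # marked off along the long side.
    points["sqrt2"] = math.sqrt(2) * short_side
    # First "transfer": the leftover piece of the long side beyond the
    # sqrt(2) point, marked off on the short side.
    leftover = long_side - points["sqrt2"]
    points["transfer_to_short"] = leftover
    # Second transfer: the remainder of the short side taken back to the long side.
    points["transfer_to_long"] = short_side - leftover
    return points

# An upright format 1.0 high by 0.7 wide (proportions invented).
for name, value in edge_divisions(1.0, 0.7).items():
    print(name, round(value, 3))
```

Each value is a distance along one edge at which, on Power's account, a significant horizontal or vertical line of the composition should fall.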
Power also has an idea of "movable format", the same configuration appearing in different parts of the one painting. He applies this to a Last Judgement by Rubens, with the movable configuration consisting of a perspective drawing of a cone with an inscribed square pyramid, which he sees as being used in approximately eight different positions. Having an actual piece of cellophane to move about is a great help! According to Power, the positions are not arbitrary, but each is obtained by rotating the configuration about a specific point, or by a similar such move.
There are some other ideas about construction in the book as well, but the division of the edges of the painting and the "movable format" are the most important.
What do I think of the book?
Power sticks very closely to his subject of construction and does not consider colours in the book, let alone things like symbolic content; there is a complete absence of mystical waffle. The book is also clearly written and generally easy to follow. However, in my view Power has not made a convincing case for what he claims.
It is uncontroversial that large paintings were frequently first done at small scale and then transferred to a wall or panel by some process of drawing up a grid. However, as far as I know the grid was made up of equally spaced lines (extant examples indicate this). Also, although the golden section was known and used, again as far as I know there is no historical evidence for things like Power's √2 point, let alone his "transfers" of such points.
So if there is a lack of historical evidence, the evidence must be internal to the works; indeed Power says: "[The Old Masters'] studies and sketches handed down to us show very few traces of these methods, while the finished pictures show a great many."
My first comment is that Power's method is on the whole not perceptually based. We know that a division of a line in a ratio somewhere in the range 3/5 to 2/3 is perceptually attractive, and this explains the photographer's "rule of thirds". But there is no reason to suppose that the exact value 0.61803... of the golden ratio is perceptually markedly better than say 0.625 (which is 5/8). There is no perceptual reason for things like the "transfer" of twice the short part of the golden section division of the long side (which occurs in Power's analysis of Raphael's Disputa).
So we are dealing with Power's geometrical ingenuity rather than perceptual givens, and Power has provided an array of lines and construction methods that I suspect can yield a good approximation to any ratio. The question is then whether the Old Masters used geometrical ingenuity in a similar way. It is not out of the question for a Renaissance master to be playing mathematical games in a painting, certainly with the golden section, but Power has not made a good case for the Old Masters to be playing his mathematical games. There is a range of geometrical ideas that Power could have discussed and didn't. Power doesn't provide the derivations for all of the lines in his analyses, but it is notable that he doesn't discuss equal divisions beyond the midpoint: there is no mention of thirds, quarters, fifths, etc. It is also notable that there is essentially no mention of the possibility of the same configuration occurring at different scales within the same painting. In geometrical terms, Power is concerned far more with congruence than with similarity. And there are always further possibilities: for example Power introduces ellipses but doesn't mention their focal points.
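To give that suspicion some substance, here is a toy experiment of my own (nothing like it appears in the book): start from a few seed ratios, allow a couple of rounds of complements and transfers between a long side of 1 and a short side of 0.7, and see how close the resulting division points come to an arbitrarily chosen target ratio.

```python
import math

# Toy experiment (mine, not Power's): generate division ratios from a few
# seeds by repeatedly taking complements and "transferring" distances between
# a long side of 1.0 and a short side of 0.7.
def generated_ratios(short=0.7, rounds=2):
    ratios = {0.5, (math.sqrt(5) - 1) / 2, math.sqrt(2) * short}
    for _ in range(rounds):
        new = set()
        for r in ratios:
            if 0 < r < 1:
                new.add(1 - r)                # measure from the other end
                new.add(min(r / short, 1.0))  # transfer to the short side, rescaled
                new.add(r * short)            # transfer back to the long side
        ratios |= new
    return sorted(r for r in ratios if 0 < r < 1)

rs = generated_ratios()
target = 0.437   # an arbitrary ratio
closest = min(rs, key=lambda r: abs(r - target))
print(len(rs), "ratios generated; closest to", target, "is", round(closest, 3))
```

Even after two rounds the set contains a couple of dozen division points, and further rounds fill in the interval more; that is the sense in which such a toolkit can come close to almost any proportion one cares to find.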
The "movable format" idea is intriguing; certainly for a swirling composition like the Last Judgement by Rubens, something like it appears more appropriate than fixed horizontal and vertical lines. However, I found the use of a pyramid drawn in perspective problematic. Power is not attempting to construct a three-dimensional model of the space implied by the painting; he is simply moving the perspective drawing of the pyramid as a flat object around the surface of the painting. This makes nonsense of any perspective within the drawing of the pyramid. On the whole, Power isn't much concerned with perspective. Raphael's Disputa has lines near the bottom whose spacing is determined by perspective, not by the sort of division that Power is interested in; Power doesn't consider these lines in his analysis.
I think that Power has driven his methods much too far, seeing things that are not there; I suspect that similar methods could provide quite different analyses of the same painting. The choice of significant points and lines is of course up to the judgement of the interpreter, and I am reluctant to challenge Power on this, but in Raphael's Crucifixion, for me the nail through Christ's feet is a prominent point in the painting; Power doesn't mention it, though he does mention other less prominent points. Then I could draw diagonal lines through the angels' feet and the nail to the faces of the kneeling figures, and start looking for similar triangles, and so on. If it is possible to give two quite different geometric analyses of the same work, both are likely to be illusory. I was somewhat more convinced by Power's discussion of cubist construction, since in cubist work it is reasonable to find both simple geometric shapes and the use of plan and elevation as described by Power.
As an aside, although the title The Elements of Pictorial Construction brings to mind Euclid's Elements of Geometry, Power is not using Euclid's methods (and does not claim to use them). The nine-sided figure Power finds in Raphael's Crucifixion cannot be constructed exactly by ruler and compass. Also Power has not read all of Euclid's Elements: he refers to the "dodecahedron and icosahedron, the proportions and structure of which had then [by the later Renaissance] been worked out". But the dodecahedron and icosahedron are discussed by Euclid.
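For readers who want the algebraic reason behind the nonagon remark (my gloss, not Power's): constructing a regular nonagon amounts to constructing the 40° angle, i.e. the number cos 40°, and a ruler-and-compass constructible number must have degree a power of two over the rationals.

```latex
\cos 3\theta = 4\cos^{3}\theta - 3\cos\theta, \quad \theta = 40^{\circ}
\;\Longrightarrow\; 4c^{3} - 3c = \cos 120^{\circ} = -\tfrac{1}{2}
\;\Longrightarrow\; 8c^{3} - 6c + 1 = 0 .
```

This cubic has no rational roots, so cos 40° has degree 3 over the rationals, which is not a power of two; hence the vertices of a regular nonagon cannot all be constructed exactly with ruler and compass.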
It is interesting to consider what has happened in mathematics since Power's time. Power's geometric viewpoint comes across today as far too rigid. Power was not a professional mathematician, and it is not fair to the history of mathematics to take him as representative of his era, but certainly an emphasis on more qualitative approaches such as topology has become stronger. Topology was already well established by the 1930s, but may not have been accessible to people in Power's position, or may not have been seen as relevant. However, D'Arcy Thompson's On Growth and Form, published in 1917, might have caught Power's attention. Since Power's time a major impetus away from simple formulas has come from the computer. Chaos theory, fractals, and things like strange attractors and percolation theory have all given us new geometric objects that are quite different from the cubist cone and cylinder; although these new forms have been given solid mathematical underpinnings, the computer has helped in the discovery and investigation of the phenomena. And in general the absence of simple formulas in an area of investigation is much less of an obstacle than it was.
Thursday, July 19, 2012
Is Electromagnetism a Fascist Theory?
Unfortunately I'm not joking.
Last week I attended the 2012 conference of the Art Association of Australia and New Zealand in Sydney. The final presentation was by a distinguished American curator, Dr Helen Molesworth, on the work of Josiah McElheny, an American artist who uses traditional glass-blowing techniques to make large, more or less abstract sculptures. Dr Molesworth's presentation was well-crafted and very interesting and engaging. McElheny has referenced science, in particular cosmology, in his more recent work, and towards the end of her presentation Dr Molesworth referred to the possibility of a "Theory of Everything", and said she didn't want a Theory of Everything: it would be fascist.
At the drinks and nibblies afterwards I asked Dr Molesworth if the theory of electromagnetism was fascist. She said she didn't know: she didn't have enough information. This startled me, and I said that the name "Theory of Everything" was a physicists' in-joke (referring as it does to a single theory that would bring together gravity and quantum mechanics), and that it certainly wouldn't produce a theory of the psyche or a theory of art. I also tried to say that the value of a scientific theory was to be found through observation, deduction and explanatory power, not political attitudes, but I don't think I explained myself well; she disagreed with what she heard me as saying.
I chose electromagnetism as it is a successful example in physics of the unification of previously disparate phenomena, and it would be part of a Theory of Everything, but I didn't explain this during our brief conversation.
I don't want to make too much of this, as it was a noisy environment and not a good time for serious discussion. I hope that we were talking at cross-purposes, but surely even to consider that the theory of electromagnetism might be fascist is to be grievously confused about the nature of theories in physics.
I was regrettably reminded of one of the silliest episodes in post-modernism, the (by now widely quoted) statement by the otherwise respected feminist scholar Luce Irigaray about Einstein's equation E = mc²:
Is E=Mc² a sexed equation? Perhaps it is. Let us make the hypothesis that it is insofar as it privileges the speed of light over other speeds that are vitally necessary to us. What seems to me to indicate the possible sexed nature of the equation is not directly its uses by nuclear weapons, rather it is having privileged that which goes faster.
I have seen attempted defences of this sort of writing along the lines that although the people writing like this appear to be using scientific terms, they actually have different meanings in mind for the words. I have not seen the source for the quotation from Irigaray above, but it is hard to give it any reasonable interpretation.
I did find a related article by Irigaray, "Is the Subject of Science Sexed?", Cultural Critique No.1 (Autumn 1985), pp. 73-88. In this Irigaray comments on a range of sciences, from psychoanalysis through biology to mathematics and physics, and by the time she gets to mathematics it is clear that she is using terminology from the subject without understanding it.
There is of course room for a serious study of what scientists do, what biases they bring to their work, who funds it, what questions are studied and what are not, and so forth, and indeed many scientists themselves are very much concerned about these questions. But statements like Irigaray's had the effect of bringing the whole area of so-called science studies into disrepute, and calling physical theories "fascist" does not help.
Friday, June 29, 2012
Sydney Non Objective (SNO) in Sydney, July
In July I will be part of a four-person group exhibition at Sydney Non-Objective in Marrickville, Sydney. All four of us are abstract artists. The show is being put together by Wendy Kelly, and the artists are Wendy, Louise Blyton, Magda Cebokli and myself.
Sydney Non-Objective: First Floor, 175 Marrickville Rd, Marrickville, Sydney
Tel: +61-2-9560-3470 Email: info@sno.org.au Web: www.sno.org.au
Opening: 3pm Saturday 7th July 2012.
Exhibition: 8th - 29th July 2012.
SNO Gallery hours: 12 - 5 Friday-Sunday or by appointment.
Sunday, June 17, 2012
The Popularity of Programming Languages
Over the last few months I have been renovating my programming practice. Although I don't intend to change the language I use, I thought I would see which are the most popular languages, and try to look at them reasonably objectively. Popularity isn't everything, but it is important. A popular language will have more and better tools available, more books (usually) and more help available on the Internet. It will also have more and better programming libraries available, and that can make a critical difference.
For the sort of programs I write, I need a general-purpose object-oriented language that is reasonably fast. (More on "object-oriented" below.)
There is a lot of more or less contradictory information on the Internet about which language is most popular. Two sites that track popularity, according to their own specific measures, are the TIOBE index (http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html) and The Transparent Language Popularity Index, so-called because all its data is exposed (http://lang-index.sourceforge.net/). Both sites give the top eleven languages (as of June 2012) as:
C, Java, C++, Objective-C, C#, Visual Basic, PHP, Python, Perl, Ruby, JavaScript
though the ordering differs.
Of the top eleven, PHP is a special-purpose language for web servers, and JavaScript (which has very little to do with Java) is mainly used in web pages. The others fall into two groups: general-purpose programming languages (C, Java, C++, Objective-C, C#, Visual Basic) and so-called scripting languages (Python, Perl, Ruby). Scripting languages were originally designed to make it easy to write very small programs, to do things like take the output of one program, change its format, and feed it into another program. Scripting languages have tended to evolve towards being general-purpose, but they are still meant to be more "light-weight" than languages like C++.
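As an illustration of the kind of "glue" job scripting languages were originally built for, here is a sketch in Python; the columns chosen and the file names in the usage line are of course invented.

```python
import sys
import csv

# Tiny "glue" script: read whitespace-separated output from one program on
# standard input, keep the first and third columns, and emit CSV for the
# next program. (The choice of columns is an invented example.)
writer = csv.writer(sys.stdout)
for line in sys.stdin:
    fields = line.split()
    if len(fields) >= 3:
        writer.writerow([fields[0], fields[2]])
```

It would be run as something like someprogram | python reshape.py > out.csv.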
Among the general-purpose languages in the top group, C is the odd one out. It is by far the oldest (dating from 1973) and it is the only one that is not object-oriented. Object-orientation is a style of language design that allows a very useful division of a program into chunks; ideally these correspond well to concepts in the area the program is dealing with (whether it be a computer game or a telephone exchange). C was designed for writing operating systems; it was intended to be small, fast, portable between different makes of computer, and "close to the machine", allowing low-level operations directly. Remarkably, C is pretty much tied for first place with the much more recent language Java (from 1995).
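For readers who haven't met object-orientation, here is a minimal sketch of such a "chunk", written in Python for brevity; the bank-account concept is just an invented example.

```python
# A minimal object-oriented "chunk": the data and the behaviour for one
# concept (here a bank account, purely as an invented example) live together.
class Account:
    def __init__(self, owner, balance=0.0):
        self.owner = owner
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

acct = Account("Alice")
acct.deposit(100.0)
acct.withdraw(30.0)
print(acct.owner, acct.balance)   # Alice 70.0
```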
The object-oriented languages fall into two groups. Java, C# and Visual Basic (at least in its .NET form) all run in so-called managed execution environments, which are intended to insulate the machine from bad behaviour on the part of the program. The penalty is slower speed and increased memory usage, since there is a lot of extra checking, though Java doesn't do too badly as far as speed is concerned. The other two, C++ and Objective-C, are both based directly on the C language, and like it don't have managed environments. C++ (in particular) is for computational tasks about as fast as any language gets, and about as frugal with memory.
As for the way objects are handled, from what I know C++ differs from the rest: C++ handles objects directly, while the others use some form of indirection (essentially pointers).
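A small illustration of the indirection point, again in Python, where (as in Java and C#) a variable holds a reference to an object rather than the object itself:

```python
# Assignment copies the reference, not the object, so both names see the change.
a = [1, 2, 3]
b = a            # b now refers to the same list object as a
b.append(4)
print(a)         # prints [1, 2, 3, 4]: there is only one list
# In C++, by contrast, assigning one object to another copies the object itself
# by default, unless pointers or references are used explicitly.
```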
So, currently there are five popular general-purpose object-oriented programming languages: Java, C++, Objective-C, C# and Visual Basic.
If we look at vendors, C# and Visual Basic are associated with Microsoft and Objective-C is associated with Apple, though for Objective-C the compilers are open source. Java was developed by Sun Microsystems, now taken over by Oracle; Oracle has been trying to assert patent rights over Java, though much of it was released under an open source licence. C++ was originally developed by AT&T, but they seem to have been generous from the beginning in sharing it, and the compilers are now open source projects.
There are other general-purpose object-oriented programming languages. The two popularity indices mentioned above differ wildly in their orderings of the languages outside the top group, though the next most popular appears to be Delphi (also known as Object Pascal), which was associated with Apple at one time and has been around since 1986. (The underlying Pascal language dates from 1970 and thus predates C.) Two newer languages are D, an open-source project intended to serve as an improved C++, launched in 2001, and Go, launched by Google in 2009, and also open source.
The current revolution in hardware is the introduction of multi-core processors. Although there have been mechanisms for dealing with so-called concurrency for a long time, they have always been difficult to use. Go and D build in ways of handling concurrency, and maybe one of these languages will take over, though the more established languages are also changing to handle concurrency better. The situation is certainly not stable!
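As one example of how an established language now packages concurrency for multi-core machines (this is not a claim about how Go or D do it), here is a sketch using Python's standard library; the prime-counting task is just an invented stand-in for real work.

```python
from concurrent.futures import ProcessPoolExecutor

# Spread a CPU-bound task across cores using Python's standard library.
# The task itself (counting primes) is an invented stand-in for real work.
def count_primes_below(n):
    count = 0
    for k in range(2, n):
        if all(k % d for d in range(2, int(k ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    limits = [20000, 30000, 40000, 50000]
    with ProcessPoolExecutor() as pool:   # one worker process per core by default
        results = list(pool.map(count_primes_below, limits))
    print(dict(zip(limits, results)))
```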
Tuesday, May 15, 2012
Human, Transhuman, Posthuman (Part 2)
Recently I attended the "Humanity+" conference in Melbourne (http://hplusconf.com.au/). This is the second of two posts about the conference.
In my previous post I discussed some presentations at the Humanity+ conference that were closely related to current science. This time I will discuss some presentations related to art.
Firstly, Stelarc's presentation, entitled "Meat, Metal and Code". For those who don't know of him, Stelarc (http://stelarc.org) is an Australian artist who has really pushed the boundaries of the body in art. He is also a very engaging presenter, with an extraordinary laugh. He took us through some of his works, from his third hand (an artificial hand he controlled with electrodes attached to his stomach muscles) and the stomach sculpture (which he swallowed: it then unfolded and transmitted video of the interior of his stomach) to more recent work, including work with exoskeletons, and the work Ping, where his body was controlled via the Internet by people clicking on an interface which resulted in Stelarc's muscles being made to move by electric shocks. Stelarc was wearing VR goggles that let him see (via webcam) the face of the person who was controlling him.
Stelarc also talked about, and showed us, his extra ear. The extra ear was originally intended to be on the side of his head, but he ended up growing a small-scale ear-shaped piece of flesh on the inside of one arm, on an implanted scaffolding of artificial cartilage. He also had a microphone and Bluetooth transmitter implanted into the "ear", but he developed a serious infection and the microphone and transmitter had to be removed. He plans to try again with them, and also to implant a small speaker in a gap in his teeth, thus completing a sort of human telephone circuit. There are also plans to inject some of his stem cells to create an earlobe, which is missing at present.
Stelarc also ran the Prosthetic Head for us, an avatar of Stelarc which one converses with, Eliza-fashion, by typing statements or questions. The Head is on a screen. It is based on a scan of Stelarc's head, has changing facial expressions, and talks via speech synthesis. An extension of this work is the Articulated Head, which he only mentioned briefly. This is mounted on a robot arm and has an "attention facility": it is capable of getting bored and turning away from someone interacting with it.
Recently Stelarc reprised his early suspension works, where he had large hooks pushed through his skin and was suspended by steel cables. He did one this March, after a break of more than 20 years, suspended over a four-metre-long statue of his arm with the extra ear. He did say that doing a suspension again wasn't one of his best ideas!
Stelarc had a disconcerting habit during the presentation of referring to himself as "the body" or "this body" or "the artist", though he did say "I" some of the time. In response to a question he said: "There is not an 'I' that owns this body. This body exists and interacts."
Natasha Vita-More (which I think is an adopted name; http://www.natasha.cc) describes herself as a designer, though she has evidently had a very varied career; at one point she did pre-astronaut training. She described her "Primo Posthuman" design for a re-engineered body, with choice of gender, improved skin with adjustable colour, and so on. She made this design in collaboration with various professionals, but as far as I can tell it is essentially a piece of conceptual art, without technical detail of the sort I described in my previous post. She also showed us other works, based in part on things like scans of the bone density of her body; to some extent they are meditations on the frailty of the human body—hers. She said she isn't interested in radical body modification for herself (as things are now), though she has had cosmetic surgery.
Her main work seems to be tireless promotion of ideas around trans-humanism, a movement to improve the human condition by scientific means, leading ultimately to humans being able to take charge of their own evolution.
A surprise was the talk by Stuart Candy (http://futuryst.com/). Candy is a professional futurist: his day job is as a member of the "Foresight" team of the big engineering and construction firm Arup. He also has an adjunct position at the California College of the Arts.
Stuart explained what he does as a futurist. He doesn't try to "predict the future": that is futile. He does develop a range of possible future scenarios; the challenge then is to try to move towards the scenarios that we prefer and away from the ones we don't want. He also considered that the scenarios are best presented as stories or experiences, rather than through tables and charts.
This is where the surprise came in, as these stories or experiences are essentially imaginative artworks. Candy studied at the University of Hawaii under the noted futurist Jim Dator. While Candy was there, the State Government of Hawaii launched a project to discuss Hawaii in 2050 and asked the University to assist. The result was four experiential scenarios. In one, which was more or less "business as usual", the audience attended an election debate between the candidates for the Governorship of Hawaii in 2050; the candidates were corporations, as legal personhood was assumed to have advanced to the point where corporations could run for office. In another scenario the audience was herded into a room by gun-carrying soldiers for an indoctrination lecture. It was supposed that a global economic collapse had essentially separated Hawaii from the rest of the world; parts of the U.S. Army based in Hawaii had taken over, cloaking their authoritarian rule with respectability by restoring the Hawaiian monarchy, which was overthrown in a U.S.-backed coup in 1893. In all there were four such future scenarios.
At the California College of the Arts, Candy has encouraged students to come up with projects like the "Genetic Census of 2020", where the students got people to spit into a test-tube, supposedly to ascertain their genetic profiles. Candy's point was that these experiential scenarios present ideas about the future far more engagingly than any number of graphs, charts and technical reports. He showed us some pages from a "Summary for Policymakers" produced by the Intergovernmental Panel on Climate Change. It was full of dry graphs and charts, whose implications are in fact frightening, but which would have had far more impact on the policy makers if they had been supplemented with the sort of experiential scenarios Candy was showing us.
A change of pace was provided by a reading by Lisa Jacobson (http://lisajacobson.org/) of extracts from her verse novel The Sunlit Zone, which is being published this month. The novel is set partly in the year 2050, and the extracts we heard blended the everyday with occasional startling elements that are supposed to be no more remarkable in 2050 than an iPad is today. Jacobson has won awards for her poetry, but the verse novel is apparently a new form for her. She read well, and the half hour she had went all too quickly.
Nobody at the conference represented the cultural studies/critical theory side of things, which is understandable considering some of the hostile attitudes towards science that have come from that camp, and the only person who mentioned critical theory was Natasha Vita-More. She discussed briefly a 2011 book Transhumanism and Its Critics, edited by Gregory Hansell and William Grassie, which contains contributions from both sides.
For me the weekend was a very interesting glimpse into the transhumanist world, which I was previously only vaguely aware of. And if Aubrey de Grey's work "only" leads to an effective therapy for macular degeneration, more power to his elbow!
In my previous post I discussed some presentations at the Humanity+ conference that were closely related to current science. This time I will discuss some presentations related to art.
Firstly, Stelarc's presentation, entitled "Meat, Metal and Code". For those who don't know of him, Stelarc (http://stelarc.org) is an Australian artist who has really pushed the boundaries of the body in art. He is also a very engaging presenter, with an extraordinary laugh. He took us through some of his works, from his third hand (an artificial hand he controlled with electrodes attached to his stomach muscles) and the stomach sculpture (which he swallowed: it then unfolded and transmitted video of the interior of his stomach) to more recent work, including work with exoskeletons, and the work Ping, where his body was controlled via the Internet by people clicking on an interface which resulted in Stelarc's muscles being made to move by electric shocks. Stelarc was wearing VR goggles that let him see (via webcam) the face of the person who was controlling him.
Stelarc also talked about, and showed us, his extra ear. The extra ear was originally intended to be on the side of his head, but he ended up growing a small-scale ear-shaped piece of flesh on the inside of one arm, on an implanted scaffolding of artificial cartilage. He also had a microphone and Bluetooth transmitter implanted into the "ear", but he developed a serious infection and the microphone and transmitter had to be removed. He plans to try again with them, and also to implant a small speaker in a gap in his teeth, thus completing a sort of human telephone circuit. There are also plans to inject some of his stem cells to create an earlobe, which is missing at present.
Stelarc also ran the Prosthetic Head for us, an avatar of Stelarc which one converses with, Eliza-fashion, by typing statements or questions. The Head is on a screen. It is based on a scan of Stelarc's head, has changing facial expressions, and talks via speech synthesis. An extension of this work is the Articulated Head, which he only mentioned briefly. This is mounted on a robot arm and has an "attention facility": it is capable of getting bored and turning away from someone interacting with it.
Recently Stelarc reprised his early suspension works, where he had large hooks pushed through his skin and was suspended by steel cables. He did one this March, after a break of more than 20 years, suspended over a four-metre-long statue of his arm with the extra ear. He did say that doing a suspension again wasn't one of his best ideas!
Stelarc had a disconcerting habit during the presentation of referring to himself as "the body" or "this body" or "the artist", though he did say "I" some of the time. In response to a question he said: "There is not an 'I' that owns this body. This body exists and interacts."
Natasha Vita-More (which I think is an adopted name; http://www.natasha.cc) describes herself as a designer, though she has evidently had a very varied career; at one point she did pre-astronaut training. She described her "Primo Posthuman" design for a re-engineered body, with choice of gender, improved skin with adjustable colour, and so on. She made this design in collaboration with various professionals, but as far as I can tell it is essentially a piece of conceptual art, without technical detail of the sort I described in my previous post. She also showed us other works, based in part on things like scans of the bone density of her body; to some extent they are meditations on the frailty of the human body—hers. She said she isn't interested in radical body modification for herself (as things are now), though she has had cosmetic surgery.
Her main work seems to be tireless promotion of ideas around trans-humanism, a movement to improve the human condition by scientific means, leading ultimately to humans being able to take charge of their own evolution.
A surprise was the talk by Stuart Candy (http://futuryst.com/). Candy is a professional futurist: his day job is as a member of the "Foresight" team of the big engineering and construction firm Arup. He also has an adjunct position at the California College of the Arts.
Stuart explained what he does as a futurist. He doesn't try to "predict the future": that is futile. He does develop a range of possible future scenarios; the challenge then is to try to move towards the scenarios that we prefer and away from the ones we don't want. He also considered that the scenarios are best presented as stories or experiences, rather than through tables and charts.
This is where the surprise came in, as these stories or experiences are essentially imaginative artworks. Candy studied at the University of Hawaii under the noted futurist Jim Dator. While Candy was there, the State Government of Hawaii launched a project to discuss Hawaii in 2050 and asked the University to assist. The result was four experiential scenarios. In one, which was more or less "business as usual", the audience attended an election debate between the candidates for the Governorship of Hawaii in 2050; the candidates were corporations, as legal personhood was assumed to have advanced to the point where corporations could run for office. In another scenario the audience was herded into a room by gun-carrying soldiers for an indoctrination lecture. It was supposed that a global economic collapse had essentially separated Hawaii from the rest of the world; parts of the U.S. Army based in Hawaii had taken over, cloaking their authoritarian rule with respectability by restoring the Hawaiian monarchy, which was overthrown in a U.S.-backed coup in 1893. In all there were four such future scenarios.
At the California College of the Arts, Candy has encouraged students to come up with projects like the "Genetic Census of 2020", where the students got people to spit into a test-tube, supposedly to ascertain their genetic profiles. Candy's point was that these experiential scenarios present ideas about the future far more engagingly than any number of graphs, charts and technical reports. He showed us some pages from a "Summary for Policymakers" produced by the Intergovernmental Panel on Climate Change. It was full of dry graphs and charts, whose implications are in fact frightening, but which would have had far more impact on the policy makers if they had been supplemented with the sort of experiential scenarios Candy was showing us.
A change of pace was provided by a reading by Lisa Jacobson (http://lisajacobson.org/) of extracts from her verse novel The Sunlit Zone, which is being published this month. The novel is set partly in the year 2050, and the extracts we heard blended the everyday with occasional startling elements that are supposed to be no more remarkable in 2050 than an iPad is today. Jacobson has won awards for her poetry, but the verse novel is apparently a new form for her. She read well, and the half hour she had went all too quickly.
Nobody at the conference represented the cultural studies/critical theory side of things, which is understandable considering some of the hostile attitudes towards science that have come from that camp, and the only person who mentioned critical theory was Natasha Vita-More. She discussed briefly a 2011 book Transhumanism and Its Critics, edited by Gregory Hansell and William Grassie, which contains contributions from both sides.
For me the weekend was a very interesting glimpse into the transhumanist world, which I was previously only vaguely aware of. And if Aubrey de Grey's work "only" leads to an effective therapy for macular degeneration, more power to his elbow!
Tuesday, May 8, 2012
Human, Transhuman, Posthuman (Part 1)
Last weekend I attended the "Humanity+" conference in Melbourne (http://hplusconf.com.au), held at RMIT. It consisted of an eclectic mix of presentations by invited speakers, without contributed papers or a published proceedings, though videos of the talks will become available. The conference was under the auspices of the Humanity+ organisation (http://humanityplus.org), whose aim is to promote thinking about the "next steps" of humanity. The main areas of focus appear to be biomedical and bioengineering developments for longer and healthier life, leading on to enhancements of the body, and artificial intelligence and enhancements of the mind. The chair of Humanity+, Natasha Vita-More, was one of the presenters. I went because I thought the gerontologist Aubrey de Grey would be worth hearing, and because the artist Stelarc was giving a presentation.
This conference was more optimistic than pessimistic. Climate change and population pressures were there in the background, and sustainability was a theme, but on the whole the intent was to look beyond these problems to longer-term possible futures for humanity. The organiser was Adam Ford, who has just become a board member of Humanity+, and who has had a considerable involvement in this general area.
Maybe 80 people attended, predominantly but by no means exclusively male, and a mixture of young and old, with relatively few people in the middle age range. I got the impression that almost everyone there had a background in science, engineering or computing.
Aubrey de Grey was well worth hearing. His view on ageing is that normal metabolic processes produce "damage" of various kinds, such as junk inside cells that the body cannot dissolve. We can tolerate a certain amount of such damage, but eventually it starts to harm us. De Grey listed all the classes of damage that are known (and indicated that no fundamentally new classes of damage had come to light in the last 30 years), and indicated plausible approaches to dealing with all of them. He mentioned two specific projects at his laboratory dealing with junk inside the cell, targeted at macular degeneration, which is a leading cause of blindness, and at atherosclerosis, which is inflammation of the walls of the arteries, leading to heart disease and strokes.
All of this comes under the heading of "regenerative medicine", therapies to rejuvenate (that is, to make young again) systems in the body by clearing out damage and taking the bodily systems back some way towards the healthy young adult state. Once such therapies are in place for all the major types of damage (which is quite a few years away), de Grey thinks that we will be able to have another 30 years of healthy middle age. These days 60 is the new 50; with these therapies 80 or 90 would be the new 50. But that is only the start. As techniques improve, clearing out a greater proportion of damage, repeated rejuvenation would allow enormous prolongation of healthy, active life, to ultimately maybe 1,000 years. This doesn't imply a cure for cancer, but it does imply a method of avoiding cancer by manipulating telomeres (the caps at the end of DNA strands).
All of this provoked a lot of discussion, and de Grey devoted his second presentation to discussing objections to his program. The diseases of ageing are not just a first world problem: de Grey said that already two-thirds of the deaths in the world are due to them. Of course if we do have the potential to live to 1,000 years there will have to be massive changes in society, but de Grey pointed out that by the time such long life becomes feasible there will have been massive changes in society anyway.
Incidentally de Grey is not a food faddist or anything of that sort. He was asked about diet, and said that as long as one is reasonably sensible about diet and exercise (and doesn't smoke), things like the "paleo diet" and the like don't achieve much. And he enjoyed a beer at the pub afterwards.
The other presentation that contained a road map for future developments was that of Tim Josling on artificial intelligence. He outlined the so-called hype cycle that tends to apply to new technologies. Once a new technology becomes known, at first there is a great deal of hype, resulting in wildly inflated expectations. When the technology doesn't live up to these, there is a "trough of disillusionment", and then after that attitudes to the technology finally settle into a realistic view of what it can achieve.
Artificial intelligence (AI) went through this cycle: after quite a long initial period of hype the "AI winter" descended in the 1980s, when funding dried up and AI was generally regarded as having failed. In fact it developed quietly in various specialised areas. Josling listed several techniques developed years or decades ago that were impractical at the time but are now coming into their own as increased computer power has made them feasible. Incidentally, Josling is more optimistic about the continuation of Moore's Law (that the number of transistors on a chip doubles every two years) than Herb Sutter (whom I mentioned in a previous post), but it doesn't matter for Josling's argument whether increased computing power arrives via Moore's Law in one box or via networks, as Sutter expects.
Josling expects that more and more low-level white-collar jobs will be cheaper to do by machines, on a relatively short time frame, and he ended by posing the question: "Leisured aristocracy or unemployed underclass?"
This sort of prophecy was made in my youth, and hasn't really come to pass. However, the "acceptable" minimum rate of unemployment has risen from 2% to 5% in my lifetime, and since the official figures are constructed to be as low as possible, the true unemployment figure is at least 10%. I also think that the availability of cheap Third World workers has delayed the development of automation, but that is beginning to come to an end. Eventually the machines will be cheaper than even a Third World worker.
In the background of Josling's presentation is a concept known as "The Singularity", and there was a panel discussion around this at the conference. The Singularity is when machines become smarter than we are; this may be a long way off, but it is hard to argue convincingly that it can never happen. The Singularity is a sort of "event horizon", as we cannot predict what would happen after that. As far as raw processing power is concerned, by one estimate a current desktop machine with a good graphics card has maybe 1/2000 of the raw power of a human brain. Networks of 2000 such machines already exist. One of the panellists, Colin Hales, noted, however, that recent discoveries suggest the brain may have far more power than this estimate implies.
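To see where a figure like 1/2000 could come from, here is a back-of-envelope sketch; the specific numbers (roughly 10^16 operations per second for a brain, roughly 5 x 10^12 for a good 2012-era graphics card) are my own assumptions, not figures quoted at the conference.

    // Back-of-envelope arithmetic behind a "1/2000 of a brain" style estimate.
    // Both figures below are assumptions for illustration, not conference data.
    #include <iostream>

    int main() {
        const double brain_ops_per_sec = 1e16; // common rough estimate for a human brain
        const double gpu_ops_per_sec   = 5e12; // desktop machine with a good graphics card, c. 2012
        const double machines_needed   = brain_ops_per_sec / gpu_ops_per_sec; // about 2000
        std::cout << "One such machine is roughly 1/" << machines_needed
                  << " of a brain; a network of about " << machines_needed
                  << " of them matches the raw rate.\n";
        return 0;
    }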
The work up until now has been in specialised domains, for example making driverless trucks for mining sites. There was mention of a possible approach to general artificial intelligence being pioneered by Marcus Hutter at the Australian National University. Josling indicated that the promising advances in artificial intelligence involve various forms of machine learning (and I got the impression that this applies to Hutter's work); this led into a discussion of risks. If a machine has learnt from experience rather than being explicitly programmed (and this already happens in some areas), then we don't know in detail how it does what it does. If it does something unexpected and kills or injures someone, it is not at all clear who should be held accountable. One of the attendees, who works as a safety engineer (I didn't catch his name), said that once a technology such as that for driverless trucks is mature, it is more reliable than having human drivers; it is the early period of introduction of such technologies that is really dangerous. In this context, the Google Car has driven itself autonomously around Los Angeles. One of the panellists, James Newton-Thomas, who works with autonomous mining equipment, indicated that the current approach is to segregate the equipment behind physical barriers, as well as fitting independent safety systems.
A discussion that was only touched on at the conference was how to make sure that a super-intelligent machine would be friendly towards us, and there was some discussion about the relationship among consciousness, intelligence and morality. There was also some discussion about the uses to which governments and large corporations would put super-intelligent machines. The prospect of large-scale technological unemployment and the thought-police-like powers already available via automated surveillance and data mining are much more immediate concerns.
(To be continued...)
Monday, April 30, 2012
A Personal History of Computer Hardware
Reading Herb Sutter's comments on changes in computer hardware ("The Free Lunch Is Over", from 2004, http://www.gotw.ca/publications/concurrency-ddj.htm, and "Welcome to the Jungle", from 2011, http://herbsutter.com/welcome-to-the-jungle) led me to think about the computers I have engaged with over the years.
I had fleeting encounters with computers as a university student; this was at a time when a whole university had just a handful of computers. My first real engagement with computers was in the late 1960s when I got a summer job at a computing laboratory run by CSIRO, the Commonwealth (of Australia) Scientific and Industrial Research Organisation. The machine was a Control Data 3200, which had (I think) 32,000 24-bit words of memory. That is 96 kilobytes (though memory wasn't measured in bytes then), less than one thousandth of the memory of any video card today, let alone the memory of a whole computer. It occupied the whole of a large room, being made of discrete transistors (not integrated circuits, i.e. "chips"). Input was by punched card, one card per line of program; you put the bundle of cards in a box, and waited some hours for the program to be run, since the computer required specialised human operators. Then you looked at the printed output, found the missing comma in your program and tried again. The machine had four magnetic tape units (one tape held about 5 megabytes), and there was a monstrous line printer. I think there was also a pen plotter, though I didn't use it. As a great privilege I got to go once or twice into the machine room and actually sit at the console and type commands.
Despite all the obvious differences, the basic architecture of both the hardware and the software was remarkably similar to that which prevailed across the whole of Sutter's "Free lunch" period, 1975-2005. There was a single processing unit, a quantity of memory (RAM), and slower but more capacious external storage, in this case provided by the magnetic tape drives. I did some programming in assembly language, and the underlying operations that the machine carried out (load, store, add, shift, jump, and so forth) are still there, though the way these operations are carried out inside the CPU has become much more complex and there are new types of operation (I don't think there were any stack manipulation instructions then, let alone vector instructions). The higher-level language was Fortran, and as far as I remember the cycle of compile (separately for each "compilation unit"), link, load, run was the same as that still used today with languages like C++.
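For readers who have never met it, the compile-link-run cycle looks much the same today; here is a minimal sketch using C++ and g++ (the file names and the little area function are my own illustration, nothing to do with the CSIRO machine).

    // main.cpp -- one compilation unit
    #include <iostream>
    double area(double radius);   // declared here, defined in the other unit

    int main() {
        std::cout << area(2.0) << "\n";
        return 0;
    }

    // circle.cpp -- a second compilation unit
    double area(double radius) { return 3.14159265 * radius * radius; }

    // Each unit is compiled separately, then the pieces are linked and run:
    //   g++ -c main.cpp        (produces main.o)
    //   g++ -c circle.cpp      (produces circle.o)
    //   g++ main.o circle.o -o program
    //   ./program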
I went to England for further study, and encountered my first "departmental" computer, meaning that it belonged to the Mathematics Department, not the University as a whole. It was a PDP-8 computer, the size of a bar fridge; it had (I think) the equivalent of 8 kilobytes of memory, and programs were input via paper tape. I took a course on Lisp using this machine; it was the first interactive language I encountered, where I could change things on the fly. Around this time I visited a friend at Cambridge University and encountered for the first time the arrangement of numerous terminals connected to a single computer. By this time integrated circuits were being used, though the single-chip microprocessor didn't arrive until a little later. Also hard drives were arriving, though they were the size of washing machines or bigger.
My working life was spent in University mathematics departments, so computers were always there, though often just in the background. The system of numerous terminals connected to a single computer, probably in another building, remained dominant for quite some time. For a while the terminals were teletypes; they physically typed onto paper. The Control key on computer keyboards dates from the teletype era: it was used to control the teletype by, for example, advancing the paper a line (control-J), or ringing the bell on the teletype (control-G). The resulting non-printing "control characters" are still used in computer text files. In the 1960s a character set only held 64 characters including the control characters; there was only room for UPPER CASE letters. When character sets with 128 characters (7 bits) came into use, lower case letters became available, and computer output became much more readable.
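The arithmetic behind the Control key is simple: in ASCII, control-G is the code for the letter G minus 64 (71 - 64 = 7, the bell), and control-J is 74 - 64 = 10 (line feed). A small C++ sketch, purely by way of illustration:

    // The control characters described above are still in every text file.
    // In ASCII, Ctrl+<uppercase letter> gives the letter's code minus 64,
    // so Ctrl-G is 7 (the teletype bell) and Ctrl-J is 10 (line feed).
    #include <iostream>

    int main() {
        const char bell     = 'G' - 64;  // 7, the same character as '\a'
        const char linefeed = 'J' - 64;  // 10, the same character as '\n'
        std::cout << "control-G has code " << int(bell)
                  << ", control-J has code " << int(linefeed) << "\n";
        std::cout << "Ding" << bell << linefeed;  // may actually beep in a terminal
        return 0;
    }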
The teletypes gave way to the ubiquitous green-screen monitors, 80 characters across and 24 or 25 lines deep. What look like descendants of these can still be seen at shop checkout counters.
At some point the mathematics typesetting program TeX arrived, and we all became amateur typesetters. Before that, mathematical typing was done by administrative staff, and it was a specialised skill, using IBM golfball typewriters. TeX allowed the production of better-looking results than any typewriter could achieve, but it wasn't easy to use, and really only people from mathematics and related disciplines took to it. It was and is open-source software and remains the standard method of producing mathematical documents.
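For anyone who hasn't seen it, here is a minimal example of the kind of source involved (LaTeX rather than plain TeX, and the particular formula is just an illustration of mine):

    % A minimal LaTeX example of the kind of source TeX users type;
    % the formula is my own illustration.
    \documentclass{article}
    \begin{document}
    The roots of $ax^2 + bx + c = 0$ are
    \[
      x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}.
    \]
    \end{document}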
The next big change was the spread of personal computers. The first one of these I got to use was an Apple II that belonged to a friend. I went round to his place, and he sat me down in front of the machine and then went out to do some errand. I knew that in principle I couldn't harm the computer just by pressing keys, but I was still a bit nervous (it was expensive). I touched a key, there was a loud bang, and the computer stopped working. The machine was full of plug-in cards, and it turned out that a sharp protrusion on one card had managed to eat its way into a capacitor on a neighbouring card, resulting in a destructive short circuit.
The first computer that I owned myself (1985) was a Commodore 64; the name indicated that it had 64 kilobytes of memory in its small plastic box, that is two thirds of the memory of the room-filling machine of the late 1960s. It also had an inbuilt sound synthesiser chip, and it was the only computer I have ever used that had a genuine random number generator. Usually there is a pseudo-random number generator, a small program that generates a determinate sequence of numbers once the starting point is set, but the Commodore 64 could read the analogue noise generator circuit in the sound chip, which gave genuine physically-based random numbers. The Commodore was much cheaper than the Apple, but it didn't have a floppy disk drive, only a very slow unit that stored data on audio cassettes. It has been said that the Commodore 64 was the last computer that one person could understand all of; it even came with a circuit diagram.
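A rough modern analogue of the distinction: in C++, a pseudo-random generator is a small deterministic program (the same seed gives the same sequence every run), while std::random_device may, depending on the implementation, draw on a physical entropy source, playing the role the Commodore's sound-chip noise circuit did. The sketch below is mine, for illustration only.

    // Pseudo-random versus (possibly) genuine randomness in modern C++.
    #include <iostream>
    #include <random>

    int main() {
        std::mt19937 prng(12345);      // pseudo-random: same seed, same sequence every run
        std::random_device hw;         // possibly genuine, physically-based randomness
        std::uniform_int_distribution<int> byte(0, 255);

        std::cout << "pseudo-random: ";
        for (int i = 0; i < 5; ++i) std::cout << byte(prng) << ' ';
        std::cout << "\nrandom_device: ";
        for (int i = 0; i < 5; ++i) std::cout << byte(hw) << ' ';
        std::cout << '\n';
        return 0;
    }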
These home computers had some of the attributes of a video game console and certainly helped the evolution of computers into multi-media machines.
In 1989 the Internet proper arrived in Australia with a satellite link from Australia to the mainland U.S. via Hawaii, and the establishment of what was called AARNET by a consortium of Australian universities and the CSIRO. Previously there had been more local Australian networks, with international email available, though not easy to use. A lot of the network developments happened in University computer science departments, with mathematics, physics and engineering departments not far behind. General use outside Universities didn't start in Australia until about 1993.
At home I bought an Atari, also in 1989; I was getting involved in electronic music, and the Atari was well adapted for that. Meanwhile, at work, workstations had arrived: desktop computers in their own right, with much better displays than the old terminals, and networked together. A little later I got a Sun desktop computer at work. It had 4 megabytes of memory (I think), but by default it only had an 80 megabyte hard drive. This was nowhere near enough, and I got an additional 600 megabyte disk drive, which cost over $2000. Twenty years later, a drive with 1,000 times the capacity costs around one twentieth of the price, not allowing for inflation. I don't think anyone foresaw this extraordinary increase in hard drive capacity.
The Sun workstation had an additional piece of hardware that could be used as a sound card, though it was actually a general scientific data collector. It contained a so-called DSP (Digital Signal Processor) chip, that for certain purposes was much faster than the main processor. DSP chips are still used in specialised applications, including sound cards.
After that the World Wide Web appeared, via the Mosaic browser. The IBM PC and clones gradually became dominant; at work they were connected to a central server, and were more likely to run Linux than Windows. I also used a PC at home; I changed to the Macintosh in 2006.
A computing-related development that came at work shortly before I retired was the establishment of an "access grid room", essentially a well-equipped and well-connected video conferencing room allowing the sharing of specialised mathematics courses between universities. Another development late in my working life, and one related to Sutter's comments, was the building of super-computer class machines by hooking together a network of 100 or more PCs. Smaller versions of these clusters were within the reach of individual University departments or research centres. I didn't have an excuse to seek access to them.
The electronic computer was born a little before I was, though stored-program machines did not arrive until after I was born (the earliest electronic computers were not stored-program). The transistor was also born shortly after I was, so the twin revolutions of computing as we know it and of micro-electronics have taken place within my lifetime.
Thursday, April 19, 2012
There Is No Free Lunch in the Jungle
I have not normally been posting on technical topics, and I am not a professional programmer. But I do spend a fair bit of time writing programs for artistic purposes. Professional programmers won't find anything of technical interest here.
Recently I came across two articles by Herb Sutter, entitled "The Free Lunch Is Over", from 2004 (http://www.gotw.ca/publications/concurrency-ddj.htm), and "Welcome to the Jungle", from 2011 (http://herbsutter.com/welcome-to-the-jungle). Together they chart fundamental changes in the way that computer hardware is organised, and the effect that this is having on computer programs and computer programmers. Sutter is a programming guru who works for Microsoft, and he is particularly interested in changes to programming techniques.
In "The Free Lunch Is Over", Sutter presciently pointed out that the era of ever faster and more powerful computer processors is ending. The free lunch was the continual increase in computer processor speeds, sustained over a very long period (Sutter says roughly 1975 to 2005, but 1975 is an approximate starting date for desktop computers; for bigger computers it surely extends further back). This meant that software developers didn't have to worry too much about inefficient software; it might be a bit slow today, but tomorrow's machines will run it fast enough. Sutter's article, which first appeared in 2004, pointed out that processor clock speed had started to level out. Since then, there has been almost no increase in clock speed, which has stagnated at something under 4 gigahertz; the obstacle is the amount of heat generated in the small space of the chip. Sutter's first era is the era of the free lunch of ever-increasing processor speeds
It is still possible to pack ever more transistors into a chip, so since 2005 there has been a proliferation of multi-core chips, where each "core" is equivalent to the whole processor of an earlier machine. Today typical desktop machines have four cores, and even phones and tablets are beginning to have two cores. Different programs can run at the same time on different cores, but to really make use of the cores a single program has to utilise several cores simultaneously. This requires a big change on the part of programmers, who need to acquire new tools and a new mindset. Various approaches to what is variously called parallel programming, concurrency or multi-threading have been around for a long time, but now they suddenly become central. Sutter's second era is "multi-core", that of machines with a relatively small number of powerful cores. The first article takes us to this point.
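To give a flavour of what this change means for the programmer, here is a minimal C++ sketch in which a single program spreads a simple summation over four threads, notionally one per core; all the details are my own illustration, not Sutter's.

    // A minimal sketch of using several cores from one program: four threads
    // each sum a quarter of an array, then the partial sums are combined.
    #include <iostream>
    #include <numeric>
    #include <thread>
    #include <vector>

    int main() {
        const std::vector<double> data(1000000, 1.0);
        const int n_threads = 4;                        // e.g. one thread per core
        std::vector<double> partial(n_threads, 0.0);
        std::vector<std::thread> workers;

        const std::size_t chunk = data.size() / n_threads;
        for (int t = 0; t < n_threads; ++t) {
            workers.emplace_back([&, t] {
                auto begin = data.begin() + t * chunk;
                auto end   = (t == n_threads - 1) ? data.end() : begin + chunk;
                partial[t] = std::accumulate(begin, end, 0.0);
            });
        }
        for (auto& w : workers) w.join();               // wait for all threads to finish

        std::cout << "total = "
                  << std::accumulate(partial.begin(), partial.end(), 0.0) << "\n";
        return 0;
    }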
In the second article, Sutter considers that the "multi-core" era is already ending even before we have learnt to cope with it. The third era is that of "hetero-core", the era of heterogeneous cores, which according to Sutter started in 2011. As far as the actual hardware is concerned, the third era arrived when powerful graphics cards started to be fitted to home computers for computer games. These graphics cards contain a large number (for example 100) of very small specialised cores, originally only capable of processing pixels for display. These small cores have gradually become more general-purpose, and there has been considerable interest in scientific computing circles in harnessing their power for general-purpose computation, not just graphics. This interest is now going mainstream, but it brings with it yet more challenges for programmers, as now, added to the already difficult challenge of adapting a program to make use of multiple cores, different parts of the one program may be running on cores with very different capabilities.
Sutter has the "hetero-core" era ending some time in the 2020s because he thinks that is when Moore's Law (that the number of transistors on a chip doubles every two years) will finally end. At that point our desktop and laptop and pocket computing devices will have as much power as they are going to get. Sutter thinks by then another trend will have already taken over, the availability of "hardware as a service": enormous clusters of computers available to be used over the Internet by anyone, for a fee. This provides still another challenge for programmers: a program will run partly on the by then 1,000 or more heterogeneous cores in the user's local machine (desktop, laptop, tablet or phone), and partly on a much bigger collection of cores available at the other end of a wi-fi link. Sutter considers that building larger and larger networks of computers will be, for the foreseeable future, much easier than cramming more and more transistors into a single chip or box, so growth in computing power will take place less in individual machines and more in the availability of networks of computers. As Sutter points out, already Amazon and others offer large clusters of computers for hire; he gives the example of a cluster with 30,000 virtual cores that was (virtually) put together for a pharmaceutical company who hired it for one day, at a cost of under $1500 per hour. The calculations would have taken years on a desktop computer.
Interesting times!
Thursday, February 2, 2012
Red Brick Group Show, Ballarat
The Red Brick Gallery and Emporium in Ballarat is holding a group show, with lots of people involved. I have put in two prints from my Shaping Evolution series. The Gallery is run by two energetic artists, Steph Wallace and Marcia King, and has shown a lot of work by local artists/craftspeople.
Where: Red Brick Gallery, 218A Skipton St, Ballarat VIC 3350. (Near the corner with South St.)
Tel: 0402 416 097.
Opening: Friday February 3rd, 6.00-8.00pm.
Exhibition dates: February 3rd – February 16th, 2012.
Gallery hours: Tues – Sun, 10am – 5pm.
Info: http://redbrickgallery.com.au/.
Thursday, January 26, 2012
"Art Sparks" Creative Gathering, Ballarat
This is an initiative by Amy Tsilemanis to bring together people interested in the arts in Ballarat. The next gathering is on Tuesday 31st of January at Linda Franklin's South Street Art Studio. There will be a vegetarian feast, and then five artists, including me, will be performing or showing work. They range over music, storytelling and visual art; I will play a couple of my abstract videos. A good chance to meet and talk with people. The five featured artists are Al Wunder, Yasmin Cole, Gordon Monro, Anne Langdon and Janette Wotherspoon.
Where: South Street Art Studio, 410 South Street, Ballarat VIC 3350. (This is the old church on the corner with Errard St.)
Tel: 0438 826 500.
When: Tuesday 31st of January 2012, 6.30 - 9.30pm.
Cost: Suggested: gold coin donation.
Info: On Facebook - search for "Art Sparks in Ballarat" and "South Street Art Studio".