Number 84     April 2002





Centres of Research Excellence

The Numbers Game
Aitken Prize Winners
CENTREFOLD Geoff Whittle

MATHEMATICAL MINIATURE 17 The three-eighths rule and a figure eight three body orbit



The Allan Wilson Centre for Molecular Ecology and Evolution held its two-day planning meeting at Massey University recently. This research group, which has members from both the mathematics and molecular biology communities, is one of the five Centres of Research Excellence recently selected by the Royal Society of New Zealand for support under the government's new initiative. The research activities of the Centre are focused around four research themes: "Rates and modes of evolution", "Biodiversity", "Human settlement of Aotearoa/New Zealand", and "New ecological and biological models". The Centre is hosted by Massey University, with the twelve Principal and Associate Investigators based at Auckland, Massey, Victoria, Canterbury and Otago Universities.

The Centre's name commemorates the New Zealander Allan Wilson (1936-1991), who revolutionized the study of human evolution and is the only New Zealander to have received the MacArthur "Genius" Award. He spent his whole career at Berkeley, where he introduced the study of evolution by molecular clocks, which led to the "Out of Africa" hypothesis from the study of mitochondrial DNA and to the founding of the modern field of mathematical phylogenetics. (A biography of Allan Wilson and further information about the Centre are available on the Centre's website.)

Two members of NZMS play leading roles in the Allan Wilson Centre. Professor Mike Hendy from Massey University is the Executive Director of the Centre, and Associate Professor Mike Steel from the University of Canterbury is Director of the "New ecological and biological models" research Programme. The planning meeting worked towards the formal establishment of the Allan Wilson Centre from 1 July 2002.

The first publication identified as having originated from the Allan Wilson Centre appeared as the cover item of Science on 22 March, reporting a project, led by Professor David Lambert (Massey), which sequenced mitochondrial DNA extracted from extant and ancient (frozen in permafrost) penguin samples. The ancient samples, which lie in time horizons beneath current nesting sites in the Ross Sea region of Antarctica, have been carbon dated, revealing a range of dates up to 8000 years before present. The variations in the DNA extracted provide the first direct measurement of the rate of nucleotide substitution, which, for this species, appears to be significantly faster than previously estimated. The mathematics behind this analysis was developed by Barbara Holland (Massey), and Allen Rodrigo and Alexei Drummond (Auckland).

Left to right:

back row: Dave Lambert (MU), David Penny (MU), Mike Hendy (MU), Susan Wright (MU), Craig Millar (AU), Charles Daugherty (VUW), Stan Moore (MU);

front row: Mike Steel (CU), Pete Lockhart (MU), Howard Ross (AU), Lisa Matisoo-Smith (AU), Hamish Spencer (OU).

Absent are Allen Rodrigo (AU) and Ryk Ward (Oxford, but about to move to AU).

Mike Hendy



On 5 March 2002 the government announced the five successful bidders for the first round of Centres of Research Excellence. One of them was the NZ Institute of Mathematics and its Applications (NZIMA), directed by Marston Conder and Vaughan Jones. The final structure and plans of the NZIMA have not yet been determined---indeed, all the CoREs are currently preparing their final budgets. I spoke to Marston Conder about how he sees the Institute developing. (NB: these aren't his actual words; I reconstructed his replies from my notes.)

Robert McLachlan

RM: Has the level of funding been determined yet?
MC: No. We requested $2.1m per year, and all the CoREs have been asked to prepare budgets for around 2/3 of their initial request, or $1.4m per year in our case. It also turns out that the capital component, which we had hoped was in addition to the annual funding, might be just an advance on that funding. That is, the depreciation on any capital items has to be funded from the annual grant.
RM: What capital items were part of the NZIMA proposal?
MC: Largely a contribution to the University of Auckland High Performance Computing Facility, which already exists [a 16-node, 16 Gflop Silicon Graphics cluster, housed at the Department of Engineering Science]. As part of the budget process we will be considering our contribution to the HPCF.
RM: Will the NZIMA have a permanent home?
MC: Just a few offices at Auckland, for the Directors (myself and Vaughan Jones), an administrator, and one or two others for visitors at this stage. Most of the activities will be hosted at various places around the country, either in mathematical science departments or at workshop venues. 50% of my time will be at the Institute, and we are currently negotiating to have Vaughan Jones, who already spends one month a year in New Zealand, increase his commitment.
RM: The main activity of the Institute is the series of 6-month programmes. How will the programmes be selected?
MC: That's not completely determined yet. We'll be canvassing overseas institutes like the Fields and MSRI to see how they do it. Certainly we will advertise in all the newsletters of professional bodies (ANZIAM, NZMS, NZSA, ORSNZ etc.), and hope to visit all NZ universities with mathematical science departments in order to describe the NZIMA and solicit proposals. Of course at times NZIMA board members may have conflicts of interest, so we will be seeking advice from our International Advisory Panel. A proposal will involve a theme and its justification, a programme director, a list of likely invited speakers, and a venue. Programme directors may be based anywhere in NZ, and we will encourage workshops to be held at a range of locations.
RM: A lot of the money will be spent on postdoc and student stipends, which of course run for more than 6 months. How are you going to balance that?
MC: We hope to fund at least one 2-year postdoc, one 3-year PhD scholarship, and one 18-month Masters scholarship per programme. That way the benefits of the programme will extend into the future far beyond 6 months. That is, at any time we might be supporting 4 postdocs, 6 PhD students, and about 3 Masters students.
RM: There might be a supply problem getting good people for those positions. Will you be looking internationally and maybe funding international student fees?
MC: We would certainly look internationally for postdocs, but while we'd be interested in attracting top international PhD students, it's unlikely we would pay their fees. What is more likely is that arrangements could be made with a host institution so that the host and the NZIMA together could come up with a package. We also hope to attract good people by offering competitive salaries or scholarships. Depending on the budget, I hope there will also be money for open postgraduate scholarships (not tied to the 6-month programmes).
RM: You also have planned the prestigious Maclaurin Fellowships, analogous to the Royal Society's James Cook Fellowships. How will they be organized?
MC: These will be completely open to competition and not necessarily related to the 6-month programmes. Unfortunately, they might suffer depending on the budget. We hope we can have at least one full annual Fellowship for a New Zealand resident, tenable anywhere, and one smaller Fellowship to bring a foreign researcher to New Zealand, for which we could pay their expenses and a stipend for a short period.
RM: The description of the NZIMA on the RSNZ web site goes on at length about all the exciting applications of maths which are going to transform the New Zealand economy. "In an increasingly complex world, the use of mathematical techniques to enhance good decision-making will provide New Zealanders with a competitive advantage." Seriously, how do you plan to balance pure maths, applied maths, and applications?
MC: Well, we didn't have complete control over what went on the web site! The strength of the proposal was its broad spectrum of significant interests and activity in the mathematical sciences. We envisage that of the two programmes each year, one would be in fundamental research and one would have a tangible applications focus, for example bioinformatics or bioengineering.
RM: The NZMRI, which has been running successful summer workshops for several years, is a partner with the NZIMA. How do you see the relationship between the two?
MC: The NZMRI has a commitment to the Marsden Fund to run 3 annual workshops, of which one has just been held in Napier and one is planned for New Plymouth next January. That function of the NZMRI will continue in partnership with the NZIMA. The precise relationship is one of the things we're going to have to work out. For example, if the NZMRI didn't apply to the Marsden Fund in 2004 that could potentially free up some funds for other research in the mathematical and information sciences.
RM: Will the NZIMA be able to run extra workshops or conferences of its own?
MC: Unfortunately extra activities like that are likely to suffer from the budget cut, as the 6-month programmes are really central. We might be able to do some, or perhaps now that the Institute is a going concern we can go out and raise additional funding, e.g. from industry or charitable sources, to allow us to support a full range of activities (like other such institutes do).
RM: When will the NZIMA open for business?
MC: Formally, probably in June or July after the funding has been determined. To run a full 6-month programme starting in July we'll have to work pretty fast.
RM: How long will the NZIMA exist?
MC: We're not sure yet if the initial funding is for 6 years or 4 years, or whether the CoRE fund will continue at the end of that period. What is certain is that we have to make the most of this fabulous opportunity we've been given.
RM: Thank you for your time.


Having had occasion to play around with the Magnus series for the solution of linear ODEs, I was chuffed just now to discover that I am in fact the grandson of Magnus! My whakapapa starts McLachlan, student of Keller, student of Magnus, student of Dehn, student of Hilbert, then Lindemann, Klein, Lipschitz, Dirichlet, then (this is where things get vague) Poisson and Fourier. O, what a falling off was there!

You can get all this information easily from the Mathematics Genealogy Project, including the fact that Hilbert has 5651 listed descendants. I got this tip while browsing the archives of John Baez's This Week's Finds in Mathematical Physics (issue 166), which house an incredible wealth of material on all manner of topics. He has a nice summary of all the achievements of his predecessors and how they fit together. Indeed, if you want to learn about some obscure algebraic structure like triality or 2-categories, you could do worse than start here and follow the references. The latest issue, 175, for example, goes from the historical problem of determining longitude, to why the days of the week are named after the planets in the order Saturn (Saturday), Sun (Sunday), Moon (Monday), ..., to explaining the different types of factors of von Neumann algebras.

Robert McLachlan

[Catalogue to an exhibition at the Adam Art Gallery, Victoria University]

Numbers Game:
Creative connections between art and mathematics

My father was l'engegnè (the engineer), with his pockets always bulging with books and known to all the pork butchers because he checked with his logarithmic ruler the multiplication for the prosciutto purchase.
Primo Levi, The Periodic Table


A discipline of specialist notations and calculations based on numbers, maths is inherent in any quantification of shape, size and space. The ramifications of mathematics are intrusive and ubiquitous. Telecommunications, time keeping, clocking speed, mapping the landscape, budgeting---every day our lives are shaped by numerical codes and networks. At the macro level, our understanding of the universe is based on systems, science reveals patterns in nature and we acquiesce unconsciously to notions of universal laws.

As a number of related sciences seemingly supporting order and finitude, mathematics is less frequently acknowledged as a determinant of our cultural lives. Early attempts to base mathematics on logic frequently assumed indisputable and absolute reasoning. Only later when different `logics' were discovered by both anthropologists and mathematical logicians, did it become clear that logic was a cultural artefact, not a necessary component of every culture.

Theories used to explain our societies and cultures are apparently based on empirical evidence but social statistics are determined entirely by context and assumptions. In New Zealand the definition of Maori in the national Census has constantly altered as acknowledgement of racial mix has broadened. A number of works in this exhibition refer to interconnections between the operation and authority of symbolic languages and ways we understand such tools. Peter Robinson is conscious of the personal and cultural values assigned to measures. Institutional appraisals of identity, and personal evaluations of self as they may be judged against some standard of `otherness', for Maori as well as many other indigenous and racial groups, is reflected in Robinson's percentage paintings.

Orbital Elements of the Comets Known to Halley by Peter James Smith
Reproduced by permission of the artist

As both a statistician and an artist, Peter James Smith reflects on ways in which numerical data operate as information. Across history civilisations have developed sophisticated number systems in order to count and calculate. Symbols based on a range of number bases were formative in establishing Aztec, Babylonian, Chinese, Greek, Roman and Arab civilisations (Arabic numerals remain in use today). In Smith's Random Numbers, chaotic and subjective qualities are apparent amidst the perfect and rational properties ascribed to numbers. His painting Orbital Elements of the Comets Known to Halley refers to an episode in humankind's search for explanations of natural phenomena---Halley's enlightenment quest to replace the supernatural features of astronomy with empirical understanding. Smith acknowledges the human urge to both explain and control the world around us, while accepting that we react to nature as romantic and sublime.

The development of the computer in the second half of last century opened up new potentialities for creativity as much as for business or learning. Composer Michael Norris has pursued his work with electroacoustics to a new level with Silence Lives in Blue Rooms. Each time the audience enters the soundscape the artificial intelligence of the programme will have developed within this audio world further. A reading of `In den Nachmittag geflüstert' by Georg Trakl undergoes the algorithmic operations of Norris's programming and is influenced by chance happenings to result in seemingly free and natural sounds.


Today when computers operate on the binary logic of zeroes and ones and world markets trade enormous notional sums, zero is intrinsic not only to numerical representations but also to the fabric of life. Zero is historically interesting for its relatively late invention in around the sixth century AD, while numerically it is also an oddity---for example it is excluded from counting (beginning at one), and it has unique properties in arithmetic operations (multiplication, division etc).

Julia Morison's work Amperzand II highlights the paradoxical nature of the zero. Invented to symbolise empty space in place-value notation, philosophically zero is associated with the void and absence. If, as Morison has said, these works are almost like thought or speech bubbles, they contain nothing but our own mental construct. Conceptually, nought provides a symbol to document magnitudes beyond our comprehension, to count the inestimable.

Zero and zero and zero and zero ad infinitum remain zero, as Amperzand signifies. Infinity and zero come together most commonly in our endeavour to understand space---the galaxy, black holes, the Big Bang. While scientists would tell us there is no such thing as empty space, Paul Hartigan's SKY 02 reinstates an imaginary wide blue yonder---immeasurable, indeterminable, unending.


Serious pursuits inevitably have a flip side, entertainment can also be intellectual or earnest. As long as maths is compulsory in our education system there will have to be sugar pills to make learning fun. Books such as The Man Who Counted by Brazilian mathematician Malba Tahan give peculiar insights into mathematical history, delivering messages of the ethical and economic rewards awaiting mathematical astuteness (thankfully leavened with humour). In the field of play, conventional games from sports to board games rely on players' calculations and are in essence competitions for the highest score.

Ruth Watson has often used games in her work, transforming playing boards, chess sets and jigsaw puzzles, or re-contextualising scrabble pieces (from the exhibitions A E I O U and Among the Scrabble). Beyond the obvious associations of game playing, Scrabble denotes a hierarchical position for the English language. Those who through good fortune are given the tools to acquire a voice in the dominant language are placed in the superior position of determining how and with whom they communicate and what is said. Michael Parekowhai also reflects on the acquisition and application of language, by employing a popular mathematical device. Ataarangi resembles a construction of enlarged Cuisenaire rods. Through these objects and the title, the work refers to the changing use of Cuisenaire rods from a maths teaching aid to one for teaching spoken Maori. Confronting written and oral languages, English (the sculpture reads `HE' on its side) and Maori traditions, Parekowhai suggests the reinvigoration of systems and ideas as cultures and identities assert their own existence.

Ataarangi by Michael Parekowhai
Photograph courtesy Adam Art Gallery


Until the nineteenth century and physiological advances in understanding the operations of the eye, a primary objective of artists was the reproduction of the real world. Theories of perception and optics dating from the time of Aristotle understood vision according to a Cartesian model of perspective organised around the idea of a disembodied, monocular eye. Visual languages of artmaking and design relied on drawing systems, from the simplest single point perspective to more developed orthographic projections based on projective geometry. Such systems gave spatial relationships a correspondence with their counterparts in the external world. The notion that visual rays from the eyes met at the centre of single point perspective allowed Alberti to define the picture plane as a window and Dürer to depict the use of drawing machines. With these devices the artist looked through a frame or sight vane, observing his subject matter in relation to the gridded squares within the frame.

Over the last two centuries geometry and description have given way to subjectivity and suggestion in art making. In his Rakaumangamanga paintings, Robert Ellis transforms the topography of a mountain site in the Bay of Islands into a conjunction of plan, elevation and sections. His seemingly cartographic images conceal the lived meanings and importance of this sacred area for Nga Puhi under European signs of charting and ownership. Ellis indicates that mapping systems, like numbers, do not have universal reference but are exclusive in their metaphorical transferral of understanding, knowledge and power.


The quest for understanding and appropriating symmetry and proportion has been evident since the Ancient Greeks assigned metaphysical properties to naturally occurring phenomena. Billy Apple often works with ratios and proportions, especially permutations of the Golden Section. For this exhibition, Apple has utilised a central feature of the Adam Art Gallery building and worked with the relative qualities of that space. Painted in safety orange on the wall above its physical location, Mind the Gap replicates the dimensions of the void on the upper level of the Chartwell Galleries, bringing 22.32 percent of the unusable floor area of these Galleries into the exhibition area.

Preliminary Study for University of Otago Library Mural 1966 by Colin McCahon
Mind the Gap by Billy Apple
Photograph courtesy Adam Art Gallery

Cities, machines and other systems can abstractly be compared to forms and systems in nature. For example, the spiral created by the golden section is observed in many plants and shells, while golden section proportions have been observed in buildings since the Parthenon and Great Pyramid of Giza. Neil Dawson's House Alteration works transform the iconic form of the residence into simplified models. Dawson uses simple mathematical processes (enlargement, rotation) to manipulate the schematic structure of the house, continually challenging our understanding of its form and function. Dawson makes tangible the allegorical possibilities of rigorous operations such as perspective and illusion.

Liz Coats employs geometry as a starting point in paintings made with liquid and transparent colours, arriving at images not entirely under her control. The images arise partly from Coats's interest in symmetry/asymmetry in the natural environment and experienced world. Central to the work are also the processes of growth and movement that occur during both the creation and viewing of the works.

In creating an infinitely repeatable three-dimensional mosaic, like a sort of perfect crazing, Chiara Corbelletto is also reflecting on patterns appearing in the natural and biological worlds. Often based on tessellation and rotations of a unitary form, Corbelletto's modular sculptural hangings highlight the potential for variety within unity, for shifts in scale and for the interaction of forms with the surrounding environment. Recent work by Simeon Nelson is based on manipulation of syntactical or numerical rules, rules that may be further warped or controlled by the artist. The ultimate results are mapped in three dimensions. Forms of works such as the Calculus series evoke macro and microstructures and the networks of living and man-made systems.


It is a truism to say that much abstract art that looks geometric and systematically proportioned has in reality been the result of intuitive or subjective considerations. Conversely, the calculated or sequential systems organising some art making can leave little visible trace. Systematic processes are applied as structuring devices in John Hurrell's methodology. Geometric paintings from the late 1970s came about through an extensive process of positioning predetermined forms according to dice throws, followed by further intuitive shifts. In two-dimensional works from the 1980s and 1990s, printed maps comprised a ready-made symbolic and formal base material as the prime compositional ingredient. By choosing a singular aspect of the map or its indexical operation, Hurrell features selected attributes in each work, and the blankness of black paint obliterates any other detail.

A set of wall paintings by Simon Morris spotlights the artist's interest in the operation of minimalist abstraction as it is generated within a limited set of options. The compositional placement of colour bars depends on the flip of a coin enacted in the gallery space. Performing the same operations across wall paintings over time indicates possible permutations or may replicate positions. Apparent formal order arises out of the disorder of performance and chance.

Aberrant forms by Richard Reddaway also inhabit the gallery building. Mirrored sections appear in atomistic units that couple and congregate in larger fields or crystal growths. Reddaway's concepts are more speculative than pseudo-scientific, delving into theories of complex behaviour such as chaos theory or growth forms and non-linear systems.

Engineer Horst Kiechle utilises complex concepts (such as intuitive irregularity, natural imperfection and marginalised geometries) in analogous ways to counter simplified and reductive understandings of society. In site specific and virtual architectonic projects, Kiechle inserts complex and irregular growths in spaces for residents to come to terms with. Basic mathematical logic is corrupted and bent by irregularity and intuition. Kiechle is conscious not to smooth out the details he believes are often lost in the abstractions of precise disciplines or in our search for concrete truths.


Academically a division has existed between the `hard' sciences and `soft' arts, both in terms of a physical versus a literary understanding of the world and in the relative values placed upon these disciplines. The distinction was most notably recognised in the late 1950s by C.P. Snow, who addressed the perceived separation of the `two cultures' as distinguished by scientific conventions of observation, experiment and empirical measurement as opposed to the rhetoric, creativity and imagination of the arts[1]. Of course these generalisations are simplistic. Both fields utilise languages and systems to visualise experience of the world and offer questions and parallels rather than answers. Both operate through acts of the imagination, are the result of inductive leaps or are directed by incorrect conjectures and prodded by criticism.

Dick Frizzell relies on the natural intrigue of signs and symbols to send a serious message. His double-sided comment Faith points to the implicit trust we place in all things scientific. Frizzell simultaneously questions any meaning that might be ascribed to mathematical symbols and languages. Communication in a culture based on symbols relies on each of us to exercise a naming function on new symbols or to develop a vocabulary of signs. To communicate information signs must operate in a set of logical possibilities. Works produced by Terrence Handscomb in the late 1980s questioned the values and meanings assigned to symbolic codes and called attention to the weight of information carried by, and validity of, such conventions. Combinations of text and icon, formulae and grammar, in works such as close the canal, frustrate the viewer with brief passages of recognition and comprehension. Ultimately inaccessible as a whole unless Handscomb's personal code can be broken, this work makes evident the problematic nature of representation. Colin McCahon, like Ralph Hotere, included numbers and segments of calendars in numerous works, often as a reference to the stations of the cross. McCahon produced sets of numerals from the late 1950s onwards, in which the conjunction of I, One accentuated the artist's religious symbolism. Works employing groups of numbers emphasised the significance of progression, (Teaching Aids, Walk, Jump I-IV) but with varied rhythms and associations. The idea of broader academic connections---blackboards, calculating, measuring or computing---as well as metaphysical notions, appeared in the numerically saturated composition of Preliminary Study for University of Otago Library Mural 1966. Early 1940s landscape compositions by McCahon, such as Otago Peninsula, utilised the proportions of the golden mean and logarithmic spirals[2]. 
The Song of the Shining Cuckoo links the panoramic progression and stations of the cross to Maori beliefs regarding the departing spirit. Based on a poem written by Ralph Hotere's father, Tangirau Hotere, it depicts the flight of a cuckoo above the shifting sands of Muriwai beach, a symbol of the recently departed spirit making its way to Te Rerenga Wairua, from where it will begin the journey to ancestral Hawaiki.


Art and mathematics cross paths where the processes of cognition and intuition meet. In this space, order meets disorder and universal laws confront conjecture. Jacky Redgate worked from eighteenth-century mathematical formulae derived from the discipline of solid geometry to create the six computer-rendered solids of equal volume but differing shape in Equal Solids. Redgate unsettles experience and perception through the application of calculations dating back to Isaac Newton. Ideas reappear as computer-lathed forms that defy comprehension.

Equal Solids by Jacky Redgate
Photograph courtesy Adam Art Gallery

While we imagine we can understand the world sufficiently from our personal perspective, we quickly flounder when thrown into aspects of fields beyond our own sphere. A reminder of why maths is compulsory learning and also of the encounters possible outside the square!

[1]    C.P. Snow, The Two Cultures and the Scientific Revolution, Cambridge University Press, New York, 1959.
[2]    Gordon H. Brown, Colin McCahon: Artist, AH & AW Reed Ltd., Wellington, 1994, p. 143.


Robert Kaplan, The Nothing That Is: A Natural History of Zero, Allen Lane The Penguin Press, London, 1999.

H.S.M. Coxeter, M. Emmer, R. Penrose and M.L. Teuber (eds.), M.C. Escher: Art and Science, North-Holland, Amsterdam, 1987.

Apostolos Doxiadis, Uncle Petros and Goldbach's Conjecture, Faber and Faber, London, 2000.

Michele Emmer (ed.), The Visual Mind, Art and Mathematics, MIT Press, a Leonardo Book, Cambridge, Mass. USA, 1993.

Douglas R. Hofstadter, Gödel, Escher, Bach: an Eternal Golden Braid, Basic Books Inc., New York, 1979.

Victor J. Katz, A History of Mathematics: An Introduction, Addison-Wesley, Reading, Mass. USA, 1998.

Jerry Ravetz, Ziauddin Sardar and Borin Van Loon, Introducing Mathematics, Allen and Unwin, Sydney 1999.

Malba Tahan, The Man Who Counted, Leslie Clark and Alistair Reid (trans.), Canongate, Edinburgh, 1993.

Zara Stanhope
Adam Art Gallery

[The Aitken prize is awarded annually by the NZMS for the best student presentation at the Colloquium. The winners for 2000 and 2001 here present short versions of their talks.]


The construction method of resolutions and Dowker spaces

Brian van Dam, The University of Auckland

Historical Background. During the 1960s, Pasynkov and Ponomarev examined structural aspects of product spaces, and attempted to formalise a generalisation of product spaces that would allow different behaviour locally. This concept evolved into the formal definition of a resolution by Fedorcuk in 1968. Later, Ul'janov set out a general framework for resolutions and gave conditions for certain topological properties to be inherited by the resolution from the spaces it was built up from. Fedorcuk's special resolutions and Ul'janov's general resolutions were used to construct spaces that exhibited differing dimensions. Following their work, the theory of resolutions stagnated until Watson wrote a survey article on the topic [2], as he developed the notion of topological languages.

Resolutions. Resolutions are a construction method. Take any topological space X, and magnify each point x of that base space; under magnification it is realised that there was an entire space Y_x where before we thought there was only the one point x. These newly discovered spaces Y_x are the fibres of a resolution, and are the spaces whose union forms the resolved space or resolution Z_p. The resolution topology t_p derives its structure from a family of maps f_x : X \ {x} → Y_x, each from the original space X less a point x to the corresponding fibre Y_x.

It is this family of maps {f_x : x ∈ X} that distinguishes the structure of the resolution from that of a product space. Resolutions can vary from being a disjoint union of spaces (when the base space X is discrete) to being a product (when the base space is a point), but the real variety lies in between these extremes.
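As a sketch of how the pieces fit together (following the formulation in Watson's survey [2]; the ⊗ notation here is simply a convenient label), the resolution Z_p = ⋃_{x∈X} {x} × Y_x carries the topology generated by basic open sets of the form:

```latex
% For a point x of X, an open neighbourhood U of x in X,
% and an open set V of the fibre Y_x, a basic open set is:
\[
  U \otimes V \;=\; \bigl(\{x\} \times V\bigr) \;\cup\;
  \bigcup \Bigl\{\, \{x'\} \times Y_{x'} \;:\;
      x' \in \bigl(U \setminus \{x\}\bigr) \cap f_x^{-1}(V) \,\Bigr\}.
\]
% Sanity checks against the extremes noted above: if X is discrete we may
% take U = {x}, so every {x} x V is open and Z_p is the disjoint union of
% the fibres; if X is a single point then Z_p = {x} x Y_x, a product.
```

That is, a neighbourhood of the point (x,y) consists of a piece of the fibre over x together with the whole fibres over nearby points x' whose images f_x(x') land in V.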

Watson has shown in [2] that many well-known topological spaces can be constructed as resolutions, or more typically, as a given subspace of a relatively simple resolution. These include the Alexandroff Duplicate, Bubble Plane and Double Arrow space, built from [0,1] and constant, sector or order maps.

Dowker Spaces. Dowker spaces are an interesting and important class of spaces, first introduced by Dowker in 1951, that arose from questions related to homotopy extension.

Definition 1 A Dowker space is a normal space X for which X × [0,1] is not a normal space.

After Dowker, topologists examined the underlying questions about the behaviour of the topological property of normality in product spaces: "under what conditions would normality be preserved by a product with two factors? with one factor [0,1]?" It turned out that, for a normal space X, testing for the absence of a covering property, countable metacompactness, in X was equivalent to testing for the presence of normality in the product X × [0,1].

Theorem 1 A space X is Dowker if and only if X is normal and not countably metacompact.

However, it took some time before Rudin constructed the first Dowker space in [1], and few other examples have been found since. In particular, Rudin's example of 1971 is still one of the few examples constructed in ZFC; most other examples require additional set-theoretic assumptions.

Given the difficulty of constructing real Dowker spaces, it was hoped that the "product space" origins of resolutions would provide a means for constructing new ones. However, when a normal resolution Zp has a closed projection map p : Zp → X, given by p(x,y) = x for all y ∈ Yx, then the spaces X and Yx that Zp was built from are also normal, for each x ∈ X. If the fibres and base space are additionally countably metacompact, this property is passed on to the resolution, precluding Zp from being a Dowker space.

Theorem 2 Let Zp be a resolution with closed projection map p. If Zp is a Dowker space, then X and Yx, for each x ∈ X, are also Dowker spaces.

So, the "window of opportunity" for creating new Dowker spaces via resolutions is limited to those resolutions that have a non-normal base space or a non-closed projection map p. There is still the possibility of using resolutions to gain or enhance other desirable properties of a given Dowker space, though. The further option of exploring subspaces of resolutions for Dowker spaces exists, but it is no easier to create a Dowker space as a subspace of a resolution than it is to create a Dowker space as a subspace of any other type of space.


[1] Rudin, Mary Ellen; Dowker spaces; in: Handbook of Set-Theoretic Topology (Editors: K. Kunen and J. E. Vaughan); Elsevier Scientific Publishing; 1984; pp. 761--780.
[2] Watson, Stephen; The construction of topological spaces: planks and resolutions; in: Recent Progress in General Topology (Editors: M. Hušek and J. van Mill); Elsevier Science Publishers B.V.; Chapter 20; 1992; pp. 673--757.


Median networks: A visual representation of ancient Adelie penguin DNA

Barbara Holland, Massey University

Adelie penguin breeding grounds in the Antarctic are situated above layers of preserved subfossil bones. It is possible to dig at these sites and uncover serially preserved samples of Adelie penguin bones. The extremely cold and dry Antarctic conditions act much like a refrigerator and help to preserve the ancient DNA within the bones.

As part of a Marsden-funded project, Massey ecologists David Lambert and Peter Ritchie made several expeditions to the Antarctic, particularly the Ross Sea area, where they recovered serially preserved ancient bones as well as many blood samples from living Adelie penguins. DNA sequences were recovered from the first hypervariable region (HVRI) of the mitochondrial DNA, including ancient mtDNA from bones up to 6000 years old. The HVRI section of mtDNA evolves very quickly compared to the rest of the mtDNA, and significant variation was observed both within the modern samples (taken from live Adelies) and within the ancient samples. The total number of samples was approximately 320 modern and 80 ancient, although not all of these samples represented different haplotypes (i.e. some had identical DNA for the region of sequence studied).

This data set provides a unique opportunity to study evolution in action; no other circumstances are known in which such a large quantity of dated DNA can be retrieved in such good condition. It was of particular interest to use the data set to estimate a rate of evolution for HVRI, as previous rate estimates were calibrated against the fossil record. The aim of my analysis was, firstly, to provide a useful visual representation of the data set, and secondly, to develop a method to measure the rate of mtDNA evolution in the HVRI region.

Mitochondrial DNA is maternally inherited and thus, in theory, it should be possible to reconstruct a family tree for the samples in the same way that phylogenetic trees are reconstructed between species. In practice this turned out to be impossible as the phylogenetic signal in the DNA was obscured by a combination of factors including:

  • Noise due to multiple mutations at single sites,
  • A strong bias towards transitions over transversions, which makes the data virtually two-state (i.e. most sites have either nucleotides A and G or nucleotides C and T; other combinations are rare),
  • A small number of changes between samples overall.

A single well-supported family tree cannot be found for these data. Also, methods for building phylogenetic trees place all taxa at the external nodes, whereas here we needed a representation that could incorporate the ancient samples as well, i.e. one in which internal nodes could be labelled.

Median networks (Bandelt et al., Mol. Biol. Evol., 1999) were chosen to represent the data as they display the conflict between incompatible patterns in the sequence alignment as cycles in a graph, rather than resolving the conflict, perhaps arbitrarily, as in a tree. Each node in the median network graph represents a sequence; some of these were observed in the sample, while others are hypothetical intermediates. An edge connecting two nodes represents a single change in the sequence at a particular site. All edges that represent a change at the same site are parallel in the network. The median network is a subgraph of the n-dimensional hypercube, where n is the number of incompatible sites (i.e. sites that cannot be depicted on the same tree with a minimal number of mutations).
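To make the notion of incompatible sites concrete, here is a toy sketch (my own illustration, not the authors' code) of the standard four-gamete test on a made-up binary alignment: two two-state sites are incompatible, and so contribute a cycle to the median network, exactly when all four combinations of their states occur among the sequences.

```python
# Toy sketch of the four-gamete test for binary (two-state) sites.
# The alignment below is invented purely for illustration.

def incompatible(site_a, site_b):
    """True if two binary site columns fail the four-gamete test,
    i.e. all four state pairs occur, so no single tree explains both
    sites with one mutation each."""
    return len(set(zip(site_a, site_b))) == 4

# Rows = sequences, columns = sites, states coded 0/1.
alignment = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]
columns = list(zip(*alignment))

n_sites = len(columns)
conflicts = [(i, j)
             for i in range(n_sites) for j in range(i + 1, n_sites)
             if incompatible(columns[i], columns[j])]
print(conflicts)   # each pair contributes a cycle to the median network
```

For this invented alignment every pair of sites conflicts, so the median network would contain cycles rather than being a tree.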

The Adelie median network shows two major haplotype groups in the data, which can be further divided into three and four main haplotypes respectively. These seven main groups have clearly been present for at least the last 6000 years, as the dated ancient samples also display these main haplotypes. Cycles within the network show that there are many possible trees requiring the same number of mutations to explain the data. Hence a rate estimate based on a single tree purported to represent the family tree would be suspect.

The following method was proposed to measure the rate based on the median network representation. It aims to identify those haplotypes that are present in the modern sample but are not found in the ancient sample at some fixed time in the past. The method is based on the strong assumption of complete haplotype sampling in both the modern population and for a specific time period in the ancient population (both these assumptions were shown to be unrealistic).

  1. Within the median network, find the minimum spanning tree (MST) connecting the modern samples and the MST connecting the ancient samples such that the two trees have maximum overlap, i.e. they share the most edges and hence the fewest mutations are proposed twice.
  2. Count the number of extra substitutions required to explain the modern samples, i.e. the number of edges that are in the MST of the modern samples but not in the MST of the ancient samples. These substitutions are presumed to have occurred in the time since the dates of the ancient samples.
  3. Divide by the product of the average age of the ancient samples and the number of sites, giving a rate per site per year.
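In code, the counting involved amounts to a set difference followed by a division. The sketch below uses invented edge labels, ages, and alignment length purely for illustration; edges are labelled by the site at which they change, so shared labels correspond to mutations proposed only once.

```python
# Illustrative sketch of the three-step rate calculation.
# All edge labels and numbers below are made up, not the real Adelie data.

modern_mst_edges  = {"s12", "s47", "s63", "s88", "s101"}   # hypothetical MST edges
ancient_mst_edges = {"s12", "s47", "s63"}                  # hypothetical MST edges

# Step 2: edges needed for the modern samples but not the ancient ones.
extra_substitutions = len(modern_mst_edges - ancient_mst_edges)

avg_age_years = 4000        # made-up average age of the ancient samples
n_sites = 350               # made-up alignment length

# Step 3: substitutions per site per year; multiply by 1e6 for "per Myr".
rate = extra_substitutions / (avg_age_years * n_sites)
print(rate * 1e6, "substitutions per site per Myr")   # about 1.43 here
```

With the real data the same arithmetic produced the roughly 1-change-per-site-per-million-years figure quoted below.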

Using this method gives a rate of approximately 1 change per site per million years; this is about 5 times faster than the currently accepted rate based on fossil evidence.

In continuing work (after my talk at NZMS) it was found that the method outlined above gives very biased results, owing to the violation of its assumptions of complete haplotype sampling. A Markov chain Monte Carlo (MCMC) method (Drummond et al., 2001) was instead applied to the data, effectively integrating over all possible trees and so avoiding the problem of ambiguity in the phylogenetic relationships. For the results of this MCMC approach, and the median networks themselves, see Lambert et al. (Science, 2002).


Cover semi-complete topological groups

Sivajah Somasundaram, The University of Waikato

Abstract: In this article we show that the result in [4], which states that every strongly Baire semitopological group is a topological group, is close to a characterisation, in the sense that every Baire topological group with at least one q-point is strongly Baire.

A semitopological (paratopological) [topological] group is a group endowed with a topology for which multiplication is separately continuous (multiplication is jointly continuous but inversion may not be continuous) [multiplication is jointly continuous and inversion is continuous].

It is natural to ask when a semitopological group is a topological group. Research on this question began in 1936 when D. Montgomery [6] showed that every completely metrizable semitopological group is a paratopological group. Later in 1957, R. Ellis [2, 3] showed that every locally compact semitopological group is a topological group. Then in 1960, W. Zelazko [8] used Montgomery's result to show that every completely metrizable semitopological group is a topological group. Much later in 1996, A. Bouziad [1] improved both of these results by showing that every Čech-complete semitopological group is a topological group. Recall that both locally compact and completely metrizable spaces are Čech-complete.

Most recently, Kenderov, Kortezov and Moors showed in [4] that every almost Čech-complete semitopological group is a topological group. Their contribution to this problem is based upon the following game. Let (X,τ) be a topological space. On X we shall consider the G-game played between two players α and β. A play of the G-game is a decreasing sequence of non-empty open sets An ⊆ Bn ⊆ ... ⊆ B2 ⊆ A1 ⊆ B1, chosen alternately: the An's by α and the Bn's by β. We declare that α wins a play {(An, Bn) : n ∈ ℕ} of the G-game if ⋂{An : n ∈ ℕ} is non-empty and each sequence {an : n ∈ ℕ} with an ∈ An has a cluster-point in X. Otherwise player β is said to have won this play. A strategy t for player β is a rule that tells him/her how to play (possibly depending on all the previous moves of player α). Since the moves of player β may depend on the moves of player α, we denote the nth move of player β by t(A1, A2, ..., An-1). We say that t is a winning strategy if, using it, player β wins every play, independently of the moves of player α. We call a topological space (X,τ) a strongly Baire space, or strongly β-unfavourable, if player β does not have a winning strategy in the G-game played on X. Examples of strongly Baire spaces include Baire metric spaces and locally countably Čech-complete spaces.

A topological space X is called a Baire space if for each sequence {On : n ∈ ℕ} of dense open subsets of X, ⋂{On : n ∈ ℕ} is dense in X. A point x ∈ X is called a q-point if there exists a sequence {Vn : n ∈ ℕ} of neighbourhoods of x such that every sequence {xn : n ∈ ℕ} with xn ∈ Vn has a cluster-point in X. A topological space X is called a q-space if every point in it is a q-point. It follows from Theorem 1 in [7] that each strongly Baire space is a Baire space, and it is easy to see that every strongly Baire space has at least one q-point. Hence it is natural to ask whether every Baire q-space is strongly Baire. This is not true in general (e.g., the Sorgenfrey line is a Baire q-space that is not strongly Baire) but it may hold for topological groups.

Before we proceed we need one more definition. We say that a topological space (X,τ) is cover semi-complete if there exists a pseudo-metric d on X such that: (i) each d-convergent sequence in X has a τ-cluster point in X; and (ii) X is "fragmented" by d, i.e., for every ε > 0 and every non-empty subset A of X there exists a non-empty relatively open subset B of A with d-diam(B) < ε.

Lemma 1 [4, Theorem 3] If X is a Baire, cover semi-complete topological space then X is a strongly Baire space.

Theorem 1 If a topological group (G, ·, τ) is a Baire space and has a q-point, then it is strongly Baire.

Proof. By Lemma 1 it is sufficient to show that G is cover semi-complete. Since G has a q-point and is a group, we may assume without loss of generality that e (the identity element of G) is a q-point. Thus there exists a sequence {Vn : n ∈ ℕ} of neighbourhoods of e such that every sequence {vn : n ∈ ℕ} with vn ∈ Vn has a τ-cluster point in G. Then we have from the continuity of multiplication and inversion that for every neighbourhood Vn of e there exists a symmetric neighbourhood Un of e (i.e., Un⁻¹ = Un) such that e ∈ Un+1 ⊆ Un+1·Un+1·Un+1 ⊆ Un ⊆ Vn for all n. Hence we can define Wn := {(x,y) ∈ G × G : x⁻¹y ∈ Un}. It follows from this that:

  1. each Wn contains the diagonal;
  2. each Wn is symmetric;
  3. Wn+1 ∘ Wn+1 ∘ Wn+1 ⊆ Wn for all n.

Hence from the Metrization Lemma in [5, p. 185] there exists a pseudo-metric d on G such that the d-topology is coarser than the τ-topology and each d-convergent sequence in G has a τ-cluster point in G. Thus (G, ·, τ) is indeed cover semi-complete and hence strongly Baire.


[1] A. Bouziad, Every Čech-analytic Baire semitopological group is a topological group, Proc. Amer. Math. Soc. 124 (1996) 953--959.
[2] R. Ellis, Locally compact transformation groups, Duke Math. J. 24 (1957) 119--125.
[3] R. Ellis, A note on the continuity of the inverse, Proc. Amer. Math. Soc. 8 (1957) 372--373.
[4] P. S. Kenderov, I. S. Kortezov and W. B. Moors, Topological games and topological groups, Topology Appl. 109 (2001) 157--165.
[5] J. L. Kelley, General Topology, Graduate Texts in Mathematics, 27, Springer-Verlag, New York-Berlin, 1975.
[6] D. Montgomery, Continuity in topological groups, Bull. Amer. Math. Soc. 42 (1936) 879--882.
[7] J. Saint Raymond, Jeux topologiques et espaces de Namioka, Proc. Amer. Math. Soc. 87 (1983) 499--504.
[8] W. Zelazko, A theorem on B0 division algebras, Bull. Acad. Pol. Sci. 8 (1960) 373--375.


On Mathematical Modelling of Granulation: Static Liquid Bridges

Patrick Rynhart, Massey University

Granulation is an important industrial process for the size enlargement of powders. It has a number of uses: to reduce dustiness, to improve product handling, to avoid segregation of powder components, and as a precursor to other processes. The mechanism of granulation requires the primary (powder) particles to undergo continuous recirculation. After a binder fluid is applied as a spray, the particles become coated with a thin layer of liquid. Through collisions, wetted particles adhere together, held by liquid bridges, and during subsequent collisions additional particles join the newly formed agglomerates; this process leads to granule formation. For my thesis I am studying all of these processes, but in this talk I investigate the problem of static liquid bridges.

The Young-Laplace equation for a static surface dividing two fluids with pressure difference Δp and surface tension γ is

    Δp = 2γH,     (1)

where H is the mean curvature of the surface. For a cylindrically symmetric bridge of profile r(x) between two particles (as in figure 1), equation (1) can be written

    Δp/γ = 1/(r√(1 + r'²)) − r''/(1 + r'²)^(3/2),     (2)

or, in the non-dimensional variables X = x/s, R = r/s and ΔP = Δp s/γ, where s is the length scale,

    ΔP = 1/(R√(1 + R'²)) − R''/(1 + R'²)^(3/2).     (3)

Here the notation R' = dR/dX and R'' = d²R/dX² is used. Initial values for the bridge height R0 and tangent R0' are specified. The angle at which the bridge makes contact with the tangent plane to the spheres is the contact angle θ, and is specified for a given problem. The starting value for the bridge height (occurring at X0) is

    R0 = RA sin α,

where RA is the non-dimensional particle radius and α the angle locating the contact point on the sphere, and that of the slope at the point of contact is

    R0' = cot(α + θ).

Although equation (3) is well known, it is usually solved numerically. In this talk I solve it analytically and describe all of the physical solutions. We begin by making the substitution

    U = R/√(1 + R'²).     (4)

Differentiating U with respect to X gives

    dU/dX = R'/√(1 + R'²) − R R' R''/(1 + R'²)^(3/2).     (5)

Rearranging the right-hand sides of (4) and (5), substituting into (3) and applying the chain rule (dU/dX = (dU/dR)(dR/dX)), (3) can be written as the following first order differential equation,

    dU/dR = ΔP R.     (6)

Integrating (6) gives

    U = ΔP R²/2 + E,     (7)

where the constant of integration E is the energy of the liquid bridge surface. Equation (3) defines a Hamiltonian dynamical system and hence the energy E is conserved. By combining (4) and (7),

    E = R/√(1 + R'²) − ΔP R²/2.     (8)

Substituting (4) into (7) and rearranging gives R' as

    R' = ±√( R²/(E + ΔP R²/2)² − 1 ).

Rearranging the above, the shape of the bridge (where R0 ≤ R ≤ R1) is given by the integral

    X − X0 = ∫ from R0 to R of (E + ΔP ρ²/2) / √( ρ² − (E + ΔP ρ²/2)² ) dρ.     (9)

If E = 0 then (9) can be solved to give

    (X − c)² + R² = (2/ΔP)²,     (10)

for some constant c, showing that the liquid bridge then has a spherical shape of radius 2/ΔP.

If E ≠ 0, (9) can be completed using integral tables; a parametric solution (11) for the shape R in terms of the position X is produced, involving the incomplete elliptic integrals E and F of the second and first kinds. The solution is expressed through the extreme bridge radii ξ and η, the roots of the quadratic ΔP ρ²/2 − ρ + E = 0, such that ξ ≤ R ≤ η. Equation (11) defines the shape of a bridge parameterised by the position X, where the energy level E is determined from (8). Upon consideration of the discriminant of this quadratic, it can be shown that EΔP < 0.5.
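The claim that E is conserved along solutions of (3) is easy to check numerically. The sketch below (my own check, with arbitrary values of ΔP and the initial data) integrates (3) with a classical Runge-Kutta scheme and confirms that the quantity R/√(1 + R'²) − ΔP R²/2 stays constant.

```python
import math

# Numerical check (a sketch, not from the talk) that the first integral
# E = R/sqrt(1 + R'^2) - DP*R^2/2 is conserved along solutions of (3).
# DP and the initial data are arbitrary illustrative values.

def rhs(R, Rp, DP):
    # R'' isolated from equation (3):
    # DP = 1/(R*sqrt(1+R'^2)) - R''/(1+R'^2)^(3/2)
    return (1 + Rp**2) / R - DP * (1 + Rp**2)**1.5

def energy(R, Rp, DP):
    return R / math.sqrt(1 + Rp**2) - DP * R**2 / 2

DP = 0.8
R, Rp = 1.0, 0.3           # arbitrary initial height and slope
E0 = energy(R, Rp, DP)

h = 1e-4
for _ in range(20000):     # classical RK4 on the system (R, R')
    k1 = (Rp, rhs(R, Rp, DP))
    k2 = (Rp + h/2*k1[1], rhs(R + h/2*k1[0], Rp + h/2*k1[1], DP))
    k3 = (Rp + h/2*k2[1], rhs(R + h/2*k2[0], Rp + h/2*k2[1], DP))
    k4 = (Rp + h*k3[1],   rhs(R + h*k3[0],   Rp + h*k3[1],   DP))
    R  += h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    Rp += h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])

print(abs(energy(R, Rp, DP) - E0))   # ~0: the energy is conserved
```

These initial data give EΔP ≈ 0.45 < 0.5, so the trajectory is one of the periodic "wavy cylinder" bridges discussed below.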

Phase portrait

The energy level E is related to the height and slope of the bridge surface (R, R') by equation (8). Boundary conditions on R and R', along with the pressure difference ΔP, determine the contour for a particular liquid bridge. Generic contours, characterising all liquid bridge configurations, can be obtained from (8) by scaling. Upon introducing R̄ = R|ΔP| and X̄ = X|ΔP|, it follows that

    E|ΔP| = R̄/√(1 + R̄'²) − sign(ΔP) R̄²/2.     (12)

An angle φ measured with respect to the horizontal coordinate X is introduced, where R' = tan φ and therefore √(1 + R'²) = sec φ. In terms of φ, equation (12) becomes

    EΔP = R̄ cos φ − R̄²/2     for ΔP > 0,     (13a)

    E|ΔP| = R̄ cos φ + R̄²/2     for ΔP < 0,     (13b)

    E = R cos φ     for ΔP = 0.     (13c)

The phase portraits for equations (13a) to (13c) are shown in figures 2, 3 and 4. These figures show that 5 distinct types of liquid bridge exist. With reference to figure 2, for EΔP > 0, periodic solutions exist for |φ| < 90°. For this case, the shape of the liquid surface is that of a `wavy' cylinder. For the contour EΔP = 0.5, φ ≡ 0°, and this corresponds to the cylinder solution. For EΔP < 0, the liquid surface begins with initial height R0 and curves upwards, reaching a maximum height Rmax > R0. The critical contour at φ = 90° (EΔP = 0) is the sphere described in (10), which separates the cylinder and upwardly curved solutions.

When the pressure inside the bridge is equal to the external (ambient) pressure (as in figure 4), two types of liquid bridge occur: for |φ| < 90°, the bridges start with initial height R0 and then curve inward, achieving a minimum height Rmin < R0. For φ = 90°, the solution corresponds to two vertical planes separated by fluid. When the pressure inside the bridge is lower than ambient (ΔP < 0), the bridges curve inward, as shown in figure 3.


Geoff Whittle

In 2001, Geoffrey Peter Whittle was promoted to a personal chair in Mathematics at Victoria University of Wellington, less than ten years after his initial appointment there as a Lecturer. During Geoff's decade at Vic, his career has flourished. He was awarded the New Zealand Mathematical Society's Research Award in 1996, and he has become internationally recognized as a world leader in discrete mathematics, particularly matroid theory. Geoff is also a fine teacher, being both a stimulating and energetic lecturer and an enthusiastic and successful postgraduate supervisor; one of his Ph.D. students won the RSNZ's Hatherton Award in 1998. One of Geoff's former M.Sc. students is now pursuing her doctorate in Oxford, and a second will follow her to Oxford later this year. In this article, we review some aspects of Geoff's life and describe the significance of his research and why it has attracted such international attention.

Geoff Whittle was born in 1950 in Launceston, a city in northern Tasmania whose famous sons include David Boon and Ricky Ponting. Geoff completed high school in Launceston in 1968 and then spent a year as a mine worker in Tasmania, Western Australia, and the Northern Territory. The following year, he travelled throughout Asia and Europe. Among the most memorable incidents on these travels was having an infected tooth removed in Baghdad without the benefit of anaesthesia! In 1971, he began a degree in Philosophy and Mathematics at the University of Tasmania, graduating with a B.A. in 1973. Two Mathematics courses that he took in this period were particularly influential. The first was a course in point-set topology, which was taught by Howard Cook using the Moore method. This method, pioneered by the famous University of Texas topologist R.L. Moore, requires the students to prove all of the theorems. It was Geoff's introduction to the life of a mathematician. Howard Cook, who was one of Moore's more than fifty Ph.D. students, was Professor of Pure Mathematics at the University of Tasmania for two years beginning in 1972 before returning to his former post at the University of Houston. The other undergraduate course that profoundly influenced Geoff was a course on projective geometry taught by Don Row, who was to become, nearly ten years later, Geoff's Ph.D. supervisor. A constant theme in Geoff's research has been the successful harnessing of his considerable geometric insight.

After two years as a high school teacher of Mathematics and Science, Geoff returned to the University of Tasmania in 1976 to complete his Honours degree in Philosophy and Mathematics. For many years, Don Row taught an honours course in matroid theory that followed on from his third-year projective geometry course. This course influenced several students including Geoff, James Oxley, and Dirk Vertigan (both now at Louisiana State University) to pursue the study of matroids. In Geoff's case, this pursuit was interrupted by three years as a Lecturer in Teacher Education in Tasmania and then two years as a Mathematics Lecturer at the University of the South Pacific in Fiji. Geoff returned to his studies at the University of Tasmania in 1982 and, while working as a Tutor in Mathematics, completed a Ph.D. in 1984. He stayed on as a Tutor after receiving his doctorate and then, from 1989 until 1991, was a Research Fellow in Mathematics.

Geoff had wanted to stay in Tasmania but, when it became clear that a permanent job there was not on the horizon, he applied for a Lectureship in Wellington and he moved to New Zealand in 1992. He was promoted to Senior Lecturer in 1994, to Reader in 1997, and to a personal chair last year. In 1998, Geoff spent a term as a Visiting Research Fellow at Merton College, Oxford with Dominic Welsh. He has made ten short-term research visits to Louisiana State to collaborate with Oxley and Vertigan. In the late 1990s, Geoff began collaborating with another Australian matroid theorist, Jim Geelen of the University of Waterloo. This collaboration has taken Geoff to Waterloo four times and brought Jim to Wellington for six months from September, 2001.

What is matroid theory and what are Geoff's contributions to the subject? The rest of this article will deal briefly with these questions. Matroids were introduced by Hassler Whitney in his 1935 paper "On the abstract properties of linear dependence". Independence is a core concept in mathematics and, in defining matroids, Whitney attempted to capture the fundamental aspects of independence that are common to graph theory and linear algebra. Subsequent work has shown that Whitney's definition is quite robust and also encompasses several other natural notions of independence, including algebraic independence over a field. Matroids arise naturally in timetabling problems and indeed play a central role in combinatorial optimization, since they are exactly the structures for which a locally greedy strategy can be guaranteed to produce a global maximum. Whitney called a set of columns of a matrix independent if it is linearly independent, while a set of edges of a graph is independent if it is a forest, that is, it does not contain the edges of any closed path. A matroid consists of a finite set and a collection of its subsets called independent sets that behave both like the sets of linearly independent columns of a matrix and like the forests in a graph. Specifically, the collection of independent sets is non-empty, it is closed under taking subsets, and if one independent set is larger than another, then the larger set contains an element not in the smaller one that can be added to the smaller one to produce another independent set. A matroid that arises from a matrix over a field F is called F-representable, while a matroid that arises from a graph is graphic. Every graphic matroid is F-representable for all fields F, but there are matroids, the smallest with eight elements, that are not F-representable for any F.
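The greedy principle mentioned above can be seen in a small sketch (my example, not from the article). For a graphic matroid, independence means "is a forest", which a union-find structure can test; greedily taking the heaviest edge that preserves independence is then guaranteed to produce a maximum-weight independent set, here a maximum-weight spanning tree. The graph and weights below are invented.

```python
# Greedy selection over a graphic matroid: independence = "edge set is a
# forest", tested with union-find. The matroid exchange property is what
# guarantees that this locally greedy choice is globally optimal.

def max_weight_forest(n_vertices, weighted_edges):
    parent = list(range(n_vertices))

    def find(v):
        # Union-find root lookup with path halving.
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    chosen, total = [], 0
    # Greedy: consider edges in order of decreasing weight.
    for w, u, v in sorted(weighted_edges, reverse=True):
        ru, rv = find(u), find(v)
        if ru != rv:              # adding the edge keeps the set a forest
            parent[ru] = rv
            chosen.append((u, v))
            total += w
    return total, chosen

# Invented weighted graph: (weight, endpoint, endpoint).
edges = [(4, 0, 1), (3, 1, 2), (2, 0, 2), (5, 2, 3), (1, 1, 3)]
total, chosen = max_weight_forest(4, edges)
print(total)   # weight of a maximum-weight spanning forest: 12
```

Replacing the forest test by any other matroid independence oracle leaves the algorithm, and its optimality guarantee, unchanged; that is precisely the sense in which matroids characterise when greed works.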

In Geoff's Ph.D. thesis and his early papers, he worked on Crapo and Rota's "critical problem". The critical exponent of a matroid representable over GF(q) is a parameter that generalizes the chromatic and flow numbers of a graph, and the redundancy of a linear code. The matroids that are, in a certain strong sense, minimal having a fixed critical exponent are called tangential blocks. Geoff solved a number of problems of Dominic Welsh by providing a series of general constructions for tangential blocks, which revealed that such structures were far more numerous than had previously been believed. In the early 1990s, Geoff decided to begin attacking some of the many notoriously difficult unsolved problems in matroid representability. This was a very wise decision, as the success of Geoff's work in this area led to his NZMS Research Award and has resulted in a series of important advances in an area that, only a decade ago, had looked totally intractable. The Research Award was given for Geoff's work on ternary matroids, those representable over the 3-element field. Every binary matroid M is representable over all fields of characteristic two. If M is representable over some additional field, then it follows from fundamental work of Tutte from the 1950s that M is representable over GF(3). Geoff proved the beautiful result that if a ternary matroid is representable over some field of characteristic other than three, then it must be representable over one of GF(2), GF(4), GF(5), GF(7), or GF(8).

Geoff's current research programme has two closely related strands. The first is to extend Neil Robertson and Paul Seymour's fundamental theory of graph minors to matroids representable over finite fields. Robertson and Seymour proved that, in every infinite sequence of graphs, there is one that is a minor of another, where H is a minor of G if H can be obtained from G by deleting edges or vertices or contracting edges. It is well-known that this theorem does not extend to the class of all matroids. Whether the theorem extends to the class of matroids representable over a fixed finite field GF(q) is the major unsolved problem of this strand. The second strand of Geoff's current programme is the pursuit of Rota's 1971 conjecture that, for every prime power q, the set of minor-minimal matroids not representable over GF(q) is finite. In collaboration with Bert Gerards, Geoff and Jim Geelen showed that the Robertson-Seymour theorem extends to the GF(q)-representable matroids of bounded branch-width, loosely speaking, those that are not too tightly connected. In addition, Geoff and Jim have proved that there are only finitely many excluded minors for GF(q)-representability whose branch-width is less than some fixed bound. These are important and exciting developments and Geoff has firmly established himself as a world leader in matroid theory. Those of us who work in this area hope that the very fruitful last decade that Geoff enjoyed will be exceeded by an even more productive next decade.

James Oxley (Louisiana State University)
Charles Semple (University of Canterbury)

Garry J. Tee



Dr Markus Neuhäuser

I joined the Department of Mathematics and Statistics at the University of Otago in January as a Senior Lecturer in Statistics. My arrival in the Department was followed by the arrival of my daughter Victoria in March.

I was born and raised in Germany and have my diploma and doctorate from the Department of Statistics at the University of Dortmund. From 1996 to 2001 I worked as a biostatistician in the pharmaceutical industry. I hold the certificate "Biometry in Medicine", which confirms that I am an appropriately qualified and experienced statistician for the planning and analysis of clinical trials according to the German regulatory authority BfArM, as well as the European Agency for the Evaluation of Medicinal Products EMEA. I would like to use this experience to foster the teaching of biostatistics at Otago, and I look forward to collaborating with colleagues in the Department of Preventive and Social Medicine.

My current research interests are nonparametric methods, multiple comparisons as well as location-scale tests and their application in life sciences, especially in drug development, ecology, and ornithology. As well as my work in industry, I have also collaborated with ornithologists in Germany.

Dr Thomasin (Tammy) Smith

Hello fellow mathematicians! In July I will be joining the Mathematics Discipline at Massey University, Palmerston North as a Lecturer. In 1994 my husband Brian and I moved from the United States to New Zealand for a two-year working holiday, and eight years on we now call New Zealand home. Much of my positive "kiwi experience" thus far has been associated with Massey University, so I am pleased to be given the opportunity to continue to work in the environment that has treated me so well. I was born and raised in the States (Indiana and Texas), with a three-year Mexican holiday (Torreon, Coahuila) in junior high. I did my BSc in Physics and Mathematics at New Mexico State University and obtained a Secondary Teaching Certificate in Mathematics from the University of Texas at Austin. I then taught high school mathematics for two years in inner-city Detroit, Michigan. In 1995 I returned to university, completing an MSc in abstract algebra (arithmetic degree theory) and a doctorate on mathematical modeling of hydrothermal eruptions at Massey University. More recently I have had the opportunity to apply my modeling experience to the study of the three-dimensional structure of hair molecules while working as a postdoctoral fellow in the Physics discipline at Massey.




Information has been received about the following publications. Anyone interested in reviewing any of these books should contact

David Alcorn
Department of Mathematics
University of Auckland

Andrievskii VV, Discrepancy of signed measures and polynomial approximation. (Springer Monographs in Mathematics) 438pp.
Arveson W, A short course on spectral theory. (Graduate Texts in Mathematics, 209) 135pp.
Aubert G, Mathematical problems in image processing. (Applied Mathematical Sciences, 147) 286pp.
Baker A, Matrix groups. (Springer Undergraduate Mathematics Series) 330pp.
Bielecki TR, Credit risk: Modeling, valuation and hedging. (Springer Finance) 500pp.
Blyth TW, Further linear algebra. (Springer Undergraduate Mathematics Series) 230pp.
Cederberg JN, A course in modern geometries. (2nd ed) (Undergraduate Texts in Mathematics) 439pp.
Cyganowski S, From elementary probability to stochastic differential equations with MAPLE. (Universitext) 310pp.
Harville DA, Matrix algebra: Exercises and solutions. 271pp.
Heyde CC, Statisticians of the centuries. 480pp.
Hiriart-Urruty J-B, Fundamentals of convex analysis. (Grundlehren Text editions) 259pp.
Hoppensteadt FC, Modeling and simulation in medicine and the life sciences. (Texts in Applied Mathematics, 10) 354pp.
Jost J, Riemannian geometry and geometric analysis. (3rd ed) (Universitext) 532pp.
Kac V, Quantum calculus. 112pp.
Lang S, Short calculus. (Undergraduate Texts in Mathematics) 260pp.
Logan JD, Transport modeling in hydrogeochemical systems. (Interdisciplinary Applied Mathematics, 15) 223pp.
Mandelbrot B, Gaussian self-affinity and fractals. 654pp.
Martinez A, An introduction to semiclassical and microlocal analysis. (Universitext) 190pp.
Molloy M, Graph colouring and the probabilistic method. (Algorithms and Combinatorics, 23) 326pp.
O Searcoid M, Elements of abstract analysis. (Springer Undergraduate Mathematics Series) 298pp.
Rordam M, Classification of nuclear C*-algebras. Entropy in Operator algebras. (Encyclopaedia of Mathematical Sciences, 126) 198pp.
Ryaben'kii VS, Method of difference potentials and its applications. (Springer Series in Computational Mathematics, 30) 538pp.
Saranen J, Periodic integral and pseudodifferential equations with numerical approximation. (Springer Monographs in Mathematics) 452pp.
Saxe K, Beginning functional analysis. (Undergraduate Texts in Mathematics) 197pp.
Schneider P, Nonarchimedean functional analysis. (Springer Monographs in Mathematics) 156pp.
Stillwell J, Mathematics and its history. (2nd ed) (Undergraduate Texts in Mathematics) 542pp.
Stubhaug A, The mathematician Sophus Lie. 555pp.
Toth G, Finite Möbius groups, minimal immersions of spheres, and moduli. (Universitext) 317pp.


Information and Coding Theory
by Gareth A Jones and J Mary Jones, Springer Undergraduate Mathematics Series,
Springer-Verlag, Berlin, 2000, 210pp, DM 59.00. ISBN 1-85233-622-6.

I first taught Coding Theory in 1996 when I inherited a third year combinatorics course on the untimely death of Derrick Breach. I naturally based my course on what was already there, and slowly developed it over the years. It now stands alone as a 24 lecture course, attracting 40 or more students, more than half of them electrical engineers. My course deals with the mathematics behind codes, but is attuned to the applications rather than to the abstract algebra.

I therefore could not resist the opportunity, as I began to teach this course for the last time, to review this book, whose coverage is quite different from that of my own course.

The first two chapters deal with codes of variable word length, the aim being to construct a code as compact as possible to convey the data. In the first chapter the key problem is unique decodability: if the word length varies you need to be able to identify the ends of the words, preferably as soon as they are received. The second chapter deals with the optimality of such codes, culminating in the construction of Huffman codes.
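The Huffman construction the second chapter culminates in can be sketched in a few lines. The following is my own illustrative implementation, not taken from the book: it repeatedly merges the two least probable subtrees, which is exactly what makes the resulting prefix code optimal in expected word length.

```python
import heapq
from collections import Counter

def huffman_code(freqs):
    """Build a binary Huffman code for a symbol -> frequency map."""
    # Each heap entry: (weight, tiebreak index, {symbol: codeword so far}).
    heap = [(w, i, {sym: ""}) for i, (sym, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    if count == 1:  # degenerate single-symbol alphabet
        return {sym: "0" for sym in freqs}
    while len(heap) > 1:
        # Merge the two least probable subtrees, prefixing 0 and 1.
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (w1 + w2, count, merged))
        count += 1
    return heap[0][2]

code = huffman_code(Counter("abracadabra"))
```

The result is a prefix code: no codeword is a prefix of another, so a received message can be decoded symbol by symbol as it arrives, which is the unique-decodability property discussed in the first chapter.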

The third and fourth chapters are concerned with entropy and Chapter 5 is centred on Shannon's Theorem. An outline of the proof, with admitted gaps, is given at this point, and a complete proof is given in an appendix at the end of the book: I commend this approach when the theorem is important and the outline of the proof reasonably simple but there are troublesome details. After the proof the chapter comments on the implications of the theorem, which proves the existence of `good' codes, but gives no indication of how to construct them, and its converse.

In Chapter 6 we begin the search for these good error-correcting codes. The Hamming Bound limits the size of the code for a given length and error-correcting capability, and the Gilbert-Varshamov bound guarantees the existence of a code of given length and minimum distance whose size attains the bound. I find the introduction of the Gilbert-Varshamov bound at this point unsatisfactory, since no actual construction is given. The chapter closes with the construction of Hadamard codes.
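The Hamming bound is easy to state concretely: a q-ary code of length n that corrects e errors has at most q^n divided by the volume of a radius-e Hamming ball. The following sketch is mine, not the book's:

```python
from math import comb

def hamming_bound(n, e, q=2):
    """Sphere-packing bound: the maximum possible number of codewords
    in a q-ary code of length n correcting e errors (distance >= 2e+1)."""
    ball_volume = sum(comb(n, i) * (q - 1) ** i for i in range(e + 1))
    return q ** n // ball_volume

# The binary Hamming code of length 7 corrects one error and has 16
# codewords, meeting the bound exactly -- a "perfect" code.
print(hamming_bound(7, 1))   # prints 16
```

Codes meeting this bound exactly, such as the Hamming and Golay codes described in the next chapter, are the perfect codes.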

The seventh chapter deals with the basics of linear codes. Generator and parity check matrices are introduced, but there is no clear instruction on how to construct the latter from the former in the general case when the message and check digits are interleaved. The Hamming and Golay codes are described, but I could find no statement of the theorem that these are the only perfect binary codes. There is a brief indication of how to use Steiner triple systems to construct codes. The chapter ends with the coset and syndrome approaches to error correction. The back cover claims that the book deals with Reed-Muller codes, but in fact the only reference to them is a single exercise set for the student.
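As an illustration of the syndrome approach with which the chapter ends, here is a sketch (my own, assuming the standard parity-check matrix whose columns are the binary representations of 1 to 7) of single-error correction for the binary [7,4] Hamming code:

```python
# Columns of H are 1..7 in binary, so for a single bit error the
# syndrome, read as a binary number, is the (1-based) error position.
H = [[(j >> i) & 1 for j in range(1, 8)] for i in range(2, -1, -1)]

def syndrome(word):
    """Syndrome of a received length-7 word over GF(2)."""
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

def correct(word):
    """Correct at most one bit error in a received length-7 word."""
    s = syndrome(word)
    pos = int("".join(map(str, s)), 2)  # syndrome as error position
    if pos:                             # nonzero syndrome: flip that bit
        word = word.copy()
        word[pos - 1] ^= 1
    return word
```

A codeword is any word of zero syndrome; flipping any single bit of one produces a nonzero syndrome pointing straight at the flipped position, which is the coset-leader idea in its simplest form.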

To my mind the book stops too soon. There is nothing on cyclic codes, and although finite fields other than GF(2) are mentioned, they get no real use. Nor is there anything on burst errors or the technique of interleaving to make burst errors easier to correct. The authors acknowledge that cyclic codes are desirable, but plead lack of space in the course on which the book is based.

As is reasonable, the authors assume a background in linear algebra: anyone who understands the terms basis, rank and dimension for real vector spaces should have no difficulty with this aspect. Some probability is also used, but again it is very basic.

The style of writing is very concise, with little `chat'. The book could well be used as a text in a second or third year course in information and coding theory, which mathematics students could find satisfying; my engineering students, however, would find it too abstract and insufficiently related to their everyday experience, and my computer science students would appreciate a more algorithmic approach.

David Robinson
University of Canterbury

Fourier and Wavelet Analysis
by George Bachman, Lawrence Narici, and Edward Beckenstein,
Springer-Verlag, Berlin, 2000, 510pp, DM 119.00. ISBN 0-387-98899-8.

This seems to me a slightly unusual book although its singularity perhaps will diminish with time. In the Universitext series to which it belongs there are found expositions that vary quite widely in difficulty, purpose and ambition; several are in effect "primary secondary sources", being the most convenient reference for the results they contain, whilst others, whatever their merits, are textbooks whose topics are also treated in other readily available works. This book approximates the second type. It contains nothing not easily found in other books, nor even any notable innovations in the proofs. Nevertheless it has a lot to offer. I enjoyed reading it and learnt quite a lot from it, and I imagine that the same might be said of most readers who do not claim specialised knowledge.

The authors' general intention was, I suppose, to write a book that would present the prerequisites for wavelet theory in a form that would be as unterrifying and undemanding as possible, without excluding other interesting topics on the way. To do this they assume here and there (giving references) the odd fact from more advanced analysis, such as Lebesgue's theorem on points of approximate continuity, Fatou's lemma, or the Riesz-Fischer theorem. This seems sensible and for the same reason I also approve of their habit of quoting rather than simply referring back to results they have established much earlier in the book. As a consequence the book is notably less off-putting than its fairly solid content might warrant. The authors even express in their foreword the hope that the book is "fun", and I think it is, far more so than most books on a similar level.

It is in seven chapters. 1, Metric and Normed Spaces, 34pp; 2, Analysis, 54pp (the metric space approach to functions and continuity); 3, Bases, 50pp. These are intended to form an introduction to abstract methods in analysis, as if for an "American graduate student". Then 4, Fourier Series, 124pp, and 5, The Fourier Transform, 120pp. The sixth chapter discusses the Fourier transform on R^n and, very briefly, the Fast Fourier Transform in 28pp, and the seventh (86pp) introduces wavelets.

Wavelets are fashionable. The bibliography (of 114 items) lists around twelve titles of books with some pretensions to expounding their theory, but those few that I have seen are rather heavy going; if they aim at applications, they present the basic calculations (such things as deriving a mother wavelet from a Riesz multiresolution analysis) in a manner that bemuses the unprepared reader by its apparent lack of motivation, whilst if they are more theoretical they frighten him or her off by assuming a conceptual background that the reader may lack. At least in principle, the authors here provide all the concepts that are needed in the compass of a single book, and their treatment is correspondingly clearer and more straightforward, especially for the novice, than some others. On the other hand, they do not go much beyond the basic calculations and therefore, disingenuously, describe the result as "Wavelets for idiots?" (sic). There are again no applications, and so not much motivation either.

These basic calculations are often ingenious (and somewhat repetitive) applications of Plancherel's theorem on the line---which is, one supposes, the reason for Chapter 5, which in turn is the reason for Chapter 4. The authors' approach to Fourier analysis is classical, as they announce it will be in their foreword. The circle, the integers and the real line are the only groups allowed, and the theorems given are such as appear in, let us say, Carslaw; that is to say, pointwise convergence is much in evidence. For instance the Gibbs phenomenon is properly discussed. The advantage of this treatment, apart from its greater accessibility, is that the exposition is also a liberal education in analysis in which the more abstract ideas of the first three chapters are seen in action. The text is nevertheless sufficiently "modern" to mention Carleson's theorem (with references) and Kolmogorov's example (page 211).
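The Gibbs phenomenon mentioned here is easy to reproduce numerically: the Fourier partial sums of a square wave overshoot the jump by roughly 9%, tending to (2/pi)Si(pi), about 1.179, however many terms are taken. A small illustration of my own, not from the book:

```python
from math import sin, pi

def square_wave_partial_sum(x, n_terms):
    """Fourier partial sum of the square wave equal to +1 on (0, pi)."""
    return (4 / pi) * sum(sin((2 * k + 1) * x) / (2 * k + 1)
                          for k in range(n_terms))

# Scan a fine grid just to the right of the jump at x = 0: the overshoot
# does not die away as terms are added, it approaches (2/pi)*Si(pi).
N = 200
peak = max(square_wave_partial_sum(x / 10000, N) for x in range(1, 2000))
print(round(peak, 3))   # about 1.179
```

Doubling the number of terms narrows the spike towards the jump but leaves its height essentially unchanged, which is precisely the point of the phenomenon.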

So far I have praised the book. It is eminently readable (granted the usual American peculiarities), it has an interesting choice of topics discussed at a level suitable for a fairly wide readership, and some amusing and informative asides (although I am not always convinced of their literal accuracy---for instance the legend of the purchase of the Byrsa at Carthage related on page 224 is I think glancingly referred to in exactly one line of the Aeneid (book 1, line 368), and not at all in Tate's libretto for Dido and Aeneas. More seriously, I am not at all sure from the references to hand precisely who proved or published precisely what about Weierstrass's nowhere differentiable functions. Sikorski and Edwards both suggest the first paper to appear was that of du Bois-Reymond in 1875. The version stated on page 161 of the book is possibly not Weierstrass's but Hardy's (1916). But I cannot easily check the original sources.) However the book also has some moderately serious defects.

The foreword tells us that it was written with Scientific Word and Scientific Workplace, and that "the experience was ... interesting" (ellipsis the authors'). It shows. Both the vertical and the horizontal spacing are erratic and odd things happen from time to time; something has dropped out at the bottom of page 211, the page formatting on page 242 has gone slightly wrong, and so on. There are many "misprints"; most are trifling and would not discompose an experienced mathematician; as a couple of random examples out of very many, on page 338 line 8 the formula

k(t) = (|t| - 1) 1_{[-1,1]}(t)

is clearly untrue (take t = 0), and on page 416 line 2 what is meant is

V_j = cl[f_{j,k} : k]

and not V_j = cl[f_{j,k} : j,k]. This sort of thing, and there is a lot of it, might well baffle or mislead an unwary student and so might the occasional careless word that gives a wrong impression, as on page 469 line 1 where "thus" suggests, falsely, that it is the immediately preceding argument that establishes linearity and boundedness. Sometimes a word or phrase is used before it is defined, or a statement is made as if it were obvious without any indication that it is about to be proved. However, there are also substantial errors both of logic and of fact. A couple that strike the most superficial eye are on page 36, line 22 where the (unreduced) suspension of a square is unjustly accused of being a cube, and on page 466, line 13 where it is stated that the union of two orthonormal bases is linearly dependent. On page 248, line 5 et seqq., the estimates need altering because of the negative sign; after this the last line should be 3p2e. On page 232 the (very simple) proof of 4.14.9 is quite wrong. 4.20.2(b) on page 259 is false, which means that the proof of 4.20.3 also needs correction. For the experienced reader errors like these (and I have a list of many more) may add flavour to the text and a good and forewarned student may also find them stimulating. Nevertheless, they do make it risky to use the book as a casual reference or as a textbook. There are also exercises by the way. The easy ones are very, very easy; the difficult ones are very difficult, and are furnished with "hints" that are usually full solutions. I have not checked all of them.

This book is therefore a curious mixture. If you want the standard elementary computations on wavelets or the classical theorems of Fourier analysis set out at a relatively genteel trot against a reasonable but not advanced mathematical background it is rather a good source. It is informative, interestingly and clearly written with intelligent comments and pleasing explanations, a delight to read. But you must be very cautious before accepting the details.

Christopher Atkin
Victoria University of Wellington


An Introduction to Riemann-Finsler Geometry,
by D. Bao, S.-S. Chern and Z. Shen, Graduate Texts in Mathematics, 200,
Springer-Verlag, Berlin, 2000, 431pp, DM 98.00. ISBN 0-387-98948-X.

Finsler geometry is often described as a generalisation of Riemannian geometry, with the metric depending not only on position but also on direction. The name refers to Paul Finsler, who wrote a thesis `On Curves and Surfaces in Generalised Metric Spaces' under Caratheodory in 1918. These generalised spaces were pointed out by Riemann in his inaugural lecture, but the topic was set aside with the comment that the calculations were quite time-consuming. The use of Riemann-Finsler in the title of this book is a reminder of Riemann's contribution. Riemann-Finsler geometry is not a new branch of the subject --- this is simply a book about Finsler geometry.

Finsler's work concerned the geometry of certain problems in the calculus of variations (a topic which Hilbert included in his 23 unsolved problems in 1900). The subsequent development of the local geometry as a generalisation of Riemannian geometry inevitably produced a variety of definitions for key concepts. For example, the Levi-Civita connection of Riemannian geometry has two characteristic properties: (1) its torsion is zero, (2) it is compatible with the metric. Berwald and Cartan generalised different aspects of this connection to Finsler geometry and obtained two different connections which possessed the second property but not the first, while Chern's connection (which is identical with Rund's) has the first property but not the second. It is now known that there is no connection in a non-Riemannian Finsler space which has both these properties. The theory of fibre bundles has contributed to our understanding of the underlying structure of a Finsler manifold, although differing viewpoints remain.
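For readers who want the two properties spelled out: for a connection \nabla on a Riemannian manifold (M, g) they are the standard conditions (my notation, not quoted from the book)

```latex
% (1) torsion-freeness:
T(X,Y) \;=\; \nabla_X Y \;-\; \nabla_Y X \;-\; [X,Y] \;=\; 0,
% (2) metric compatibility:
Z\,g(X,Y) \;=\; g(\nabla_Z X,\, Y) \;+\; g(X,\, \nabla_Z Y).
```

In the Riemannian case these two conditions together single out the Levi-Civita connection uniquely; the point of the paragraph above is that in a genuinely Finslerian space no single connection can satisfy both.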

This book provides a systematic development of the elementary and essential aspects of the subject. It establishes a Finsler manifold and explores its geometry, paying homage to the historical concepts that are relevant to its purposes. We are introduced to the Finsler versions of well-known theorems from Riemannian geometry and finally we study the Riemannian manifold as a special case of the Finsler theory. Inevitably there is a selection of material and here the bias is towards global geometry.

The book is set out in three parts: Finsler Manifolds and their Curvature, Calculus of Variations and Comparison Theorems, Special Finsler Spaces over the Reals.

The first part describes the Finsler structure, F : TM → [0, ∞), of a manifold M. The Chern connection acts on the pulled-back vector bundle π*TM over either the slit tangent bundle TM \ 0 (for local problems) or the tangent sphere bundle SM (for global problems). The authors manage to deal with both situations at once by using only objects on TM \ 0 which are invariant under positive rescaling. Surprisingly, the Chern connection plays a parallel role to the Levi-Civita connection in Riemannian geometry, although the latter acts on the bundle TM over M. The traditional tensor notation is used when it is convenient and there are many calculations in this medium as well as in the coordinate-free notation. We meet various curvature tensors, the Bianchi identities, Schur's lemma and a generalised Gauss-Bonnet theorem for a Landsberg surface, which is a particular type of Finsler surface. Maple code is included for some curvature calculations, and it is suggested that a geometry-minded computer scientist may help advance the field significantly.

Part 2 deals with the calculus of variations and comparison theorems, placing some concepts of global Riemannian geometry into the broader Finsler setting. These include variations of arc length, Jacobi fields, the Gauss lemma, the Hopf-Rinow theorem, the Bonnet-Myers theorem, the Cartan-Hadamard theorem and Rauch's first theorem.

Part 3 devotes a chapter to each of the special cases: Berwald and Randers spaces, spaces of constant flag curvature, and finally Riemannian and Minkowski manifolds.

This is a book designed for graduate students (and researchers) to work through. It is carefully written with remarks, explanations and reminders of earlier material when it is relevant to the current situation. There are many exercises to consolidate and extend the reader's knowledge, the more difficult ones include step-by-step guidance. References are listed at the end of each chapter inviting on-going research. The authors suggest that this book could be used for a 3 semester graduate course, for students with a working knowledge of tensors and manifolds and some familiarity with surfaces and Gaussian curvature.

Gillian Thornley
Massey University


The Teaching and Learning of Mathematics at University Level
An ICMI Study (New ICMI Study Series, volume 7)
edited by Derek Holton, with section editors Michele Artigue, Urs Kirchgraber,
Joel Hillel, Mogens Niss and Alan Schoenfeld
Kluwer Academic Publishers, Dordrecht/Boston/London, 2001, 560pp. EUR 206, USD 190, GBP 128. ISBN 0-7923-7191-7

This is rather a difficult book to review as a whole, being more in the nature of a volume of conference proceedings. It consists of 46 articles on mathematics education at university level, arranged in seven sections: Introduction, Practice, Research, Mathematics and Other Disciplines, Technology, Assessment, Teacher Education. According to the editor's preface, the basis for the book was the material developed at the study conference on the Teaching and Learning of Mathematics at University Level held in Singapore in December 1998. However, the book is not exactly the proceedings of that conference: versions of some of the conference papers appeared in a special issue (February 2000) of the International Journal of Mathematical Education in Science and Technology, and the present volume includes some material not presented at the conference. The preface does not make it clear whether this extra material consists of expansions of conference presentations or entirely new articles.

This reviewer noticed a few trivial and obvious misprints, and one incorrect mathematical example on p116; but the book has a systematic production fault which was initially an irritation and eventually became thoroughly annoying. In more than one-third of the articles, textual material was lost between the bottom of the first page and the top of the second — at least one line was missing in each case, and possibly more. There was of course no way of knowing how much had been omitted. This fault occurred on the page boundaries 13-14, 87-88, 137-138, 179-180, 185-186, 199-200, 207-208, 221-222, 255-256, 283-284, 321-322, 371-372, 395-396, 431-432, 481-482, 501-502, 529-530 and 539-540 (on p501, for good measure, half of one sentence has disappeared). This kind of production flaw is not acceptable in any book, least of all one as overpriced as the volume under review.

Turning now to the contents of the book: the intended readership seems to be primarily teachers of mathematics at university level rather than mathematics education researchers, and this review is written from that point of view (which conveniently is the reviewer's own perspective).

Two articles in particular caught this reviewer's attention and would be good starting-points for anyone wanting to explore this collection. The opening article by Claudi Alsina ("Why the Professor Must be a Stimulating Teacher") provides an excellent introduction to current issues in tertiary mathematics education and convincingly urges the need for change. The article by Lynn Steen ("Revolution by Stealth: Redefining University Mathematics") offers a provocative discussion of the changes in the mathematics that is taught in universities, changes brought about by public demands for "relevance" and "accountability".

Generally, as is only to be expected in a collection of this nature, the articles are of variable quality and interest. For this reviewer, the second section on Practice was the most valuable. It contains a number of examples of new approaches to teaching as well as useful general discussions of issues in curriculum development and teaching practice. The sections on Technology and Assessment include some useful examples of different approaches and ideas. Also of great interest to me was an article in the Research section by Jean-Luc Dorier and Anna Sierpinska, on research into the teaching and learning of linear algebra. The authors raise some deep questions about the nature of the difficulties that students have in learning linear algebra and come to some surprising conclusions about how the subject should be taught.

By contrast, a number of articles describing education systems and organisations in various countries and institutions were rather pedestrian presentations, though of some interest in showing the variety of structures that exist.

This is a book for dipping into rather than sustained reading. Alan Schoenfeld in his article "Purposes and Methods of Research in Mathematics Education" puts it well: mathematicians approaching research results in mathematics education (including this volume) "should not look for definitive answers, but for ideas they can use" (p235). As university mathematicians face changes and challenges in the content and context of their teaching, the appearance of a volume such as this is timely. It is a great pity that its exorbitant price will probably deter most people from adding it to their individual collections, but it should certainly be on library shelves as a valuable and thought-provoking source of ideas.

Michael Carter
Massey University


The University of Auckland, 7-10 December 2001

The annual `Lie meeting' had its origins as an informal gathering, usually held at the ANU, to present talks and discuss research developments in Lie groups, their representations and a broad range of related areas. The event has grown over the years into a significant conference, held at various locations in Australia and attracting up to 70 participants in some years. For this latest meeting the title was adjusted; `Australia' became `Australasian'! It is the first time the meeting has been held outside Australia.

In the tradition of this conference series, most attendees gave talks. The meeting was distinguished by the particularly diverse range of topics covered, including applied mathematics, mathematical physics, analysis and quantum groups. Despite this range, the common link of Lie theory made the talks accessible and stimulating for the majority of participants, and most presentations were of a very high standard. The full length talks were given by Thomas Branson (Univ. Iowa), Michael Cowling (UNSW), Gaven Martin (Auckland), Andrew Mathas (Univ. Sydney), Alexander Molev (Univ. Sydney), Brynjulf Owren (Norwegian Univ. Science and Technology), Boris Pavlov (Auckland), David Robinson (King's College London) and Paul Sorba (Lab. Theoretical Phys. Annecy, France).

Attendance was no doubt affected slightly by the World Trade Center event on September 11, that date being around the time most overseas participants would have been considering booking their tickets. Nevertheless we had over 30 participants, a majority being from Australia and several from further afield. The conference was run from Friday through the weekend to Monday in order to be close in time to the 2001 New Zealand Mathematics Colloquium and to avoid clashes with various meetings in Australia.

There was no official excursion but the organisers and other locals helped many visiting participants locate good sources of food and wine. The conference dinner also served partly as an excursion as this was held on the Saturday night at Gaven Martin's residence in Albany.

The conference was financially supported locally by the Mathematics Department at the University of Auckland and the Marsden grants of Vladimir Pestov and Gaven Martin. (All participants paid for their own travel.)

Rod Gover

Westport 19-22 February 2002

This was the second conference on this theme, the first having been held in Christchurch in February 1999. The conference was based at the University of Canterbury's field station at Westport. The venue provided easy access to a variety of walks and scenic attractions, forming a beautiful backdrop for those all-important unstructured interactions between delegates.

The theme was interpreted widely and we had talks on a wide variety of topics. A sample at the theoretical end: talks on how to form bases for multivariate spline spaces, on Pythagorean-hodograph curves and their properties, and on wavelets and Riesz bases. At the more directly applied end were talks on how to model the geometry of leaves for the purposes of building virtual plants, on using variational splines with restricted-range constraints for reconstruction of images from lidar data, and on visualisation with positivity constraints. There were participants from Australia, Canada, England, Germany, Korea, Singapore, Thailand and New Zealand.

The organisers are grateful to the New Zealand Mathematical Society, the University of Auckland, the University of Canterbury and Lincoln University for their support of this successful conference.

Conference Participants Outside the Punakaiki Rocks Hotel

Back Row: James Lyness (Argonne), Zouwei Shen (Singapore), Shayne Waldron (Auckland), Reinhard Klette (Auckland), Elena Berdysheva (Erlangen), Ian Sloan (New South Wales), Keith Unsworth (Lincoln).
Row 3: Rua Murray (Waikato), Marian Neamtu (Vanderbilt), Boris Kvasov (Suranaree, Thailand), S.L. Lee (Singapore), Ken Brodlie (Leeds), K.H. Kwon (KAIST, Korea), Jason McEwen (ARANZ, Christchurch).
Row 2: Tim Mitchell (ARANZ Christchurch), Tim McLennan (ARANZ, Christchurch), Garry Newsam (DSTO Adelaide).
Row 1: Rida Farouki (UC Davis), Bruce van Brunt (Massey), Len Bos (Calgary), Rick Beatson (Canterbury), Birgit Loch (Queensland), Martin Buhmann (Giessen).

Rick Beatson for Keith Unsworth (Lincoln)
Shayne Waldron (Auckland)


The annual New Zealand phylogeny workshop was held this year at Aotearoa Lodge, Whitianga, "NZ's premier beach resort", in the second week of February. These workshops have been running since 1996 and provide an excellent opportunity for mathematicians and biologists to get together and tell disparaging jokes about each other, play frisbee and, of course, make significant breakthroughs in the field of biomathematics. This year we had around 50 biologists, computer scientists, statisticians, physicists, mathematicians and other people spanning these disciplines in attendance, from as far afield as the UK, France, Israel, Sweden and Australia.

Talks worthy of note included Scott Baker on biosurveillance, using mitochondrial DNA sequences and phylogenetic analysis to identify the source of various whale products, and Mike Charleston from Oxford on methods of detecting and representing coevolution of different biological systems.

Other interesting talks included Elchanan Mossel from Microsoft on using statistical physics to study phase transitions in phylogeny, and Hamish Keston, an Auckland University undergraduate, who despite my bumbling introduction delivered a fantastic talk on a third-year summer project for Allen Rodrigo, comparing two validation techniques used in phylogeny, the bootstrap and the jackknife.

(Un)fortunately several of the participants, most notably Mike Hendy, David Penny and Peter Lockhart, had to leave the conference prematurely after being short-listed for CoRE funding, which they later received (See for more information). They were lucky to make it out of town at all, as both roads out of Whitianga were closed on the Thursday due to flooding.

The organiser this year, Allen Rodrigo (School of Biological Sciences, University of Auckland), did a fabulous job with this fruitful event, and I'm sure everyone is looking forward to next year's workshop, which is to be organised by Mike Steel and held in early February at Kaikoura.

Paul Gardner

SEEM4: Fourth Conference On Statistics In Ecology
And Environmental Monitoring

Population Dynamics: The Interface Between Models and Data
9-13 December, 2002


Pre-Conference Workshop on Matrix Population Models
4-6 December 2002

Centre for Applications of Statistics and Mathematics, University of Otago, Dunedin, New Zealand.
Web page:
Email enquiries:

Conference Programme
The purpose of the conference is to bring together ecologists, statisticians, fisheries scientists and modellers in order to discuss common issues in the modelling of population dynamics. Our hope is that ecologists and other scientists can benefit from insight that statisticians can provide on the latest techniques in parameter estimation, and that statisticians can better understand the needs of ecologists by becoming familiar with the types of population dynamics models that are currently being used.

Invited Speakers
Three leading researchers in this area have agreed to give invited talks, and to lead an end-of-conference forum.
Hal Caswell (Senior Scientist, Woods Hole Oceanographic Institution, USA) is well known for his work on population matrix models, and a second edition of his landmark book, Matrix Population Models: Construction, Analysis, and Interpretation, has recently been published by Sinauer Associates, Massachusetts, USA.
Jean-Dominique Lebreton (Head of the Department of Population Biology, Centre for Functional and Evolutionary Ecology, CNRS, Montpellier, France) is famous for his work on both mark-recapture methods and population dynamics. He brings a breadth of knowledge to the conference that is rare, in that he is an experienced practitioner in both biometry and population modelling.
Byron Morgan (Professor of Applied Statistics, University of Kent, UK) is an internationally renowned applied statistician. Recently his interest in statistical ecology has led him and his co-workers to develop new statistical methodology for evaluating animal population dynamics, which he will present at the conference.

Conference Proceedings
The proceedings of the conference will appear in a special issue of the Australian and New Zealand Journal of Statistics. If you wish to have your talk considered for the proceedings, you will need to provide a manuscript version to the organizers at the conference.

Pre-Conference Workshop
Hal Caswell and Jean-Dominique Lebreton have kindly agreed to run a 2.5-day workshop on matrix population models prior to the conference (4-6 December). The number of participants will be restricted to 20. The workshop is aimed primarily at quantitative ecologists and population biologists, in either fundamental or applied research. The material presented is of interest in both animal and plant population dynamics, and is useful for both research and management. At the end of the workshop, participants should be able to build and run basic matrix models using available software, address questions on population dynamics using such models, and have a clear view of the generalizations that are available and relevant to their research.

Participants are expected to have a general interest in quantitative approaches, familiarity with basic mechanisms in population dynamics (life cycle, fecundity and mortality/survival processes), knowledge of basic calculus and matrix algebra, and some acquaintance with general statistical procedures, in particular generalized linear models.

The workshop content will be a balance between modelling approaches per se and estimation procedures/model use in practice, and will include the following topics: Matrix model formulation; From life cycle to matrix; Linear and nonlinear models; Deterministic and stochastic models; Linear models (transient analysis, asymptotic analysis, eigenvalues, eigenvectors); Sensitivity analysis; Matrix models for classical life tables; Estimation from transition frequency data; Stochastic models; Density-dependent models; Estimation from mark-recapture and other kinds of individual history data.

Registration Fees
The conference registration fee is NZ$500, with a student fee of NZ$250. The workshop registration fee is NZ$300 (no student discount). If you wish to register for both the conference and workshop, the combined fee is NZ$750 (NZ$500 for students). A 20% surcharge will apply to conference registrations made after 31 October 2002.

Important Dates
Early Registration Before Friday, 31 October 2002
Abstracts Deadline Friday, 31 August 2002

New Plymouth 4-11 January 2003

This year the annual NZMRI summer workshop will be based in beautiful New Plymouth in the North Island of New Zealand. This follows previous workshops in Huia (1994), Tolaga Bay (1996, 1997), Napier (1998, 2002), Raglan (1999), Kaikoura (2000), and Nelson (2001).

The topic for New Plymouth (2003), will be

"Combinatorics and Combinatorial Aspects of Biology,"

although this will be interpreted broadly. As usual, families are invited to come. We especially encourage New Zealand graduate and senior students, and ask that most of the talks be directed at a graduate student level. The standard format is that we have lectures in the morning, the afternoons are left free for individual pursuits, families, sightseeing, mathematical discussions, and the like, and we have lectures in the early evening after dinner. There will also be one day off (traditionally Wednesday).

We have a stellar group of speakers, all of whom are world-renowned mathematicians and computer scientists, and very fine speakers. We have asked that each speaker give a series of 2-3 lectures for this workshop, the first two being easily accessible to graduate students.

Our speakers currently include (in no particular order):

Karl Broman. (Johns Hopkins University, Recombination Mapping),
Mike Hallett. (McGill University, Parametric Aspects of Computational Biology),
Neil Robertson. (Ohio State University, The Graph Minors Project),
Martin Grohe. (University of Edinburgh, Logical Aspects of Graphs),
Andreas Dress. (University of Bielefeld, Overview of Combinatorial Biology),
Tandy Warnow. (University of Texas, Mathematical Aspects of Phylogeny),
Lior Pachter. (University of California, Berkeley, Genefinding),
Terry Speed. (University of California, Berkeley, Mathematical Aspects of Gene Expression), and
Richard Stanley. (MIT, Enumerative Combinatorics).

It is primarily being organized by Geoff Whittle and Rod Downey of Victoria University, who will be happy to answer any questions. This should be a wonderful workshop.

Important Note
As usual, we will cover most local costs. This includes most lunches, dinners and accommodation. This year we will make sure that the meeting does not get too big. We will soon be putting out an announcement asking for numbers. If you wish to be on such a list, please e-mail Geoff Whittle.

Conferences in 2002

July 7-10 (Fraser Island, Queensland) Australian Workshop on Combinatorial Algorithms (AWOCA 2002)
email: Diane Donovan (

July 7-12 (Sydney) Algorithmic Number Theory Symposium V

July 8-12 (Melbourne) 14th International Conference on Formal Power Series and Algebraic Combinatorics (FPSAC 2002)
See December Newsletter for first announcement and call for papers.

September 29 - October 3 (Brisbane) 5th Biennial Conference of the Engineering Mathematics and Applications Conference
email: Mike Pemberton (




Executive summary for authors of research papers in journals

Endorsed by the Executive Committee of the IMU in its 68th session in Princeton, NJ,
May 14-15, 2001

The number of mathematical papers that are stored or circulated as electronic files is increasing steadily. It is important that copyright agreements should keep in step with this development, and not inhibit mathematical authors or their publishers from making best use of the electronic medium together with more traditional media. While most mathematicians have no desire to learn the subtleties of copyright law, there are some general principles that they should keep in mind when discussing copyright for research papers with their publishers.

  1. A copyright agreement with your publisher is a bargain struck between his interests and yours. You are entitled to look out for your interests. Most journal publishers have a standard copyright form, and may be unwilling to vary it for individual authors. But nothing prevents you from asking, if you see room for improvement. Pressure from authors may lead publishers to change their standard contracts.
  2. Three groups of people have an interest in your paper:
    1. Yourself and your employer (who may in some countries be automatically the original copyright holder and hence a party to the copyright agreement);
    2. The journal publisher;
    3. Users of the paper who are not parties to the copyright agreement, including readers and libraries.

    One of the main purposes of your copyright agreement is to control how your publisher or you make the paper available to this third group. Publishers will hardly allow individual authors to dictate agreements with libraries. But if you know that a certain journal publisher makes life hard for libraries, you can take this into account when choosing where to submit your paper.

  3. There is no ideal copyright agreement for all situations. But in general your agreement should contain the following features:
    1. You allow your publisher to publish the paper, including all required attachments if it is an electronic paper.
    2. You give your publisher rights to authorize other people or institutions to copy your paper under reasonable conditions, and to abstract and archive your paper.
    3. Your publisher allows you to make reprints of the paper electronically available in a form that makes it clear where the paper is published.
    4. You promise your publisher that you have taken all reasonable steps to ensure that your paper contains nothing that is libellous or infringes copyright.
    5. Your publisher will authorize reprinting of your paper in collections and will take all reasonable steps to inform you when he does this.
  4. Should you grant full copyright to the publisher? In some jurisdictions it is impossible to transfer full copyright from author to publisher; instead the author gives the publisher an exclusive right to do the things that publishers need to do, and these things need to be spelt out in the agreement. This way of proceeding is possible in all jurisdictions, and it has the merit of being clear and honest about what is allowed or required.

The complete copyright checklist was written by Wilfrid Hodges. It was approved and is recommended by the Committee on Electronic Information and Communication of the International Mathematical Union (IMU).


Endorsed by the IMU Executive Committee on May 15, 2001 in its 68th session in Princeton, NJ.

Open access to the mathematical literature is an important goal. Each of us can contribute to that goal by making available electronically as much of our own work as feasible.

Our recent work is likely already in computer readable form and should be made available variously in TeX source, dvi, pdf (Adobe Acrobat), or PostScript form. Publications from the pre-TeX era can be scanned and/or digitally photographed. Retyping in TeX is not as unthinkable as first appears.

Such actions will greatly enlarge the reservoir of freely available primary mathematical material, particularly helping scientists working without adequate library access.

This statement was written and recommended by the Committee on Electronic Information and Communication (CEIC) of the International Mathematical Union (IMU).


The New Zealand Mathematics Colloquium 2002 will be in Auckland. As in recent years, it will be held towards the end of the year, probably in late November or early December. If you have a preference for a certain date, please get in touch with David Gauld.


The Mathematics and Information Sciences panel of the Marsden fund received 73 applications this year, compared to 74 last year. Of these, 15 were FastStart applications (two-year grants of $50,000 p.a. for researchers within 7 years of their PhD). It is expected that a similar amount of money will be available this year as last year. By the time you read this, the results of the first round will be known. Good luck everybody!


The New Zealand Mathematical Society coordinates and provides some financial support for a tour of NZ universities by a visiting mathematician. Usually this person, known as the NZMS Visiting Lecturer, will spend two to three days at each of the six main university centres, and give at least two lectures at each place: one for a general audience, and one more closely tied to his or her own particular research interests.

Recent NZMS Visiting Lecturers have included John Loxton (Macquarie University), Andreas Dress (University of Bielefeld), Colin Maclachlan (University of Aberdeen), Roger Grimshaw (Monash University), Valerie Isham (University College London), John Fauvel (Open University), and John Guckenheimer (Cornell University).

In 2002, there are two NZMS Visiting Lecturers: Professor John Butcher (University of Auckland) and Dr Jim Geelen (University of Waterloo). John will be visiting the universities in May and early June. The contact person for John is Bill Barton (University of Auckland). Jim will be visiting the universities later this year. The contact person for Jim is Geoff Whittle (Victoria University).

Charles Semple


The Society has decided that there will now be Graduate Members, Accredited Members and Fellows of the NZMS. The deadline for applications is Wednesday May 1st, 2002. If you would like to be considered, or would like to nominate someone, please send for application forms to

The Accreditation Secretary
C/- Department of Mathematics and Statistics
University of Otago
PO Box 56

To help you better understand what each category of membership is, I have added a copy of Article IV of the Constitution.

Article IV: Optional Accreditation

An Ordinary Member (or Reciprocity Member) may apply to the Council to become a Graduate Member, Accredited Member, or Fellow. The Council shall make and issue, and may revise from time to time, Rules which shall give effect to the following requirements.

  1. A Graduate Member shall have completed a degree or diploma at a recognised university or other tertiary institution, the studies for which shall include mathematics as a major component, and shall be currently employed or occupied in the development, application or teaching of mathematics.
  2. An Accredited Member shall have completed a postgraduate degree in mathematics at a recognised university or other tertiary institution, or shall have equivalent qualifications, and shall have been employed for the preceding three years in a position requiring the development, application or teaching of mathematics.
  3. A Fellow shall be a person who currently has or previously has had the qualifications of an Accredited Member and who, in addition, is deemed by the Accreditation Committee (see paragraph below) to have demonstrated a high level of attainment or responsibility in mathematics and to have made a substantial contribution to mathematics or to the profession of mathematician or to the teaching or application of mathematics.

An Honorary Member shall have the right to become a Fellow immediately upon application to the Council and without payment of a fee.

The Council shall establish an Accreditation Committee to consider applications for designation as a Graduate Member, Accredited Member or Fellow, and to administer the Rules described in the first paragraph of this Article. In its determinations, the Accreditation Committee shall discount interruptions to employment such as temporary unemployment and parental leave.

A Graduate Member may use the abbreviation GNZMS, an Accredited Member may use the abbreviation MNZMS, and a Fellow may use the abbreviation FNZMS. These designations and the corresponding abbreviations are the rights of that class of Member only while the member remains a financial member of the Society and while the occupational requirements outlined in the first paragraph of this Article continue to be satisfied. The occupational requirements shall be deemed to be satisfied by Honorary Members and in the case of interruptions to employment such as temporary unemployment and parental leave, and they shall not be applied in the case of retirement or promotion to an administrative or other position.

A fee shall accompany each application to the Accreditation Committee. The fee shall be additional to the annual subscription charged by the Society and shall be the only charge for accreditation.


If you have any queries could you please direct them to me at the above address or by email (

Derek Holton
Chair, Accreditation Committee


I would like to thank the NZMS for giving me partial financial support (NZ$500.00) to attend the MODSIM 2001 international conference at the Australian National University (ANU), Canberra, Australia, December 10-13, 2001. This conference was organised by the Modelling and Simulation Society of Australia and New Zealand.

At the conference, I presented a paper titled "Source term estimation of pollution from instantaneous point source", which was part of my PhD research under the supervision of Prof Robert McKibbin, Assoc Prof Robert McLachlan and Dr Igor Boglaev. This paper was published in volume 2 of the MODSIM 2001 Congress proceedings.

Another contributed paper, titled "Market dynamics of allocating land to biofuel and forest sinks", was also published in volume 3 of the same proceedings. This was joint work I did with Dr Aroon Parshotam, Dr Peter Read, Dr A. Korobeinikov and J. Lermit.

There were about 360 participants from 36 different countries, including Australia and New Zealand. It was a great opportunity for me to participate in an international conference, to meet experts in the field, and to share research experiences with other PhD students. Finally, I would also like to thank my supervisors and the institute at Massey University for their support.

P Kathirgamanathan
Massey University


ANZIAM2002 was held in Canberra between February 2 and 6. Ten attendees were from New Zealand, including two of the invited speakers, who gave superb presentations. Robert McKibbin's opening invited talk was accompanied by exploding fireworks, and flying shuttlecocks and squash balls--everyone was on the edge of their seats. What a wonderful way to begin a conference. Mick Roberts' invited talk was also excellent, and retained the award for having the most sheep per overhead in any of the talks presented, easily beating off the challenge from Larry Forbes, who was also the only Australian ever to include NZ among tropical islands! Larry won the Cherry-ripe prize for best non-student talk, and will be organising ANZIAM2004 in Hobart. The student talks were again excellent, with a joint winner for the Cherry prize.

Subject to a postal ballot, ANZIAM2003 will not be held, because of the demands of ICIAM2003 (July 7-11) on ANZIAM members who are part organisers with the Australian Mathematics Society. A generous earlybird registration rate will apply to ICIAM2003, to assist with cashflow. Department Heads should budget to take advantage of this earlybird rate by paying registrations of A$572 before 29 November 2002.

ICIAM2003 is expecting about 2000 attendees. This will be the largest such mathematics meeting in the Australasian region. As well as keynote addresses from internationally renowned mathematicians, and embedded meetings such as NZMC, there will be a series of minisymposia, organised by those interested. There are two-hour slots (four 30-minute presentations) each morning and afternoon for minisymposia, in many parallel sessions. Please consider organising a minisymposium in your area of interest.

The Mathematics in Industry Study Group MISG2003 will continue in Adelaide, but MISG2004 and MISG2005 will move to New Zealand for the first time. This is a unique opportunity for all staff and students in mathematics in New Zealand, and especially those in applied maths, statistics and operations research, to spend one week of intense collaborative mathematical effort to work on an industrial problem.

MISG is one of the highlights of the Australian mathematical calendar, with over 100 mathematicians (students, university staff, industrialists) attending to formulate/speculate/solve about six industrial problems. Each attendee works on mainly one of these problems, and the results are written up.

The physical requirements for MISG are easy access to library and computing facilities, as well as cheap accommodation and work facilities. That means MISG will be held on a university campus. Additionally, local (NZ) organisers need to find and formulate about six problems for MISG, and to organise the meeting. If you are interested in helping, please contact Graham Weir.

MISG2004 will be held in New Zealand from Monday January 26 to Friday January 30, so that there is time for attendees to travel to Hobart for ANZIAM2004, which runs from Sunday February 1 to Thursday February 5. This means that Australia Day (January 26) will fall within MISG, which is a disadvantage for some potential attendees. One way to avoid this problem for MISG2005 would be to hold ANZIAM2005 first, from Tuesday January 25 to Saturday January 29, and then to hold MISG2005 from Monday January 31 to Friday February 4, which allows the meetings to miss Waitangi Day, and MISG to miss Australia Day. Please pass any thoughts you have on these dates to Graham Weir.

A significant effort from the New Zealand applied mathematics community will be needed in order to make real successes of MISG2004 and MISG2005, as well as ANZIAM2005. I hope we can rely on your support for what should be an exciting time for applied mathematics in New Zealand.

Graham Weir
Industrial Research Ltd



The three-eighths rule and a figure eight three body orbit

The name Runge, whose famous paper appeared just over 100 years ago, is forever associated with the numerical methods based on this principle of nesting quadrature formulae within quadrature formulae. We want step by step methods in which the approximate solution is advanced from one point to the next in steps that can be made small with the hope of steadily increasing accuracy. Thus if an approximate solution y_{n-1} is known at x_{n-1}, then the approximation at x_n = x_{n-1} + h is given by

y_n = y_{n-1} + h(b_1 F_1 + b_2 F_2 + ... + b_s F_s),

where, for i = 1,2,...,s, F_i is an approximation to the derivative of y(x) evaluated at x_{n-1} + h c_i. These approximations to y'(x_{n-1} + h c_i) are found from the given differential equation using internal approximations Y_i ≈ y(x_{n-1} + h c_i). In the spirit of Nummerspiel I will focus on one of the classical Newton-Cotes formulae known as the ``three-eighths rule". This is slightly more accurate than the popular Simpson's rule, but more expensive to use because it requires one more evaluation of the integrand. For the three-eighths rule the b_i and c_i are given by the vectors

b = [1/8  3/8  3/8  1/8],      c = [0  1/3  2/3  1].
To turn this quadrature rule into a Runge-Kutta method, so that it can be used to solve differential equations, we need four internal quadrature rules which are chosen so that rule number i = 1,2,3,4 uses only the abscissa number j if Fj has already been evaluated. The internal rules also have to be cunningly chosen so that the errors committed in the stages cancel each other out as much as possible when combined into the overall quadrature rule. It turns out that the internal rules must be those given in turn by the vectors

[0  0  0  0],      [1/3  0  0  0],      [-1/3  1  0  0],      [1  -1  1  0].

The numerical method based on these quadrature rules was discovered by Kutta as an example from his complete classification of fourth order Runge-Kutta methods.
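The stages of this method translate directly into a few lines of code. The following Python sketch (the function name and the test problem are my own illustration, not from the article) applies one step of the three-eighths rule, using the weights b = (1/8, 3/8, 3/8, 1/8), abscissae c = (0, 1/3, 2/3, 1), and the internal coefficient rows of Kutta's method:

```python
import math

def rk38_step(f, x, y, h):
    """One step of Kutta's three-eighths rule, a fourth-order
    Runge-Kutta method, for the scalar problem y' = f(x, y)."""
    F1 = f(x, y)
    F2 = f(x + h / 3, y + h * (F1 / 3))
    F3 = f(x + 2 * h / 3, y + h * (-F1 / 3 + F2))
    F4 = f(x + h, y + h * (F1 - F2 + F3))
    # Combine the stage derivatives with the three-eighths weights.
    return y + h * (F1 + 3 * F2 + 3 * F3 + F4) / 8

# Sanity check on y' = y, y(0) = 1, whose exact solution is exp(x):
# one step of length h = 0.1 agrees with exp(0.1) to about h^5.
y1 = rk38_step(lambda x, y: y, 0.0, 1.0, 0.1)
print(abs(y1 - math.exp(0.1)))
```

Halving h should reduce the error over a fixed interval by roughly a factor of sixteen, as expected of a fourth-order method.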

My calculation of the figure eight three body orbit, using this method, is shown on this page. It uses initial data computed by Carles Simo, and follows each of the three bodies just far enough for them to reach where the one they are following started from. The initial points used in my computations are marked by filled discs; at the two other marked positions along the orbit, the three masses form an isosceles triangle.

To learn more about this and other interesting orbits, good places to start are

John Butcher