

Date: Wednesday, 10 Sep 2014 20:11

Guest post by Bruce Bartlett

I recently put an article on the arXiv:

It’s about Chris Schommer-Pries’s recent strictification result from his updated thesis, that every symmetric monoidal bicategory is equivalent to a quasistrict one. Since symmetric monoidal bicategories can be viewed as the syntax for ‘stable 3-dimensional algebra’, one aim of the paper is to write this stuff out in a diagrammatic notation, like this:

pic

The other aim is to try to strip down the definition of a ‘quasistrict symmetric monoidal bicategory’, emphasizing the central role played by the interchangor isomorphisms. Let me explain a bit more.

Motivation

Firstly, some motivation. For a long time now I’ve been finishing up a project together with Chris Douglas, Chris Schommer-Pries and Jamie Vicary about 1-2-3 topological quantum field theories. The starting point is a generators-and-relations presentation of the oriented 3-dimensional bordism bicategory (objects are closed 1-manifolds, morphisms are two-dimensional bordisms, and 2-morphisms are diffeomorphism classes of three-dimensional bordisms between those). So, you present a symmetric monoidal bicategory from a bunch of generating objects, 1-morphisms, and 2-morphisms, and a bunch of relations between the 2-morphisms. These relations are written diagrammatically. For instance, the ‘pentagon relation’ looks like this:

pic

To make rigorous sense of these diagrams, we needed a theory of presenting symmetric monoidal bicategories via generators-and-relations in the above sense. So, Chris Schommer-Pries worked such a theory out, using computads, and proved the above strictification result. This implies that we could use the simple pictures above to perform calculations.

Strictifying symmetric monoidal bicategories

The full algebraic definition of a symmetric monoidal bicategory is quite intimidating, amounting to a large amount of data satisfying a host of diagrams. A self-contained definition can be found in this paper of Mike Stay. So, it’s of interest to see how much of this data can be strictified, at the cost of passing to an equivalent symmetric monoidal bicategory.

Before Schommer-Pries’s result, the best strictification result was that of Gurski and Osorno.

Theorem (GO). Every symmetric monoidal bicategory is equivalent to a semistrict symmetric monoidal 2-category.

Very roughly, a semistrict symmetric monoidal 2-category consists of a strict 2-category equipped with a strict tensor product, plus the following coherence data (see e.g. HDA1 for a fuller account) satisfying a bunch of equations:

  • tensor naturators, i.e. 2-isomorphisms \Phi_{(f',g'),(f,g)} : (f' \otimes g') \circ (f \otimes g) \Rightarrow (f' \circ f) \otimes (g' \circ g)
  • braidings, i.e. 1-morphisms \beta_{A,B} : A \otimes B \rightarrow B \otimes A
  • braiding naturators, i.e. 2-isomorphisms \beta_{f,g} : \beta_{A,B} \circ (f \otimes g) \Rightarrow (g \otimes f) \circ \beta_{A,B}
  • braiding bilinearators, i.e. 2-isomorphisms \beta_{A|B,C} : (id \otimes \beta_{A,C}) \circ (\beta_{A,B} \otimes id) \Rightarrow \beta_{A, B \otimes C}
  • symmetrizors, i.e. 2-isomorphisms \nu_{A,B} : id_{A \otimes B} \Rightarrow \beta_{B,A} \circ \beta_{A,B}

So — Gurski and Osorno’s result represents a lot of progress. It says that the other coherence data in a symmetric monoidal bicategory (associators for the underlying bicategory, associators for the underlying monoidal bicategory, pentagonator, unitors, adjunction data, …) can be eliminated, or more precisely, strictified.

Schommer-Pries’s result goes further.

Theorem (S-P). Every symmetric monoidal bicategory is equivalent to a quasistrict symmetric monoidal 2-category.

A quasistrict symmetric monoidal 2-category is a semistrict symmetric monoidal 2-category where the braiding bilinearators and symmetrizors are equal to the identity. So only the tensor naturators, braiding 1-morphisms, and braiding naturators remain!

The method of proof is to show that every symmetric monoidal bicategory admits a certain kind of presentation by generators-and-relations (a ‘quasistrict 3-computad’). And the gizmo built out of a quasistrict 3-computad is a quasistrict symmetric monoidal 2-category! Q.E.D.

Stringent symmetric monoidal 2-categories

In my article, I reformulate the definition of a quasistrict symmetric monoidal 2-category a bit, removing redundant data. Firstly, the tensor naturators \Phi_{(f',g'),(f,g)} are fully determined by their underlying interchangors \phi_{f,g},

(1)   \phi_{f,g} = \Phi_{(f, id), (id, g)} : (f \otimes id) \circ (id \otimes g) \Rightarrow (id \otimes g) \circ (f \otimes id)

This much is well-known. But also, the braiding naturators are fully determined by the interchangors. So, I define a stringent symmetric monoidal 2-category purely in terms of this coherence data: interchangors, and braiding 1-morphisms. I show that they’re equivalent to quasistrict symmetric monoidal 2-categories.
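To see roughly why the first reduction works, here is a sketch in my own notation (writing \cdot for whiskering), not taken verbatim from the paper: since composition and the tensor product are strict, a general tensor naturator is a whiskered copy of a single interchangor.

```latex
% Strictness lets us split tensors of 1-morphisms:
%   f \otimes g = (f \otimes id) \circ (id \otimes g).
% A general tensor naturator then decomposes as
(f' \otimes g') \circ (f \otimes g)
   = (f' \otimes id) \circ (id \otimes g') \circ (f \otimes id) \circ (id \otimes g)
\xRightarrow{\;(f' \otimes id)\,\cdot\,\phi^{-1}_{f,g'}\,\cdot\,(id \otimes g)\;}
   (f' \otimes id) \circ (f \otimes id) \circ (id \otimes g') \circ (id \otimes g)
   = (f' \circ f) \otimes (g' \circ g).
```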

Wire diagrams

The ‘stringent’ version of the definition is handy, because it admits a nice graphical calculus which I call ‘wire diagrams’. I needed a new name just to distinguish them from vanilla-flavoured string diagrams for 2-categories where the objects of the 2-category correspond to planar regions; now the objects of the 2-category correspond to lines. But it’s really just a rotated version of string diagrams in 3 dimensions. So, the basic setup is as follows:

pic

But to keep things nice and planar, we’ll draw this as follows:

pic

These diagrams are interpreted according to the prescription: tensor first, then compose! So, the interchangor isomorphisms look as follows:

pic

So, what I do is write out the definitions of quasistrict and stringent symmetric monoidal 2-categories in terms of wire diagrams, and use this graphical calculus to prove that they’re the same thing.

That’s good for us, because it turns out these ‘wire diagrams’ are precisely the diagrammatic notation we were using for the generators-and-relations presentation of the oriented 3-dimensional bordism bicategory. For instance, I hope you can see the interchangor ϕ\phi being used in the ‘pentagon relation’ I drew near the top of this post. So, that diagrammatic notation has been justified.

Author: "willerton (S.Willerton@sheffield.ac.uk)" Tags: "Categories"
Date: Saturday, 06 Sep 2014 09:58

Ronnie Brown has brought to my attention a talk he gave recently at the Workshop Constructive Mathematics and Models of Type Theory, IHP Paris, 02 June 2014 - 06 June 2014.

Title: Intuitions for cubical methods in nonabelian algebraic topology

Abstract: The talk will start from the 1-dimensional Seifert-van Kampen Theorem for the fundamental group, then groupoid, and so to a use of strict double groupoids for higher versions. These allow for some precise nonabelian calculations of some homotopy types, obtained by a gluing process. Cubical methods are involved because of the ease of writing multiple compositions, leading to “algebraic inverses to subdivision”, relevant to higher dimensional local-to-global problems. Also the proofs involve some ideas of 2-dimensional formulae and rewriting. The use of strict multiple groupoids is essential to obtain precise descriptions as colimits and hence precise calculations. Another idea is to use both a “broad” and a “narrow” model of a particular kind of homotopy types, where the broad model is used for conjectures and proofs, while the narrow model is used for calculations and relation to classical methods. The algebraic proof of the equivalence of the two models then gives a powerful tool.

Slides are available from his preprint page.

Author: "david (d.corfield@kent.ac.uk)" Tags: "Math"
Date: Tuesday, 02 Sep 2014 18:15

One interesting feature of the Category Theory conference in Cambridge last month was that lots of the other participants started conversations with me about the whole-population, suspicionless surveillance that several governments are now operating. All but one were enthusiastically supportive of the work I’ve been doing to try to get the mathematical community to take responsibility for its part in this, and I appreciated that very much.

The remaining one was a friend who wasn’t unsupportive, but said to me something like “I think I probably agree with you, but I’m not sure. I don’t see why it matters. Persuade me!”

Here’s what I replied.

“A lot of people know now that the intelligence agencies are keeping records of almost all their communications, but they can’t bring themselves to get worked up about it. And in a way, they might be right. If you, personally, keep your head down, if you never do anything that upsets anyone in power, it’s unlikely that your records will end up being used against you.

“But that’s a really self-centred attitude. What about people who don’t keep their heads down? What about protesters, campaigners, activists, people who challenge the establishment — people who exercise their full democratic rights? Freedom from harassment shouldn’t depend on you being a quiet little citizen.

“There’s a long history of intelligence agencies using their powers to disrupt legitimate activism. The FBI recorded some of Martin Luther King’s extramarital liaisons and sent the tape to his family home, accompanied by a letter attempting to blackmail him into suicide. And there have been many many examples since then (see below).

“Here’s the kind of situation that worries me today. In the UK, there’s a lot of debate at the moment about the oil extraction technique known as fracking. The government has just given permission for the oil industry to use it, and environmental groups have been protesting vigorously.

“I don’t have strong opinions on fracking myself, but I do think people should be free to organize and protest against it without state harassment. In fact, the state should be supporting people in the exercise of their democratic rights. But actually, any anti-fracking group would be sensible to assume that it’s the object of covert surveillance, and that the police are working against it, perhaps by employing infiltrators — because they’ve been doing that to other environmental groups for years.

“It’s the easiest thing in the world for politicians to portray anti-fracking activists as a danger to the UK’s economic well-being, as a threat to national energy security. That’s virtually terrorism! And once someone’s been labelled with the T word, it immediately becomes trivial to justify using all that surveillance data that the intelligence agencies routinely gather. And I’m not exaggerating — anti-terrorism laws really have been used against environmental campaigners in the recent past.

“Or think about gay rights. Less than fifty years ago, sex between men in England was illegal. This law was enforced, and it ruined people’s lives. For instance, my academic great-grandfather Alan Turing was arrested under this law and punished with chemical castration. He’s widely thought to have killed himself as a direct result. But today, two men in England can not only have sex legally, they can marry with the full endorsement of the state.

“How did this change so fast? Not by people writing polite letters to the Times, or by going through official parliamentary channels (at least, not only by those means). It was mainly through decades of tough, sometimes dangerous, agitation, campaigning and protest, by small groups and by courageous individual citizens.

“By definition, anyone campaigning for anything to be decriminalized is siding with criminals against the establishment. It’s the easiest thing in the world for politicians to portray campaigners like this as a menace to society, a grave threat to law and order. Any nation state with the ability to monitor, infiltrate, harass and disrupt such “menaces” will be very sorely tempted to use it. And again, that’s no exaggeration: in the US at least, this has happened to gay rights campaigners over and over again, from the 1950s to nearly the present day, even sometimes — ludicrously — in the name of fighting terrorism (1, 2, 3, 4).

“So government surveillance should matter to you in a very direct way if you’re involved in any kind of activism or advocacy or protest or campaigning or dissent. It should also matter to you if you’re not, but you quietly support any of this activism — or if you reap its benefits. Even if you don’t (which is unlikely), it matters if you simply want to live in a society where people can engage in peaceful activism without facing disruption or harassment by the state. And it matters more now than it ever did before, because government surveillance powers are orders of magnitude greater than they’ve ever been before.”


That’s roughly what I said. I think we then talked a bit about mathematicians’ role in enabling whole-population surveillance. Here’s Thomas Hales’s take on this:

If privacy disappears from the face of the Earth, mathematicians will be some of the primary culprits.

Of course, there are lots of other reasons why the activities of the NSA, GCHQ and their partners might matter to you. Maybe you object to industrial espionage being carried out in the name of national security, or the NSA supplying data to the CIA’s drone assassination programme (“we track ‘em, you whack ‘em”), or the raw content of communications between Americans being passed en masse to Israel, or the NSA hacking purely civilian infrastructure in China, or government agencies intercepting lawyer-client and journalist-source communications, or that the existence of mass surveillance leads inevitably to self-censorship. Or maybe you simply object to being watched, for the same reason you close the bathroom door: you’re not doing anything to be ashamed of, you just want some privacy. But the activism point is the one that resonates most deeply with me personally, and it seemed to resonate with my friend too.

You may think I’m exaggerating or scaremongering — that the enormous power wielded by the US and UK intelligence agencies (among others) could theoretically be used against legitimate citizen activism, but hasn’t been so far.

There’s certainly an abstract argument against this: it’s simply human nature that if you have a given surveillance power available to you, and the motive to use it, and the means to use it without it being known that you’ve done so, then you very likely will. Even if (for some reason) you believe that those currently wielding these powers have superhuman powers of self-restraint, there’s no guarantee that those who wield them in future will be equally saintly.

But much more importantly, there’s copious historical evidence that governments routinely use whatever surveillance powers they possess against whoever they see as troublemakers, even if this breaks the law. Without great effort, I found 50 examples in the US and UK alone — read on.

Six overviews

If you’re going to read just one thing on government surveillance of activists, I suggest you make it this:

Among many other interesting points, it reminds us that this isn’t only about “leftist” activism — three of the plaintiffs in this case are pro-gun organizations.

Here are some other good overviews:

And here’s a short but incisive comment from journalist Murtaza Hussain.

50 episodes of government surveillance of activists

Disclaimer: Journalism about the activities of highly secretive organizations is, by its nature, very difficult. Even obtaining the basic facts can be a major feat. Obviously, I can’t attest to the accuracy of all these articles — and the entries in the list below are summaries of the articles linked to, not claims I’m making myself. As ever, whether you believe what you read is a judgement you’ll have to make for yourself.

1940s

1. FBI surveillance of War Resisters League (1, 2), continuing in 2010 (1)

1950s

2. FBI surveillance of the National Association for the Advancement of Colored People (1)

3. FBI “surveillance program against homosexuals” (1)

1960s

4. FBI’s Sex Deviate programme (1)

5. FBI’s Cointelpro projects aimed at “surveying, infiltrating, discrediting, and disrupting domestic political organizations”, and NSA’s Project Minaret targeted leading critics of Vietnam war, including senators, civil rights leaders and journalists (1)

6. FBI attempted to blackmail Martin Luther King into suicide with surveillance tape (1)

7. NSA intercepted communications of antiwar activists, including Jane Fonda and Dr Benjamin Spock (1)

8. Harassment of California student movement (including Stephen Smale’s free speech advocacy) by FBI, with support of Ronald Reagan (1, 2)

1970s

9. FBI surveillance and attempted deportation of John Lennon (1)

10. FBI burgled the office of the psychiatrist of Pentagon Papers whistleblower Daniel Ellsberg (1)

1980s

11. Margaret Thatcher had the Canadian national intelligence agency CSEC surveil two of her own ministers (1, 2, 3)

12. MI5 tapped phone of founder of Women for World Disarmament (1)

13. Ronald Reagan had the NSA tap the phone of congressman Michael Barnes, who opposed Reagan’s Central America policy (1)

1990s

14. NSA surveillance of Greenpeace (1)

15. UK police’s “undercover work against political activists” and “subversives”, including future home secretary Jack Straw (1)

16. UK undercover policeman Peter Francis “undermined the campaign of a family who wanted justice over the death of a boxing instructor who was struck on the head by a police baton” (1)

17. UK undercover police secretly gathered intelligence on 18 grieving families fighting to get justice from police (1, 2)

18. UK undercover police spied on lawyer for family of murdered black teenager Stephen Lawrence; police also secretly recorded friend of Lawrence and his lawyer (1, 2)

19. UK undercover police spied on human rights lawyers Bindmans (1)

20. GCHQ accused of spying on Scottish trade unions (1)

2000s

21. US military spied on gay rights groups opposing “don’t ask, don’t tell” (1)

22. Maryland State Police monitored nonviolent gay rights groups as terrorist threat (1)

23. NSA monitored email of American citizen Faisal Gill, including while he was running as Republican candidate for Virginia House of Delegates (1)

24. NSA surveillance of Rutgers professor Hooshang Amirahmadi and ex-California State professor Agha Saeed (1)

25. NSA tapped attorney-client conversations of American lawyer Asim Ghafoor (1)

26. NSA spied on American citizen Nihad Awad, executive director of the Council on American-Islamic Relations, the USA’s largest Muslim civil rights organization (1)

27. NSA analyst read personal email account of Bill Clinton (date unknown) (1)

28. Pentagon counterintelligence unit CIFA monitored peaceful antiwar activists (1)

29. Green party peer and London assembly member Jenny Jones was monitored and put on secret police database of “domestic extremists” (1, 2)

30. MI5 and UK police bugged member of parliament Sadiq Khan (1, 2)

31. Food Not Bombs (volunteer movement giving out free food and protesting against war and poverty) labelled as terrorist group and infiltrated by FBI (1, 2, 3)

32. Undercover London police infiltrated green activist groups (1)

33. Scottish police infiltrated climate change activist organizations, including anti-airport expansion group Plane Stupid (1)

34. UK undercover police had children with activists in groups they had infiltrated (1)

35. FBI infiltrated Muslim communities and pushed those with objections to terrorism (and often mental health problems) to commit terrorist acts (1, 2, 3)

2010s

36. California gun owners’ group Calguns complains of chilling effect of NSA surveillance on members’ activities (1, 2, 3)

37. GCHQ and NSA surveilled Unicef and head of Economic Community of West African States (1)

38. NSA spying on Amnesty International and Human Rights Watch (1)

39. CIA hacked into computers of Senate Intelligence Committee, whose job it is to oversee the CIA
(1, 2, 3, 4, 5, 6; bonus: watch CIA director John Brennan lie that it didn’t happen, months before apologizing)

40. CIA obtained legally protected, confidential email between whistleblower officials and members of congress, regarding CIA torture programme (1)

41. Investigation suggests that CIA “operates an email surveillance program targeting senate intelligence staffers” (1)

42. FBI raided homes and offices of Anti-War Committee and Freedom Road Socialist Organization, targeting solidarity activists working with Colombians and Palestinians (1)

43. Nearly half of US government’s terrorist watchlist consists of people with no recognized terrorist group affiliation (1)

44. FBI taught counterterrorism agents that mainstream Muslims are “violent” and “radical”, and used presentations about the “inherently violent nature of Islam” (1, 2, 3)

45. GCHQ has developed tools to manipulate online discourse and activism, including changing outcomes of online polls, censoring videos, and mounting distributed denial of service attacks (1, 2)

46. Green member of parliament Caroline Lucas complains that GCHQ is intercepting her communications (1)

47. GCHQ collected IP addresses of visitors to Wikileaks websites (1, 2)

48. The NSA tracks web searches related to privacy software such as Tor, as well as visitors to the website of the Linux Journal (calling it an “extremist forum”) (1, 2, 3)

49. UK police attempt to infiltrate anti-racism, anti-fascist and environmental groups, anti-tax-avoidance group UK Uncut, and politically active Cambridge University students (1, 2)

50. NSA surveillance impedes work of investigative journalists and lawyers (1, 2, 3, 4, 5).

Back to mathematics

As mathematicians, we spend much of our time studying objects that don’t exist anywhere in the world (perfect circles and so on). But we exist in the world. So, being a mathematician sometimes involves addressing real-world concerns.

For instance, Vancouver mathematician Izabella Laba has for years been writing thought-provoking posts on sexism in mathematics. That’s not mathematics, but it’s a problem that implicates every mathematician. On this blog, John Baez has written extensively on the exploitative practices of certain publishers of mathematics journals, the damage it does to the universities we work in, and what we can do about it.

I make no apology for bringing political considerations onto a mathematical blog. The NSA is a huge employer of mathematicians — over 1000 of us, it claims. Like it or not, it is part of our mathematical environment. Both the American Mathematical Society and London Mathematical Society are now regularly publishing articles on the role of mathematicians in enabling government surveillance, in recognition of our responsibility for it. As a recent New York Times article put it:

To say mathematics is political is not to diminish it, but rather to recognize its greater meaning, promise and responsibilities.

Author: "leinster (tom.leinster@ed.ac.uk)" Tags: "Science and Politics"
Date: Sunday, 31 Aug 2014 07:31

Right now I’d love to understand something a logician at Oxford tried to explain to me over lunch a while back. His name is Boris Zilber. He’s studying what he informally calls ‘logically perfect’ theories — that is, lists of axioms that almost completely determine the structure they’re trying to describe. He thinks that we could understand physics better if we thought harder about these logically perfect theories:

His ways of thinking, rooted in model theory, are quite different from anything I’m used to. I feel a bit like Gollum here:

A zeroth approximation to Zilber’s notion of ‘logically perfect theory’ would be a theory in first-order logic that’s categorical, meaning all its models are isomorphic. In rough terms, such a theory gives a full description of the mathematical structure it’s talking about.

The theory of groups is not categorical, but we don’t mind that, since we all know there are lots of very different groups. Historically speaking, it was much more upsetting to discover that Peano’s axioms of arithmetic, when phrased in first-order logic, are not categorical. Indeed, Gödel’s first incompleteness theorem says there are many statements about natural numbers that can neither be proved nor disproved starting from Peano’s axioms. It follows that for any such statement we can find a model of the Peano axioms in which that statement holds, and also a model in which it does not. So while we may imagine the Peano axioms are talking about ‘the’ natural numbers, this is a false impression. There are many different ‘versions’ of the natural numbers, just as there are many different groups.

The situation is not so bad for the real numbers — at least if we are willing to think about them in a somewhat limited way. There’s a theory of a real closed field: a list of axioms governing the operations +, \times, 0 and 1 and the relation \le. Tarski showed this theory is complete. In other words, any sentence phrased in this language can either be proved or disproved starting from the axioms.

Nonetheless, the theory of real closed fields is not categorical: besides the real numbers, there are many ‘nonstandard’ models, such as fields of hyperreal numbers containing elements bigger than 1, 1+1, 1+1+1, 1+1+1+1 and so on. These models are all elementarily equivalent: any sentence that holds in one holds in all the rest. That’s because the theory is complete. But these models are not all isomorphic: we can’t get a bijection between them that preserves +, \times, 0, 1 and \le.
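To see that such nonstandard models must exist, one can run the usual compactness argument (my sketch, not part of the original discussion):

```latex
% Add a new constant c to the language and consider the theory
T' \;=\; \mathrm{RCF} \;\cup\; \{\, c > \underbrace{1 + \cdots + 1}_{n} \;:\; n \in \mathbb{N} \,\}.
% Every finite subset of T' is satisfied by the real numbers
% (just interpret c as a large enough real), so by the compactness
% theorem T' has a model: a real closed field containing an element c
% bigger than every sum 1 + ... + 1, hence not isomorphic to \mathbb{R}.
```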

Indeed, only finite-sized mathematical structures can be ‘nailed down’ up to isomorphism by theories in first-order logic. After all, the Löwenheim–Skolem theorem says that if a first-order theory in a countable language has an infinite model, it has at least one model of each infinite cardinality. So, if we’re trying to use this kind of theory to describe an infinitely big mathematical structure, the most we can hope for is that after we specify its cardinality, the axioms completely determine it.

And this actually happens sometimes. It happens for the complex numbers! Zilber believes this has something to do with why the complex numbers show up so much in physics. This sounds very implausible at first, but there are some amazing results in logic that one needs to learn before dismissing the idea out of hand.

Say \kappa is some cardinal. A first-order theory describing structure on a single set is called \kappa-categorical if it has a unique model of cardinality \kappa, up to isomorphism. And in 1965, a logician named Michael Morley showed that if a list of axioms is \kappa-categorical for some uncountable \kappa, it’s \kappa-categorical for every uncountable \kappa. I have no idea why this is true. But such theories are called uncountably categorical.

A great example is the theory of an algebraically closed field of characteristic zero.

When you think of algebraically closed fields of characteristic zero, the first example that comes to mind is the complex numbers. These have the cardinality of the continuum. But because this theory is uncountably categorical, there is exactly one algebraically closed field of characteristic zero of each uncountable cardinality… up to isomorphism.
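The standard reason behind this uncountable categoricity is Steinitz’s classification of algebraically closed fields; the following is a sketch of that classical argument, not something from the post above:

```latex
% Steinitz: an algebraically closed field K of characteristic 0 is
% determined up to isomorphism by its transcendence degree over \mathbb{Q}:
K \;\cong\; \overline{\mathbb{Q}(x_i : i \in I)}, \qquad |I| = \operatorname{trdeg}_{\mathbb{Q}} K .
% If |K| = \kappa is uncountable, then since the algebraic closure of an
% infinite subfield F has cardinality |F|, counting gives
\operatorname{trdeg}_{\mathbb{Q}} K \;=\; \kappa .
% So any two algebraically closed fields of characteristic 0 and
% cardinality \kappa have the same transcendence degree, and are isomorphic.
```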

This implies some interesting things. For example, we can take the complex numbers, throw in an extra element, and let it freely generate a bigger algebraically closed field. It’s ‘bigger’ in the sense that it contains the complex numbers as a proper subset, indeed a subfield. But since it has the same cardinality as the complex numbers, it’s isomorphic to the complex numbers!

And then, because this ‘bigger’ field is isomorphic to the complex numbers, we can turn this argument around. We can take the complex numbers, remove a lot of carefully chosen elements, and get a subfield that’s isomorphic to the complex numbers.

Or, if we like, we can take the complex numbers, adjoin a really huge set of extra elements, and let them freely generate an algebraically closed field of characteristic zero. The cardinality of this field can be as big as we want. It will be determined up to isomorphism by its cardinality. But it will be elementarily equivalent to the ordinary complex numbers! In other words, all the same sentences written in the language of +, \times, 0 and 1 will hold. See why?

The theory of a real closed field is not uncountably categorical. This implies something really strange. Besides the ‘usual’ real numbers \mathbb{R} there’s another real closed field \mathbb{R}', not isomorphic to \mathbb{R}, with the same cardinality. We can build the complex numbers \mathbb{C} using pairs of real numbers. We can use the same trick to build a field \mathbb{C}' using pairs of guys in \mathbb{R}'. But it’s easy to check that this funny field \mathbb{C}' is algebraically closed and of characteristic zero. So, it’s isomorphic to \mathbb{C}.
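Spelled out a little (a sketch relying on the Artin–Schreier theorem, in my own notation):

```latex
% For any real closed field R', the quadratic extension
\mathbb{C}' \;=\; \mathbb{R}'[x]/(x^2 + 1)
% is algebraically closed (Artin–Schreier) and of characteristic 0.
% If |\mathbb{R}'| = 2^{\aleph_0} then |\mathbb{C}'| = 2^{\aleph_0}, so by
% uncountable categoricity of the theory of algebraically closed fields
% of characteristic 0:
\mathbb{C}' \;\cong\; \mathbb{C}, \qquad \text{even though } \mathbb{R}' \not\cong \mathbb{R}.
```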

In short, different ‘versions’ of the real numbers can give rise to the same version of the complex numbers! This is stuff they didn’t teach me in school.

All this is just background.

To a first approximation, Zilber considers uncountably categorical theories ‘logically perfect’. Let me paraphrase him:

There are purely mathematical arguments towards accepting the above for a definition of perfection. First, we note that the theory of the field of complex numbers (in fact any algebraically closed field) is uncountably categorical. So, the field of complex numbers is a perfect structure, and so are all objects of complex algebraic geometry by virtue of being definable in the field.

It is also remarkable that Morley’s theory of categoricity (and its extensions) exhibits strong regularities in models of categorical theories generally. First, the models have to be highly homogeneous, in a sense technically different from the one discussed for manifolds, but similar in spirit. Moreover, a notion of dimension (the Morley rank) is applicable to definable subsets in uncountably categorical structures, which gives one a strong sense of working with curves, surfaces and so on in this very abstract setting. A theorem of the present author states more precisely that an uncountably categorical structure M is either reducible to a 2-dimensional “pseudo-plane” with at least a 2-dimensional family of curves on it (so is non-linear), or is reducible to a linear structure like an (infinite-dimensional) vector space, or to a simpler structure like a G-set for a discrete group G. This led to a Trichotomy Conjecture, which specifies that the non-linear case is reducible to algebraically closed fields, effectively implying that M in this case is an object of algebraic geometry over an algebraically closed field.

I don’t understand this, but I believe that in rough terms this would amount to getting ahold of algebraic geometry from purely ‘logical’ principles, not starting from ideas in algebra or geometry!

Ehud Hrushovski showed that the Trichotomy Conjecture is false. However, Zilber has bounced back with a new improved notion of logically perfect theory, namely a ‘Noetherian Zariski theory’. This sounds like something out of algebraic geometry, but it’s really a concept from logic that takes advantage of the eerie parallels between structures defined by uncountably categorical theories and algebraic geometry.

Models of Noetherian Zariski theories include not only structures from algebraic geometry, but also from noncommutative algebraic geometry, like quantum tori. So, Zilber is now trying to investigate the foundations of physics using ideas from model theory. It seems like a long hard project that’s just getting started.

Here’s a concrete conjecture that illustrates how people are hoping algebraic geometry will spring forth from purely logical principles:

The Algebraicity Conjecture. Suppose G is a simple group whose theory (consisting of all sentences in the first-order language of groups that hold in this group) is uncountably categorical. Then G = \mathbb{G}(K) for some simple algebraic group \mathbb{G} and some algebraically closed field K.

Zilber has a book on these ideas:

But there are many prerequisites I’m missing, and Richard Elwes, who studied with Zilber, has offered me some useful pointers:

If you want to really understand the Geometric Stability Theory referred to in your last two paragraphs, there’s a good (but hard!) book by that name by Anand Pillay. But you don’t need to go anywhere near that far to get a good idea of Morley’s Theorem and why the complex numbers are uncountably categorical. These notes look reasonable:

Basically the idea is that a theory is uncountably categorical if and only if two things hold: firstly there is a sensible notion of dimension (Morley rank) which can be assigned to every formula quantifying its complexity. In the example of the complex numbers Morley rank comes out to be pretty much the same thing as Zariski dimension. Secondly, there are no ‘Vaughtian pairs’ meaning, roughly, two bits of the structure whose size can vary independently. (Example: take the structure consisting of two disjoint non-interacting copies of the complex numbers. This is not uncountably categorical because you could set the two cardinalities independently.)

It is not too hard to see that the complex numbers have these two properties once you have the key fact of ‘quantifier elimination’: any first-order formula is equivalent to one with no quantifiers, so the definable sets are exactly those determined by the vanishing or non-vanishing of various polynomials. (Hence the connection to algebraic geometry.) In one dimension, basic facts about complex numbers tell us that every definable subset of \mathbb{C} must therefore be either finite or co-finite. This is the definition of a strongly minimal structure, which automatically implies both of the above properties without too much difficulty. So the complex numbers are not merely ‘perfect’ (though I’ve not heard this term before) but are the very best type of structure even among the uncountably categorical.

If you know anything else that could help me out, I’d love to hear it!

Author: "john (baez@math.ucr.edu)" Tags: "Logic"
Date: Monday, 25 Aug 2014 03:00

The Notices of the AMS has just published the second in its series “Mathematicians discuss the Snowden revelations”. (The first was here.) The introduction to the second article cites this blog for “a discussion of these issues”, but I realized that the relevant posts might be hard for visitors to find, scattered as they are over the last eight months.

So here, especially for Notices readers, is a roundup of all the posts and discussions we’ve had on the subject. In reverse chronological order (and updated after the original appearance of this post):

Author: "leinster (tom.leinster@ed.ac.uk)" Tags: "Science and Politics"
Date: Wednesday, 20 Aug 2014 23:13

You know how sometimes someone tells you a theorem, and it’s obviously false, and you reach for one of the many easy counterexamples only to realize that it’s not a counterexample after all, then you reach for another one and another one and find that they fail too, and you begin to concede the possibility that the theorem might not actually be false after all, and you feel your world start to shift on its axis, and you think to yourself: “Why did no one tell me this before?”

That’s what happened to me today, when my PhD student Barry Devlin — who’s currently writing what promises to be a rather nice thesis on codensity monads and topological algebras — showed me this theorem:

Every compact Hausdorff ring is totally disconnected.

I don’t know who it’s due to; Barry found it in the book Profinite Groups by Ribes and Zalesskii. And in fact, there’s also a result for rings analogous to a well-known one for groups: a ring is compact, Hausdorff and totally disconnected if and only if it can be expressed as a limit of finite discrete rings. Every compact Hausdorff ring is therefore “profinite”, that is, expressible as a limit of finite rings.
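To get a concrete feel for “profinite”, here is a Python sketch (my own illustration, not from the book) of the ring of 2-adic integers as a limit of the finite discrete rings ℤ/2ⁿ: an element is a compatible sequence of residues, and the ring operations are performed levelwise.

```python
# A 2-adic integer, modeled by its compatible residues mod 2^1, ..., 2^N.
# The profinite ring of 2-adic integers is the limit of the finite discrete
# rings Z/2^n: an element is exactly such a compatible sequence.

N = 8  # truncation level for this sketch

def approx(x):
    """Residues of an ordinary integer x in Z/2^n for n = 1..N."""
    return [x % 2**n for n in range(1, N + 1)]

def compatible(seq):
    """The limit condition: the level-(n+1) residue reduces to the level-n one."""
    return all(seq[n + 1] % 2**(n + 1) == seq[n] for n in range(N - 1))

def add(s, t):
    return [(a + b) % 2**n for n, (a, b) in enumerate(zip(s, t), start=1)]

def mul(s, t):
    return [(a * b) % 2**n for n, (a, b) in enumerate(zip(s, t), start=1)]

minus_one = approx(-1)   # (1, 3, 7, 15, ...): a perfectly good limit element
one, zero = approx(1), approx(0)

assert compatible(minus_one)
assert add(minus_one, one) == zero    # arithmetic works levelwise
assert mul(minus_one, minus_one) == one
```

The point of the sketch is only that the ring structure lives entirely in the finite quotients; the topology (making it compact, Hausdorff and totally disconnected) is the product topology on such sequences.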

So the situation for compact rings is completely unlike the situation for compact groups. There are loads of compact groups (the circle, the torus, SO(n), U(n), E_8, …) and there’s a very substantial theory of them, from Haar measure through Lie theory and onwards. But compact rings are relatively few: it’s just the profinite ones.

I only laid eyes on the proof for five seconds, which was just long enough to see that it used Pontryagin duality. But how should I think about this theorem? How can I alter my worldview in such a way that it seems natural or even obvious?

Author: "leinster (tom.leinster@ed.ac.uk)" Tags: "Topology"
Date: Tuesday, 12 Aug 2014 04:47

My last article on the ten-fold way was a piece of research in progress — it only reached a nice final form in the comments. Since that made it rather hard to follow, let me try to present a more detailed and self-contained treatment here!

But if you’re in a hurry, you can click on this:

and get my poster for next week’s scientific advisory board meeting at the Centre for Quantum Technologies, in Singapore. That’s where I work in the summer, and this poster is supposed to be a terse introduction to the ten-fold way.

First we’ll introduce the ‘Brauer monoid’ of a field. This is a way of assembling all simple algebras over that field into a monoid: a set with an associative product and unit. One reason for doing this is that in quantum physics, physical systems are described by vector spaces that are representations of certain ‘algebras of observables’, which are sometimes simple (in the technical sense). Combining physical systems involves taking the tensor product of their vector spaces and also these simple algebras. This gives the multiplication in the Brauer monoid.

We then turn to a larger structure called the ‘super Brauer monoid’ or ‘Brauer–Wall monoid’. This is the ‘super’ or \mathbb{Z}_2-graded version of the same idea, which shows up naturally in physical systems containing both bosons and fermions. For the field \mathbb{R}, the super Brauer monoid has 10 elements. This gives a nice encapsulation of the ‘ten-fold way’ introduced in work on condensed matter physics. At the end I’ll talk about this particular example in more detail.

Actually elements of the Brauer monoid of a field are equivalence classes of simple algebras over this field. Thus, I’ll start by reminding you about simple algebras and the notion of equivalence we need, called ‘Morita equivalence’. Briefly, two algebras are Morita equivalent if they have the same category of representations. Since in quantum physics it’s the representations of an algebra that matter, this is sometimes the right concept of equivalence, even though it’s coarser than isomorphism.

Review of algebra

We begin with some material that algebraists consider well-known.

Simple algebras and division algebras

Let k be a field.

By an algebra over k we will always mean a finite-dimensional associative unital k-algebra: that is, a finite-dimensional vector space A over k with an associative bilinear multiplication and a multiplicative unit 1 \in A.

An algebra A over k is simple if its only two-sided ideals are \{0\} and A itself.

A division algebra over k is an algebra A such that if a \ne 0 there exists b \in A such that a b = b a = 1. Using finite-dimensionality, the following condition is equivalent: if a, b \in A and a b = 0 then either a = 0 or b = 0.

A division algebra is automatically simple. More interestingly, by a theorem of Wedderburn, every simple algebra A over k is an algebra of n \times n matrices with entries in some division algebra D over k. We write this as

A \cong D[n]

where D[n] is our shorthand for the algebra of n \times n matrices with entries in D.

The center of an algebra over k always includes a copy of k, the scalar multiples of 1 \in A. If D is a division algebra, its center Z(D) is a commutative algebra that’s a division algebra in its own right. So Z(D) is a field, and it’s a finite extension of k, meaning it contains k as a subfield and is a finite-dimensional algebra over k.

If A is a simple algebra over k, its center is isomorphic to the center of some D[n], which is just the center of D. So, the center of A is a field that’s a finite extension of k. We’ll need this fact when defining the multiplication in the Brauer monoid.

Example. I’m mainly interested in the case k = \mathbb{R}. A theorem of Frobenius says the only division algebras over \mathbb{R} are \mathbb{R} itself, the complex numbers \mathbb{C} and the quaternions \mathbb{H}. Of these, the first two are fields, while the third is noncommutative. So, the simple algebras over \mathbb{R} are the matrix algebras \mathbb{R}[n], \mathbb{C}[n] and \mathbb{H}[n]. The center of \mathbb{R}[n] and \mathbb{H}[n] is \mathbb{R}, while the center of \mathbb{C}[n] is \mathbb{C}, the only nontrivial finite extension of \mathbb{R}.

Example. The case k = \mathbb{C} is more boring, because \mathbb{C} is algebraically closed. Any division algebra D over an algebraically closed field k must be k itself. (To see this, consider x \in D and look at the smallest subring of D containing k and x and closed under taking inverses. This is a finite, hence algebraic, extension of k, so it must be k.) So if k is algebraically closed, the only simple algebras over k are the matrix algebras k[n].

Example. The case where k is a finite field has a very different flavor. A theorem of Wedderburn and Dickson implies that any division algebra over a finite field k is a field, indeed a finite extension of k. So, the only simple algebras over k are the matrix algebras F[n] where F is a finite extension of k. Moreover, we can completely understand these finite extensions, since the finite fields are all of the form \mathbb{F}_{p^n} where p is a prime and n = 1, 2, 3, \dots, and the only finite extensions of \mathbb{F}_{p^n} are the fields \mathbb{F}_{p^m} where n divides m.
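The lattice of finite fields in this last example is simple enough to compute with directly. In this Python sketch (my own), we identify \mathbb{F}_{p^n} with its exponent n; inclusion then becomes divisibility, and the least upper bound of two subfields becomes the least common multiple:

```python
from math import lcm

# Identify the finite field F_{p^n} (inside a fixed algebraic closure of F_p)
# with its exponent n.  Then:
#   F_{p^n} ⊆ F_{p^m}        iff n divides m
#   F_{p^n} ∨ F_{p^m}        is F_{p^lcm(n,m)}

def contains(m, n):
    """Does F_{p^m} contain F_{p^n}?"""
    return m % n == 0

def join(n, m):
    """The least upper bound F_{p^n} ∨ F_{p^m}, as an exponent."""
    return lcm(n, m)

assert contains(6, 2) and contains(6, 3) and not contains(4, 3)
assert join(4, 6) == 12
# join really is an upper bound for both arguments:
assert contains(join(4, 6), 4) and contains(join(4, 6), 6)
```

This little semilattice is exactly the L that appears when we build the Brauer monoid of a finite field later on.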

Morita equivalence and the Brauer group

Given an algebra A over k we define Rep(A) to be the category of left A-modules. We say two algebras A, B over k are Morita equivalent if Rep(A) \simeq Rep(B). In this situation we write A \simeq B.

Isomorphic algebras are Morita equivalent, but Morita equivalence is more general: for example, we always have A[n] \simeq A, where A[n] is the algebra of n \times n matrices with entries in A.

We’ve seen that if A is simple then A \cong D[n], which implies A \simeq D[n]. On the other hand, we have D[n] \simeq D. So, every simple algebra over k is Morita equivalent to a division algebra over k.

As a set, the Brauer monoid of k will simply be the set of Morita equivalence classes of simple algebras over k. By what I just said, this is also the set of Morita equivalence classes of division algebras over k. The trick will be defining the multiplication in the Brauer monoid. For this we need to think about tensor products of algebras.

The tensor product of two algebras A, B over k is another algebra over k, which we’ll write as A \otimes_k B. This gets along with Morita equivalence:

A \simeq A' \;\; and \;\; B \simeq B' \quad \implies \quad A \otimes_k B \simeq A' \otimes_k B'

However, the tensor product of simple algebras need not be simple! And the tensor product of division algebras need not be a division algebra, or even simple. So, we have to be a bit careful if we want a workable multiplication in the Brauer monoid.

For example, take k = \mathbb{R}. The division algebras over \mathbb{R} are \mathbb{R}, \mathbb{C} and the quaternions \mathbb{H}. We have

\mathbb{H} \otimes_{\mathbb{R}} \mathbb{H} \cong \mathbb{R}[4] \simeq \mathbb{R}

so this particular tensor product of division algebras over \mathbb{R} is simple and thus Morita equivalent to another division algebra over \mathbb{R}. On the other hand,

\mathbb{C} \otimes_{\mathbb{R}} \mathbb{C} \cong \mathbb{C} \oplus \mathbb{C}

and this is not a division algebra, nor even simple, nor even Morita equivalent to a simple algebra.

What’s the problem with the latter example? The problem turns out to be that the division algebra \mathbb{C} does not have \mathbb{R} as its center: it has a larger field, namely \mathbb{C} itself, as its center.

It turns out that if you tensor two simple algebras over a field k that both have just k as their center, the result is again simple. So, in Brauer theory, people restrict attention to simple algebras over k having just k as their center. These are called central simple algebras over k. The set of Morita equivalence classes of these is closed under tensor product, so it becomes a monoid. And this monoid happens to be an abelian group: the Brauer group of k, denoted Br(k). I want to work with all simple algebras over k, so I will need to change this recipe a bit. But it will still be good to compute a few Brauer groups.

To do this, it pays to note that every element of Br(k) has a representative that is a division algebra over k whose center is k. Why? Every simple algebra over k is D[n] for some division algebra D over k. Now, D[n] is central simple over k iff the center of D is k, and D[n] is Morita equivalent to D. Using this, we easily see:

Example. The Brauer group Br(\mathbb{R}) is \mathbb{Z}_2, the 2-element group consisting of [\mathbb{R}] and [\mathbb{H}]. We have

[\mathbb{R}] \cdot [\mathbb{R}] = [\mathbb{R} \otimes_\mathbb{R} \mathbb{R}] = [\mathbb{R}]

[\mathbb{R}] \cdot [\mathbb{H}] = [\mathbb{R} \otimes_\mathbb{R} \mathbb{H}] = [\mathbb{H}]

[\mathbb{H}] \cdot [\mathbb{H}] = [\mathbb{H} \otimes_\mathbb{R} \mathbb{H}] = [\mathbb{R}]
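This three-line multiplication table is small enough to verify mechanically. Here is a Python sketch (mine, with string labels standing for the Morita classes) checking that it is exactly the group \mathbb{Z}_2:

```python
# Br(R) has two elements, [R] and [H]; tensor product gives the group law.
# Identifying [R] -> 0 and [H] -> 1 turns the table into addition mod 2.
tensor = {
    ('R', 'R'): 'R',
    ('R', 'H'): 'H',
    ('H', 'R'): 'H',
    ('H', 'H'): 'R',   # H ⊗ H ≅ R[4], Morita equivalent to R
}
to_z2 = {'R': 0, 'H': 1}

# The identification is an isomorphism onto Z_2:
for (a, b), c in tensor.items():
    assert (to_z2[a] + to_z2[b]) % 2 == to_z2[c]
```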

Example. The Brauer group of any algebraically closed field k is trivial, since the only division algebra over k is k itself. Thus Br(\mathbb{C}) = 1.

Example. The Brauer group of any finite field k is trivial, since the only division algebras over k are fields that are finite extensions of k, and of these only k itself has k as its center.

Example. Just so you don’t get the impression that Brauer groups tend to be boring, consider the Brauer group of the rational numbers:

Br(\mathbb{Q}) = \left\{ (a, x) : \; a \in \{0, \tfrac{1}{2}\}, \quad x \in \bigoplus_p \mathbb{Q}/\mathbb{Z}, \quad a + \sum_p x_p = 0 \right\}

where the sum is over all primes. This is a consequence of the Albert–Brauer–Hasse–Noether theorem. The funny-looking \{0, \tfrac{1}{2}\} is just a way to think of the group \mathbb{Z}_2 as a subgroup of \mathbb{Q}/\mathbb{Z}. The elements of this correspond to \mathbb{Q} itself and a rational version of the quaternions. The other stuff comes from studying the situation ‘locally’ one prime at a time. However, the two aspects interact.

The Brauer monoid of a field

Let k be a field and \overline{k} its algebraic closure. Let L be the set of intermediate fields

k \subseteq F \subseteq \overline{k}

where F is a finite extension of k. This set L is partially ordered by inclusion, and in fact it is a semilattice: any finite subset of L has a least upper bound. We write F \vee F' for the least upper bound of F, F' \in L. This is just the smallest subfield of \overline{k} containing both F and F'.

We define the Brauer monoid of k to be the disjoint union

BR(k) = \coprod_{F \in L} Br(F)

So, every simple algebra over k shows up in here: if A is a simple algebra over k with center F, the Morita equivalence class [A] will appear as an element of Br(F). However, isomorphic copies of the same simple algebra will show up repeatedly in the Brauer monoid, since we may have F \ne F' but still F \cong F'.

How do we define multiplication in the Brauer monoid? The key is that the Brauer group is functorial. Suppose we have an inclusion of fields F \subseteq F' in the semilattice L. Then we get a homomorphism

Br_{F', F} : Br(F) \to Br(F')

as follows. Any element [A] \in Br(F) comes from a central simple algebra A over F; the algebra F' \otimes_F A will be central simple over F', and we define

Br_{F', F} [A] = [F' \otimes_F A]

Of course we need to check that this is well-defined, but this is well-known. People call Br_{F', F} restriction, since larger fields have smaller Brauer groups, but I’d prefer to call it ‘extension’, since we’re extending an algebra to be defined over a larger field.

It’s easy to see that if F \subseteq F' \subseteq F'' then

Br_{F'', F} = Br_{F'', F'} \, Br_{F', F}

and this together with

Br_{F, F} = 1_{Br(F)}

implies that we have a functor

Br : L \to AbGp

So now suppose we have two elements of BR(k) and we want to multiply them. To do this, we simply write them as [A] \in Br(F) and [A'] \in Br(F'), map them both into Br(F \vee F'), and then multiply them there:

[A] \cdot [A'] \; := \; Br_{F \vee F', F} [A] \; \cdot \; Br_{F \vee F', F'} [A']

This can also be expressed with less jargon as follows:

[A] \cdot [A'] = [A \otimes_F (F \vee F') \otimes_{F'} A']

However, the functorial approach gives a nice outlook on this basic result:

Proposition. With the above multiplication, BR(k) is a commutative monoid.

Proof. The multiplicative identity is [k] \in Br(k), and commutativity is obvious, so the only thing to check is associativity. This is easy enough to do directly, but it’s a bit enlightening to notice that it’s a special case of an idea that goes back to A. H. Clifford.

In modern language: suppose we have any semilattice L and any functor B : L \to AbGp. This gives an abelian group B(x) for any x \in L, and a homomorphism

B_{x', x} : B(x) \to B(x')

whenever x \le x'. Then the disjoint union

\coprod_{x \in L} B(x)

becomes a commutative monoid if we define the product of a \in B(x) and a' \in B(x') by

a \cdot a' = B_{x \vee x', x}(a) \; \cdot \; B_{x \vee x', x'}(a')

Checking associativity is an easy, fun calculation, so I won’t deprive you of the pleasure. Moreover, there’s nothing special about abelian groups here: a functor B from L to commutative monoids would work just as well. ∎
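To see Clifford’s construction in action, here is a Python sketch of a toy instance (my own choice of functor, not from the post): take L to be the positive integers ordered by divisibility, with join given by lcm, and let B(n) = Z/n with transition maps a ↦ a·(m/n) whenever n divides m. The disjoint union then really is a commutative monoid:

```python
from math import lcm
from itertools import product

# Toy instance of the construction: semilattice L = positive integers under
# divisibility, join = lcm; functor B(n) = Z/n, with transition maps
# B_{m,n}: Z/n -> Z/m, a |-> a*(m//n), a group homomorphism whenever n | m.

def push(n, m, a):
    """B_{m,n}(a): push a in Z/n up to Z/m, assuming n | m."""
    assert m % n == 0
    return (a * (m // n)) % m

def mult(x, y):
    """Product in the commutative monoid  ∐_n B(n).  Elements are pairs (n, a)."""
    (n, a), (m, b) = x, y
    j = lcm(n, m)
    return (j, (push(n, j, a) + push(m, j, b)) % j)

# Spot-check associativity and commutativity on a batch of elements:
elems = [(n, a) for n in (2, 3, 4, 6) for a in range(n)]
for x, y, z in product(elems, repeat=3):
    assert mult(mult(x, y), z) == mult(x, mult(y, z))
    assert mult(x, y) == mult(y, x)
# The identity is (1, 0), the zero of the trivial group B(1):
assert all(mult((1, 0), x) == x for x in elems)
```

The Brauer monoid is the same construction with L the semilattice of finite extensions and B = Br; the brute-force associativity check above is standing in for the “easy fun calculation”.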

Let’s see a couple of examples:

Example. The Brauer monoid of the real numbers is the disjoint union

BR(\mathbb{R}) = Br(\mathbb{R}) \sqcup Br(\mathbb{C})

This has three elements: [\mathbb{R}], [\mathbb{C}] and [\mathbb{H}]. Leaving out the brackets, the multiplication table is

\begin{array}{lrrr} \cdot & \mathbf{\mathbb{R}} & \mathbf{\mathbb{C}} & \mathbf{\mathbb{H}} \\ \mathbf{\mathbb{R}} & \mathbb{R} & \mathbb{C} &\mathbb{H} \\ \mathbf{\mathbb{C}} & \mathbb{C} & \mathbb{C} & \mathbb{C} \\ \mathbf{\mathbb{H}} & \mathbb{H} & \mathbb{C} & \mathbb{R} \end{array}

So, this monoid is isomorphic to the multiplicative monoid \mathbb{3} = \{1, 0, -1\}. This formalizes the multiplicative aspect of Dyson’s ‘threefold way’, which I started grappling with in my paper Division algebras and quantum theory. If you read that paper you can see why I care: Hilbert spaces over the real numbers, complex numbers and quaternions are all important in quantum theory, so they must fit into a single structure. The Brauer monoid is a nice way to describe this structure.
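The claimed isomorphism is easy to check by hand, or in a few lines of Python (my own sketch): send \mathbb{R} to 1, \mathbb{C} to 0 and \mathbb{H} to -1, and the table above becomes ordinary multiplication in \{1, 0, -1\}:

```python
# BR(R) = {[R], [C], [H]} with the multiplication table from the text.
table = {
    ('R', 'R'): 'R', ('R', 'C'): 'C', ('R', 'H'): 'H',
    ('C', 'R'): 'C', ('C', 'C'): 'C', ('C', 'H'): 'C',
    ('H', 'R'): 'H', ('H', 'C'): 'C', ('H', 'H'): 'R',
}
iso = {'R': 1, 'C': 0, 'H': -1}

# The map is an isomorphism onto the multiplicative monoid {1, 0, -1}:
for (a, b), c in table.items():
    assert iso[a] * iso[b] == iso[c]
```

Note how [\mathbb{C}], playing the role of 0, is absorbing: tensoring anything up into Br(\mathbb{C}) lands in the trivial group.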

Example. The Brauer monoid of a finite field k is the disjoint union

BR(k) = \coprod_{F \in L} Br(F)

where L is the lattice of subfields of the algebraic closure \overline{k} that are finite extensions of k. However, we’ve seen that Br(F) is always the trivial group. Thus

BR(k) \cong L

with the monoid structure being the operation \vee in the lattice L.

Example. The Brauer monoid of \mathbb{Q} seems quite complicated to me, since it’s the disjoint union of Br(F) for all F \subset \overline{\mathbb{Q}} that are finite extensions of \mathbb{Q}. Such fields F are called algebraic number fields, and their Brauer groups can, I believe, be computed using the Albert–Brauer–Hasse–Noether theorem. However, here we are doing this for all algebraic number fields, and also keeping track of how they ‘fit together’ using the so-called restriction maps Br_{F', F} : Br(F) \to Br(F') whenever F \subseteq F'. The absolute Galois group of a field always acts on its Brauer monoid, so the rather fearsome absolute Galois group of \mathbb{Q} acts on BR(\mathbb{Q}), for whatever that’s worth.

Fleeing the siren song of number theory, let us move on to my main topic of interest, which is the ‘super’ or 2\mathbb{Z}_2-graded version of this whole story.

Review of superalgebra

We now want to repeat everything we just did, systematically replacing the category of vector spaces over k by the category of super vector spaces over k, which are \mathbb{Z}_2-graded vector spaces:

V = V_0 \oplus V_1

We call the elements of V_0 even and the elements of V_1 odd. Elements of either V_0 or V_1 are called homogeneous, and we say an element a \in V_i has degree i. A morphism in the category of super vector spaces is a linear map that preserves the degree of homogeneous elements.

The category of super vector spaces is symmetric monoidal in a way where we introduce a minus sign when we switch two odd elements.

Simple superalgebras and division superalgebras

A superalgebra is a monoid in the category of super vector spaces. In other words, it is a super vector space A = A_0 \oplus A_1 where the vector space A is an algebra in the usual sense and

a \in A_i, \; b \in A_j \quad \implies \quad a b \in A_{i+j}

where we do our addition mod 2. There is a tensor product of superalgebras, where

(A \otimes B)_i = \bigoplus_{i = j + k} A_j \otimes B_k

and multiplication is defined on homogeneous elements by

(a \otimes b)(a' \otimes b') = (-1)^{i j} \; a a' \otimes b b'

where b \in B_i and a' \in A_j are the elements getting switched.

An ideal I of a superalgebra is homogeneous if it is of the form

I = I_0 \oplus I_1

where I_i \subseteq A_i. We can take the quotient of a superalgebra by a homogeneous two-sided ideal and get another superalgebra. So, we say a superalgebra A over k is simple if its only two-sided homogeneous ideals are \{0\} and A itself.

A division superalgebra over k is a superalgebra A such that if a \ne 0 is homogeneous then there exists b \in A such that a b = b a = 1.
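The Koszul sign bookkeeping in the tensor product of superalgebras is easy to mechanize. Here is a Python sketch (my own formal model, tracking only degrees and an overall sign) showing that a sign appears exactly when the two switched elements are both odd:

```python
# Homogeneous elements of a tensor product of superalgebras, modeled formally:
# we track only a scalar coefficient and the two degrees (0 = even, 1 = odd).
# Koszul sign rule: moving b (degree i) past a' (degree j) costs (-1)^{i*j}.

def tensor_mult(x, y):
    """Multiply (coeff, deg_a, deg_b) by (coeff', deg_a', deg_b')."""
    c, da, db = x
    cp, dap, dbp = y
    sign = (-1) ** (db * dap)   # b of degree db moves past a' of degree dap
    return (sign * c * cp, (da + dap) % 2, (db + dbp) % 2)

odd_a  = (1, 1, 0)   # a ⊗ 1 with a odd
odd_b  = (1, 0, 1)   # 1 ⊗ b with b odd
even_a = (1, 0, 0)

# Two odd factors switching past each other pick up a minus sign:
assert tensor_mult(odd_b, odd_a)[0] == -1
# If either switched element is even, no sign appears:
assert tensor_mult(even_a, odd_b)[0] == 1
assert tensor_mult(odd_a, odd_b)[0] == 1
```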

At this point it is clear what we aim to do: generalize Brauer groups to this ‘super’ context by replacing division algebras with division superalgebras. Luckily this was already done a long time ago, by Wall:

• C. T. C. Wall, Graded Brauer groups, Journal für die reine und angewandte Mathematik 213 (1963–1964), 187–199.

He showed there are 10 division superalgebras over \mathbb{R} and showed how 8 of these become elements of a kind of super Brauer group for \mathbb{R}, now called the ‘Brauer–Wall’ group. The other 2 become elements of the Brauer–Wall group of \mathbb{C}. A more up-to-date treatment of some of this material can be found here:

• Pierre Deligne, Notes on spinors, in Quantum Fields and Strings: a Course for Mathematicians, vol. 1, AMS. Providence, RI, 1999, pp. 99–135.

Nontrivial results that I state without proof will come from these sources.

Every division superalgebra is simple. Conversely, we want a super-Wedderburn theorem describing simple superalgebras in terms of division superalgebras. However, this must be more complicated than the ordinary Wedderburn theorem, which says every simple algebra is a matrix algebra D[n] with D a division algebra.

After all, besides matrix algebras, we have ‘matrix superalgebras’ to contend with. For any p, q \ge 0 let k^{p|q} be the super vector space with even part k^p and odd part k^q. Then its endomorphism algebra

k[p|q] = End(k^{p|q})

becomes a superalgebra in a standard way, called a matrix superalgebra. Matrix superalgebras are always simple.

Deligne gives a classification of ‘central simple’ superalgebras, and from this we can derive a super-Wedderburn theorem. But what does ‘central simple’ mean in this context?

The supercommutator of two homogeneous elements a \in A_i, b \in A_j of a superalgebra A is

[a, b] = a b - (-1)^{i j} b a

We can extend this by bilinearity to all elements of A. We say a, b \in A supercommute if [a, b] = 0. The supercenter of A is the set of elements of A that supercommute with every element of A. If all elements of A supercommute, or equivalently if the supercenter of A is all of A, we say A is supercommutative.

I believe a superalgebra A over k is central simple if A is simple and its supercenter is just k \subseteq A_0, the scalar multiples of the identity. Deligne gives a more complicated definition of ‘central simple’, but then in Remark 3.5 proves it is equivalent to being semisimple with supercenter just k. I believe this is equivalent to the more reasonable-sounding condition I just gave, but have not carefully checked.

In Remark 3.5, Deligne says that by copying an argument in Chapter 8 of Bourbaki’s Algebra one can show:

Proposition. Any central simple superalgebra over k is of the form D[p|q] for some division superalgebra D whose supercenter is k. Conversely, any superalgebra of this form is central simple.

Starting from this, Guo Chuan Thiang showed me how to prove the following:

Super-Wedderburn Theorem. Suppose A is a simple superalgebra over k, where k is a field not of characteristic 2. Then its supercenter Z(A) is purely even, Z(A) is a field extending k, and A is isomorphic to D[p|q] where D is some division superalgebra over Z(A).

It follows that any simple superalgebra over k is of the form D[p|q] where D is a division superalgebra over k. Conversely, if D is any division superalgebra over k, then D[p|q] is a simple superalgebra over k.

Proof. Suppose A is a simple superalgebra over k, and let Z(A) be its supercenter. Suppose a is a nonzero homogeneous element of Z(A). Then a A is a homogeneous two-sided ideal of A. Since this ideal contains a it is nonzero, so it must be A itself. So, there exists b \in A such that a b = 1.

If a is even, b must be as well, and we obtain b a = a b = 1, so a has an inverse. Thus, the even part of Z(A) is a field.

If a is odd, supercommuting with itself gives a^2 = -a^2, so 2 a^2 = 0 and hence a^2 = 0, since k is not of characteristic 2. But then a = a (a b) = a^2 b = 0, a contradiction.

In short, nonzero elements of Z(A) must be even and invertible. It follows that Z(A) is purely even, and is a field extending k. A is central over this field Z(A), so by the previous proposition we see A \cong D[p|q] for some division superalgebra D over Z(A). D will automatically be a division superalgebra over the smaller field k as well.

Conversely, suppose D is a division superalgebra over k. Since D is simple, its supercenter will be a field F extending k. By the previous proposition, D[p|q] will be a central simple superalgebra over F. It follows that D[p|q] is simple as a superalgebra over k. ∎

Here is an all-important example:

Example. Let k[\sqrt{-1}] be the free superalgebra over k on an odd generator whose square is -1. This superalgebra has a 1-dimensional even part and a 1-dimensional odd part. It is a division superalgebra. It is not supercommutative, since \sqrt{-1} does not supercommute with itself. It is central simple: its supercenter is just k. Over an algebraically closed field k of characteristic other than 2, the only division superalgebras are k itself and k[\sqrt{-1}].
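Here is a Python sketch (my own model, over k = \mathbb{Q}) of this superalgebra: elements are pairs (x, y) standing for x + y e, where e is the odd generator with e^2 = -1. We can check the division property on homogeneous elements and the failure of supercommutativity:

```python
from fractions import Fraction as Q

# The division superalgebra k[sqrt(-1)] over k = Q:
# elements x + y*e with e odd and e^2 = -1, represented as pairs (x, y).

def mult(u, v):
    (x, y), (z, w) = u, v
    # (x + y e)(z + w e) = (xz - yw) + (xw + yz) e
    return (x * z - y * w, x * w + y * z)

one, e = (Q(1), Q(0)), (Q(0), Q(1))

assert mult(e, e) == (Q(-1), Q(0))                 # e^2 = -1

# Nonzero homogeneous elements are invertible:
assert mult((Q(3), Q(0)), (Q(1, 3), Q(0))) == one  # even part: nonzero scalars
assert mult(e, (Q(0), Q(-1))) == one               # e^{-1} = -e

# Not supercommutative: e is odd, so [e, e] = e*e + e*e = -2, which is nonzero.
sq = mult(e, e)
supercomm = (sq[0] + sq[0], sq[1] + sq[1])
assert supercomm == (Q(-2), Q(0))
```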

I don’t understand what happens in characteristic 2.

Morita equivalence and the Brauer–Wall group

The Brauer–Wall group consists of Morita equivalence classes of central simple superalgebras, or equivalently, Morita equivalence classes of division superalgebras. For this to make sense, first we need to define Morita equivalence.

Given a superalgebra A over k we define a left module to be a super vector space V over k equipped with a morphism (that is, a grade-preserving linear map)

A \otimes V \to V

obeying the usual axioms of a left module. We define a morphism of left A-modules in the obvious way, and let Rep(A) be the category of left A-modules.

We say two superalgebras A, B over k are Morita equivalent if Rep(A) \simeq Rep(B). In this situation we write A \simeq B.

Example. Every matrix superalgebra k[p|q] is Morita equivalent to k.

Example. If A \simeq A' and B \simeq B' then A \otimes_k B \simeq A' \otimes_k B'.

Example. Since every central simple superalgebra over k is of the form D[p|q] = D \otimes k[p|q] for some division superalgebra D whose supercenter is just k, the previous two examples imply that every central simple superalgebra over k is Morita equivalent to a division superalgebra whose supercenter is just k.

We define the Brauer–Wall group Bw(k) of the field k to be the set of Morita equivalence classes of central simple superalgebras over k, given the following multiplication:

[A] \otimes [B] \; := \; [A \otimes B]

This is well-defined because the tensor product of central simple superalgebras is again central simple. Given that, Bw(k) is clearly a commutative monoid. But in fact it’s an abelian group.

Since every central simple superalgebra over k is Morita equivalent to a division superalgebra whose supercenter is just k, we can compute Brauer–Wall groups by focusing on these division superalgebras.

Example. For any algebraically closed field k, the Brauer–Wall group Bw(k) is \mathbb{Z}_2, where the two elements are [k] and [k[\sqrt{-1}]]. In particular, Bw(\mathbb{C}) is \mathbb{Z}_2. Wall showed that this \mathbb{Z}_2 is related to the period-2 phenomenon in complex K-theory and the theory of complex Clifford algebras. The point is that

\mathbb{C}[\sqrt{-1}]^{\otimes n} \cong \mathbb{C}liff_n

where \mathbb{C}liff_n is the complex Clifford algebra on n square roots of -1, made into a superalgebra in the usual way. It is well-known that

\mathbb{C}liff_2 \simeq \mathbb{C}liff_0

and this gives the period-2 phenomenon.
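These Clifford algebras are concrete enough to compute with directly. Here is a Python sketch (my own) multiplying basis elements e_S for subsets S of \{1, \dots, n\}, using e_i^2 = -1 and e_i e_j = -e_j e_i; brute-forcing which basis elements supercommute with every generator confirms that the supercenter of the rank-2 Clifford algebra is just the scalars (only signs are involved, so this works over \mathbb{R} or \mathbb{C} alike), consistent with it being central simple and hence Morita trivial:

```python
from itertools import combinations

# Cliff_n: basis e_S for S ⊆ {1,...,n}, with e_i^2 = -1 and
# e_i e_j = -e_j e_i for i != j.  A product of basis elements is
# ± e_{S symmetric-difference T}.

def basis_mult(S, T):
    """Multiply e_S * e_T; returns (sign, resulting subset)."""
    word = sorted(S) + sorted(T)
    sign = 1
    i = 0
    while i < len(word) - 1:
        if word[i] == word[i + 1]:
            sign = -sign                              # e_i e_i = -1
            word = word[:i] + word[i + 2:]
            i = max(i - 1, 0)
        elif word[i] > word[i + 1]:
            sign = -sign                              # anticommute
            word[i], word[i + 1] = word[i + 1], word[i]
            i = max(i - 1, 0)
        else:
            i += 1
    return sign, frozenset(word)

def supercommutator(S, T):
    """[e_S, e_T] = e_S e_T - (-1)^{|S||T|} e_T e_S, as (coefficient, subset)."""
    s1, U = basis_mult(S, T)
    s2, _ = basis_mult(T, S)
    return (s1 - (-1) ** (len(S) * len(T)) * s2, U)

n = 2
subsets = [frozenset(c) for r in range(n + 1)
           for c in combinations(range(1, n + 1), r)]

# supercenter basis: elements supercommuting with every generator e_i
center = [S for S in subsets
          if all(supercommutator(S, frozenset({i}))[0] == 0
                 for i in range(1, n + 1))]
assert center == [frozenset()]   # only the scalars: Cliff_2 is central simple
```

For instance e_1 e_2, though it commutes with everything in the ordinary sense after inserting signs, fails to supercommute with e_1, which is what knocks it out of the supercenter.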

Example. Bw(\mathbb{R}) is much more interesting: by a theorem of Wall, it is \mathbb{Z}_8, generated by [\mathbb{R}[\sqrt{-1}]]. Wall showed that this \mathbb{Z}_8 is related to the period-8 phenomenon in real K-theory and the theory of real Clifford algebras. The point is that

\mathbb{R}[\sqrt{-1}]^{\otimes n} \cong Cliff_n

where Cliff_n is the real Clifford algebra on n square roots of -1, made into a superalgebra in the usual way. It is well-known that

Cliff_8 \simeq Cliff_0

and this gives the period-8 phenomenon.

Example. More generally, Wall showed that as long as k doesn’t have characteristic 2, Bw(k) is an iterated extension of \mathbb{Z}_2 by k^*/(k^*)^2 by Br(k). For a quick modern proof, see Lemma 3.7 in Deligne’s paper. In the case k = \mathbb{R} all three of these groups are \mathbb{Z}_2, and the iterated extension gives \mathbb{Z}_8.

The Brauer–Wall monoid

And now the rest practically writes itself. Let k be a field and \overline{k} its algebraic closure. As before, let L be the semilattice of intermediate fields

k \subseteq F \subseteq \overline{k}

where F is a finite extension of k.

We define the underlying set of the Brauer–Wall monoid of k to be the disjoint union

BW(k) = \coprod_{F \in L} Bw(F)

To make this into a commutative monoid, we use the functoriality of the Brauer–Wall group. Suppose we have an inclusion of fields F \subseteq F' in the semilattice L. Then we get a homomorphism

Bw_{F', F} : Bw(F) \to Bw(F')

as follows:

Bw_{F', F} [A] = [F' \otimes_F A]

and this gives a functor

Bw : L \to AbGp

Using this, we multiply two elements in the Brauer–Wall monoid as follows. Given [A] \in Bw(F) and [A'] \in Bw(F'), their product is

[A] \cdot [A'] \; := \; Bw_{F \vee F', F} [A] \cdot Bw_{F \vee F', F'} [A']

or in other words

[A] \cdot [A'] = [A \otimes_F (F \vee F') \otimes_{F'} A']

Proposition. With the above multiplication, BW(k) is a commutative monoid.

Proof. The same argument that let us show associativity for multiplication in the Brauer monoid works again here. ∎
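The construction above is what semigroup theorists would call a strong semilattice of groups. As a minimal sketch, here it is in code for the real case treated in the next example: the semilattice L = \{\mathbb{R} \subseteq \mathbb{C}\}, with Bw(\mathbb{R}) = \mathbb{Z}_8, Bw(\mathbb{C}) = \mathbb{Z}_2, and reduction mod 2 as the transition map. All names are illustrative.

```python
from itertools import product

# A disjoint union of abelian groups Bw(F), indexed by a join-semilattice L,
# glued by transition homomorphisms Bw_{F',F} for F <= F'.
join = lambda F1, F2: 'C' if 'C' in (F1, F2) else 'R'
order = {'R': 8, 'C': 2}   # Bw(R) = Z/8, Bw(C) = Z/2

def push(F_to, x):
    """Transition homomorphism Bw(F) -> Bw(F_to); here reduction mod 2."""
    F, a = x
    return (F_to, a % order[F_to])

def mul(x, y):
    """Multiply [A] in Bw(F) and [A'] in Bw(F') inside the big monoid."""
    F = join(x[0], y[0])
    return (F, (push(F, x)[1] + push(F, y)[1]) % order[F])

# Associativity holds because the transition maps form a functor L -> AbGp.
elems = [('R', a) for a in range(8)] + [('C', a) for a in range(2)]
assert all(mul(mul(x, y), z) == mul(x, mul(y, z))
           for x, y, z in product(elems, repeat=3))
```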

Example. As a set, the Brauer–Wall monoid of the real numbers is the disjoint union

BW(\mathbb{R}) = Bw(\mathbb{R}) \sqcup Bw(\mathbb{C}) \cong \mathbb{Z}_8 \sqcup \mathbb{Z}_2

The monoid operation — let’s call it addition now — is the usual addition on \mathbb{Z}_8 when applied to two elements of that group, and the usual addition on \mathbb{Z}_2 when applied to two elements of that group. The only interesting part is when we add an element a \in \mathbb{Z}_8 and an element b \in \mathbb{Z}_2. For this we need to convert a into an element of \mathbb{Z}_2. For that we use the homomorphism

Bw_{\mathbb{C}, \mathbb{R}} : Bw(\mathbb{R}) \to Bw(\mathbb{C})

which sends [\mathbb{R}[\sqrt{-1}]] to [\mathbb{C}[\sqrt{-1}]]. More concretely,

Bw_{\mathbb{C}, \mathbb{R}} : \mathbb{Z}_8 \to \mathbb{Z}_2

takes an integer mod 8 and gives the corresponding integer mod 2.

So, very concretely,

BW(\mathbb{R}) \cong \mathbb{10} = \{0,1,2,3,4,5,6,7,\mathbf{0}, \mathbf{1}\}

where the monoid operation in \mathbb{10} is addition mod 8 for two lightface numbers, but addition mod 2 for two boldface numbers or a boldface and a lightface one.
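Spelled out as code, this rule is a few lines (a direct transcription; boldface elements are modeled as the strings '0' and '1'):

```python
# The 10-element monoid: Z_8 as plain ints 0..7, Z_2 as the strings
# '0', '1' standing for the boldface numbers.

def bold(a):
    """Push a lightface element (mod 8) down to a boldface one (mod 2)."""
    return str(a % 2)

def add(x, y):
    if isinstance(x, int) and isinstance(y, int):
        return (x + y) % 8               # both lightface: add mod 8
    # at least one boldface: push everything into Z_2, then add mod 2
    xb = bold(x) if isinstance(x, int) else x
    yb = bold(y) if isinstance(y, int) else y
    return str((int(xb) + int(yb)) % 2)

print(add(5, 6))      # 3
print(add(7, '0'))    # '1'
print(add('1', '1'))  # '0'
```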

Conclusion

I had meant to include a section explaining in detail how the 10 elements of this monoid \mathbb{10} correspond to 10 kinds of matter, but this post is getting too long. So for now, at least, you can click on this picture to get an explanation of that!

References

Besides what I’ve already mentioned about the classification of simple superalgebras, here are some other links. Wall proved a kind of super-Wedderburn theorem starting in the section of his paper called Elementary properties. Natalia Zhukavets has an Introduction to superalgebras which in Theorem 1.5 proves that over an algebraically closed field of characteristic different from 2, any simple superalgebra is of the form k[p|q] or D[n], where D = k[u] with u an odd square root of 1. Over an algebraically closed field, this superdivision algebra D is isomorphic to the division algebra that I called k[\sqrt{-1}]. Over a field that is not algebraically closed, they can be different, and there can be many nonisomorphic division algebras obtained by adjoining to k an odd square root of a \in k where a \ne 0.

Jinkui Wan and Weiqiang Wang have a paper with a Digression on superalgebras which summarizes Wall’s results in more modern language. Benjamin Gammage has an expository paper with a Classification of finite-dimensional simple superalgebras. This only classifies the ‘central’ ones — but as we’ve seen, that’s the key case.

Author: "john (baez@math.ucr.edu)" Tags: "Condensed Matter Physics"
Date: Tuesday, 12 Aug 2014 04:45

Back in 2005, Todd Trimble came out with a short paper on the super Brauer group and super division algebras, which I’d like to TeXify and reprint here.

In it, he gives extremely efficient proofs of several facts I alluded to last time. Namely:

• There are exactly 10 real division superalgebras.

• 8 of them have center \mathbb{R}, and these are Morita equivalent to the real Clifford algebras Cliff_0, \dots, Cliff_7.

• 2 of them have center \mathbb{C}, and these are Morita equivalent to the complex Clifford algebras \mathbb{C}liff_0 and \mathbb{C}liff_1.

• The real Clifford algebras obey

Cliff_i \otimes_{\mathbb{R}} Cliff_j \simeq Cliff_{i + j \bmod 8}

where \simeq means they’re Morita equivalent as superalgebras.

It easily follows from his calculations that also:

• The complex Clifford algebras obey

\mathbb{C}liff_i \otimes_{\mathbb{C}} \mathbb{C}liff_j \simeq \mathbb{C}liff_{i + j \bmod 2}

These facts lie at the heart of the ten-fold way. So, let’s see why they’re true!

Before we start, two comments are in order. First, Todd uses Deligne’s term ‘super Brauer group’ where I decided to use ‘Brauer–Wall group’. Second, and more importantly, there’s something about Morita equivalence everyone should know.

In my last post I said that two algebras are Morita equivalent if they have equivalent categories of representations. Todd uses another definition which I actually like much better. It’s equivalent, and it takes longer to explain, but it reveals more about what’s really going on. For any field k, there is a bicategory with

• algebras over k as objects,

• A-B bimodules as 1-morphisms from the algebra A to the algebra B, and

• bimodule homomorphisms as 2-morphisms.

Two algebras A and B over k are Morita equivalent if they are equivalent in this bicategory; that is, if there’s an A-B bimodule M and a B-A bimodule N such that

M \otimes_B N \cong A

as an A-A bimodule and

N \otimes_A M \cong B

as a B-B bimodule. The same kind of definition works for Morita equivalence of superalgebras, and Todd uses that here.

So, with no further ado, here is Todd’s note.

The super Brauer group and division superalgebras

The super Brauer group

Let SuperVect be the symmetric monoidal category of finite-dimensional super vector spaces over \mathbb{R}. By a superalgebra I mean a monoid in this category. There’s a bicategory whose objects are superalgebras A, whose 1-morphisms M : A \to B are left A-, right B-modules in SuperVect, and whose 2-morphisms are homomorphisms between modules. This is a symmetric monoidal bicategory under the usual tensor product on SuperVect.

A and B are Morita equivalent if they are equivalent objects in this bicategory. Equivalence classes [A] form an abelian monoid whose multiplication is given by the monoidal product. The super Brauer group of \mathbb{R} is the subgroup of invertible elements of this monoid.

If [B] is inverse to [A] in this monoid, then in particular A \otimes (-) can be considered left biadjoint to B \otimes (-). On the other hand, in the bicategory above we always have a biadjunction

\begin{array}{c} A \otimes C \to D \\ \hline C \to A^* \otimes D \end{array}

essentially because left A-modules are the same as right A^*-modules, where A^* denotes the superalgebra opposite to A. Since right biadjoints are unique up to equivalence, we see that if an inverse to [A] exists, it must be [A^*].

This can be sharpened: an inverse to [A] exists iff the unit and counit

1 \to A^* \otimes A \qquad A \otimes A^* \to 1

are equivalences in the bicategory. Actually, one is an equivalence iff the other is, because both of these canonical 1-morphisms are given by the same A-bimodule, namely the one given by A acting on both sides of the underlying superspace of A (call it S) by multiplication. Either is an equivalence if the bimodule structure map

A^* \otimes A \to Hom(S, S),

which is a map of superalgebras, is an isomorphism.

Cliff_1

As an example, let A = Cliff_1 be the Clifford algebra generated by the 1-dimensional space \mathbb{R} with the usual quadratic form Q(x) = x^2, and \mathbb{Z}_2-graded in the usual way. Thus, the homogeneous parts of A are 1-dimensional and there is an odd generator i satisfying i^2 = -1. The opposite A^* is similar except that there is an odd generator e satisfying e^2 = 1. Under the map

A^* \otimes A \to Hom(S, S)

where we write S as a sum of even and odd parts \mathbb{R} + \mathbb{R}i, this map has the matrix representation

e \otimes i \mapsto \left(\begin{array}{cc} -1 & 0 \\ 0 & 1 \end{array} \right)

1 \otimes i \mapsto \left(\begin{array}{cc} 0 & -1 \\ 1 & 0 \end{array} \right)

e \otimes 1 \mapsto \left(\begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right)

which makes it clear that this map is surjective and thus an isomorphism. Hence [Cliff_1] is invertible.
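The surjectivity claim can be checked numerically. Since A^* \otimes A and Hom(S, S) are both 4-dimensional, it amounts to the images of 1 ⊗ 1, e ⊗ i, 1 ⊗ i, e ⊗ 1 (the matrices displayed above, plus the identity) being linearly independent:

```python
import numpy as np

# Images of 1⊗1, e⊗i, 1⊗i, e⊗1 under A*⊗A -> Hom(S,S),
# copied from the matrices displayed above.
imgs = [np.eye(2),
        np.array([[-1, 0], [0, 1]]),   # e ⊗ i
        np.array([[0, -1], [1, 0]]),   # 1 ⊗ i
        np.array([[0, 1], [1, 0]])]    # e ⊗ 1

# Both sides are 4-dimensional, so the map is an isomorphism iff
# these four matrices are linearly independent.
M = np.stack([m.flatten() for m in imgs])
print(np.linalg.matrix_rank(M))  # 4
```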

One manifestation of Bott periodicity is that [Cliff_1] has order 8. We will soon see a very easy proof of this fact. A theorem of C. T. C. Wall is that [Cliff_1] in fact generates the super Brauer group; I believe this can be shown by classifying super division algebras, as discussed below.

Bott periodicity

That [Cliff_1] has order 8 is an easy calculation. Let Cliff_r denote the r-fold tensor power of Cliff_1. Cliff_2, for instance, has two supercommuting odd elements i, j satisfying i^2 = j^2 = -1; it follows that k := i j satisfies k^2 = -1, and we get the usual quaternions, graded so that the even part is the span \langle 1, k \rangle and the odd part is \langle i, j \rangle.

Cliff_3 has three supercommuting odd elements i, j, l, all of which are square roots of -1. It follows that e = i j l is an odd central involution (here ‘central’ is taken in the ungraded sense), and also that i' = j l, j' = l i, k' = i j satisfy the Hamiltonian equations

(i')^2 = (j')^2 = (k')^2 = i' j' k' = -1,

so we have Cliff_3 = \mathbb{H}[e]/\langle e^2 - 1 \rangle. Note this is the same as

\mathbb{H} \otimes Cliff_1^*

where the \mathbb{H} here is the quaternions viewed as a superalgebra concentrated in degree 0 (i.e. purely bosonic).
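If you want to see the Hamiltonian equations hold concretely, here is a hypothetical 2 × 2 complex matrix model of the quaternions (any faithful model would do):

```python
import numpy as np

# A 2x2 complex matrix model of the quaternions, just to check
# (i')^2 = (j')^2 = (k')^2 = i'j'k' = -1 numerically.
i = np.array([[1j, 0], [0, -1j]])
j = np.array([[0, 1], [-1, 0]])
k = i @ j   # = [[0, 1j], [1j, 0]]

minus_one = -np.eye(2)
for m in (i @ i, j @ j, k @ k, i @ j @ k):
    assert np.allclose(m, minus_one)
print("Hamiltonian equations hold")
```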

Then we see immediately that Cliff_4 = Cliff_3 \otimes Cliff_1 is equivalent to purely bosonic \mathbb{H} (since the Cliff_1 cancels the Cliff_1^* in the super Brauer group).

At this point we are done: we know that conjugation on (purely bosonic) \mathbb{H} gives an isomorphism

\mathbb{H}^* \cong \mathbb{H}

hence [\mathbb{H}]^{-1} = [\mathbb{H}^*] = [\mathbb{H}], i.e. [\mathbb{H}] = [Cliff_4] has order 2! Hence [Cliff_1] has order 8.

The super Brauer clock

All this generalizes to arbitrary Clifford algebras: if a real quadratic vector space (V, Q) has signature (r, s), then the superalgebra Cliff(V, Q) is isomorphic to A^{\otimes r} \otimes {A^*}^{\otimes s}, where A^{\otimes r} denotes the r-fold tensor product of A = Cliff_1. By the above calculation we see that Cliff(V, Q) is equivalent to Cliff_{r-s}, where r-s is taken modulo 8.

For the record, then, here are the hours of the super Brauer clock, where e denotes an odd element, and \simeq denotes Morita equivalence:

\begin{array}{ccl} Cliff_0 & \simeq & \mathbb{R} \\ Cliff_1 & \simeq & \mathbb{R} + \mathbb{R}e, \quad e^2 = -1 \\ Cliff_2 & \simeq & \mathbb{C} + \mathbb{C}e, \quad e^2 = -1, \; e i = -i e \\ Cliff_3 & \simeq & \mathbb{H} + \mathbb{H}e, \quad e^2 = 1, \; e i = i e, \; e j = j e, \; e k = k e \\ Cliff_4 & \simeq & \mathbb{H} \\ Cliff_5 & \simeq & \mathbb{H} + \mathbb{H}e, \quad e^2 = -1, \; e i = i e, \; e j = j e, \; e k = k e \\ Cliff_6 & \simeq & \mathbb{C} + \mathbb{C} e, \quad e^2 = 1, \; e i = -i e \\ Cliff_7 & \simeq & \mathbb{R} + \mathbb{R}e, \quad e^2 = 1 \end{array}

All the superalgebras on the right are in fact division superalgebras, i.e. superalgebras in which every nonzero homogeneous element is invertible.
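The clock can be packaged as a lookup table: the Morita class of Cliff(V, Q) for signature (r, s) depends only on r - s mod 8. A small sketch, with the descriptions abbreviated from the table above ('e central' is shorthand for e commuting with i, j, k):

```python
# The super Brauer clock as a lookup table, transcribed from the text.
CLOCK = {
    0: "R",
    1: "R + Re,  e^2 = -1",
    2: "C + Ce,  e^2 = -1, ei = -ie",
    3: "H + He,  e^2 = +1, e central",
    4: "H",
    5: "H + He,  e^2 = -1, e central",
    6: "C + Ce,  e^2 = +1, ei = -ie",
    7: "R + Re,  e^2 = +1",
}

def clock(r, s):
    """Division superalgebra Morita equivalent to Cliff of signature (r, s)."""
    return CLOCK[(r - s) % 8]

print(clock(4, 0))   # H
print(clock(0, 1))   # R + Re,  e^2 = +1
```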

To prove Wall’s result that [Cliff_1] generates the super Brauer group, we need a lemma: any element in the super Brauer group is the class of a central division superalgebra; that is, one with \mathbb{R} as its center.

Then, if we classify the division superalgebras over \mathbb{R} and show the central ones are Morita equivalent to Cliff_0, \dots, Cliff_7, we’ll be done.

Classifying real division superalgebras

I’ll take as known that the only associative division algebras over \mathbb{R} are \mathbb{R}, \mathbb{C} and \mathbb{H} — so the even part A of an associative division superalgebra must be one of these. We can express the associativity of such a superalgebra (with even part A) by saying that the odd part M is an A-bimodule equipped with an A-bimodule pairing \langle -, - \rangle : M \otimes_A M \to A such that:

a \langle b, c \rangle = \langle a, b \rangle c \quad \text{for all } a, b, c \in M \qquad (\star)

If the superalgebra is a division superalgebra which is not wholly concentrated in even degree, then multiplication by a nonzero odd element induces an isomorphism

A \to M

and so M is 1-dimensional over A; choose a basis element e for M.

The key observation is that for any a \in A, there exists a unique a' \in A such that

a e = e a'

and that the A-bimodule structure forces (a b)' = a' b'. Hence we have an automorphism (fixing the real field) (-)' : A \to A

and we can easily enumerate (up to isomorphism) the possibilities for associative division superalgebras over \mathbb{R}:

1. A = \mathbb{R}. Here we can adjust e so that e^2 := \langle e, e \rangle is either -1 or 1. The corresponding division superalgebras occur at 1 o’clock and 7 o’clock on the super Brauer clock.

2. A = \mathbb{C}. There are two \mathbb{R}-automorphisms \mathbb{C} \to \mathbb{C}. In the case where the automorphism is conjugation, condition (\star) for superassociativity gives \langle e, e \rangle e = e \langle e, e \rangle, so that \langle e, e \rangle must be real. Again e can be adjusted so that \langle e, e \rangle equals -1 or 1. These possibilities occur at 2 o’clock and 6 o’clock on the super Brauer clock.

For the identity automorphism, we can adjust e so that \langle e, e \rangle is 1. This gives the superalgebra \mathbb{C}[e]/\langle e^2 - 1 \rangle (where e commutes with elements of \mathbb{C}). This does not occur on the super Brauer clock over \mathbb{R}. However, it does generate the super Brauer group over \mathbb{C} (which is of order two).

3. A = \mathbb{H}. Here \mathbb{R}-automorphisms \mathbb{H} \to \mathbb{H} are given by h \mapsto x h x^{-1} for x \in \mathbb{H}. In other words

h e = e x h x^{-1}

whence e x commutes with all elements of \mathbb{H} (i.e. we can assume wlog that the automorphism is the identity). The properties of the pairing guarantee that h \langle e, e \rangle = \langle e, e \rangle h for all h \in \mathbb{H}, so \langle e, e \rangle is real and again we can adjust e so that \langle e, e \rangle equals 1 or -1. These cases occur at 3 o’clock and 5 o’clock on the super Brauer clock.

This appears to be a complete (even if a bit pedestrian) analysis.
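As a sanity check on the case analysis above, here is a toy enumeration of the outcomes (labels are informal: each non-purely-even algebra is recorded by its even part and the adjusted value of e^2, with \mathbb{C} contributing an extra case for the identity automorphism):

```python
# Tallying the classification: 3 purely even division superalgebras,
# plus the 7 cases with nonzero odd part found above.
purely_even = ["R", "C", "H"]
with_odd_part = [
    ("R", -1), ("R", +1),   # 1 and 7 o'clock
    ("C", -1), ("C", +1),   # 2 and 6 o'clock (conjugation automorphism)
    ("C", "id"),            # identity automorphism: off the real clock
    ("H", +1), ("H", -1),   # 3 and 5 o'clock
]
print(len(purely_even) + len(with_odd_part))  # 10
```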

Author: "john (baez@math.ucr.edu)" Tags: "Condensed Matter Physics"
Date: Tuesday, 05 Aug 2014 10:27

There are 10 of each of these things:

  • Associative real super-division algebras.

  • Classical families of compact symmetric spaces.

  • Ways that Hamiltonians can get along with time reversal (T) and charge conjugation (C) symmetry.

  • Dimensions of spacetime in string theory.

It’s too bad nobody took up writing This Week’s Finds in Mathematical Physics when I quit. Someone should have explained this stuff in a nice simple way, so I could read their summary instead of fighting my way through the original papers. I don’t have much time for this sort of stuff anymore!

Luckily there are some good places to read about this stuff:

Let me start by explaining the basic idea, and then move on to more fancy aspects.

Ten kinds of matter

The idea of the ten-fold way goes back at least to 1996, when Altland and Zirnbauer discovered that substances can be divided into 10 kinds.

The basic idea is pretty simple. Some substances have time-reversal symmetry: they would look the same, even on the atomic level, if you made a movie of them and ran it backwards. Some don’t — these are more rare, like certain superconductors made of yttrium barium copper oxide! Time reversal symmetry is described by an antiunitary operator T that squares to 1 or to -1: please take my word for this, it’s a quantum thing. So, we get 3 choices, which are listed in the chart under T as 1, -1, or 0 (no time reversal symmetry).

Similarly, some substances have charge conjugation symmetry, meaning a symmetry where we switch particles and holes: places where a particle is missing. The ‘particles’ here can be rather abstract things, like phonons — little vibrations of sound in a substance, which act like particles — or spinons — little vibrations in the lined-up spins of electrons. Basically any way that something can wave can, thanks to quantum mechanics, act like a particle. And sometimes we can switch particles and holes, and a substance will act the same way!

Like time reversal symmetry, charge conjugation symmetry is described by an antiunitary operator C that can square to 1 or to -1. So again we get 3 choices, listed in the chart under C as 1, -1, or 0 (no charge conjugation symmetry).

So far we have 3 × 3 = 9 kinds of matter. What is the tenth kind?

Some kinds of matter don’t have time reversal or charge conjugation symmetry, but they’re symmetrical under the combination of time reversal and charge conjugation! You switch particles and holes and run the movie backwards, and things look the same!

In the chart they write 1 under S when your matter has this combined symmetry, and 0 when it doesn’t. So, “0 0 1” is the tenth kind of matter (the second row in the chart).
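The counting above can be sketched in a few lines (a toy enumeration, not tied to any physics library): 3 × 3 choices for T and C, minus the doubly-absent case, which splits into two according to whether S is present.

```python
# T and C each square to +1, -1, or are absent (0); when both are absent,
# the combined symmetry S may still be present (1) or absent (0).
classes = [(t, c) for t in (1, -1, 0) for c in (1, -1, 0) if (t, c) != (0, 0)]
classes += [(0, 0, 1), (0, 0, 0)]   # S present / absent when T = C = 0
print(len(classes))  # 10
```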

This is just the beginning of an amazing story. Since then people have found substances called topological insulators that act like insulators in their interior but conduct electricity on their surface. We can make 3-dimensional topological insulators, but also 2-dimensional ones (that is, thin films) and even 1-dimensional ones (wires). And we can theorize about higher-dimensional ones, though this is mainly a mathematical game.

So we can ask which of the 10 kinds of substance can arise as topological insulators in various dimensions. And the answer is: in any particular dimension, only 5 kinds can show up. But it’s a different 5 in different dimensions! This chart shows how it works for dimensions 1 through 8. The kinds that can’t show up are labelled 0.

If you look at the chart, you’ll see it has some nice patterns. And it repeats after dimension 8. In other words, dimension 9 works just like dimension 1, and so on.

If you read some of the papers I listed, you’ll see that the \mathbb{Z}’s and \mathbb{Z}_2’s in the chart are the homotopy groups of the ten classical series of compact symmetric spaces. The fact that dimension n+8 works like dimension n is called Bott periodicity.
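In the real case, these repeating groups are the standard homotopy groups of KO, which is why dimension n+8 behaves like dimension n; a one-liner makes the period-8 pattern explicit:

```python
# Homotopy groups of real K-theory: pi_n(KO) repeats with period 8.
KO = ["Z", "Z_2", "Z_2", "0", "Z", "0", "0", "0"]

def ko(n):
    return KO[n % 8]

assert ko(9) == ko(1) == "Z_2"   # dimension 9 works just like dimension 1
print([ko(n) for n in range(10)])
```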

Furthermore, the stuff about operators T, C and S that square to 1, -1 or don’t exist at all is closely connected to the classification of associative real super division algebras. It all fits together.

Super division algebras

In 2005, Todd Trimble wrote a short paper called The super Brauer group and super division algebras.

In it, he gave a quick way to classify the associative real super division algebras: that is, finite-dimensional associative real \mathbb{Z}_2-graded algebras having the property that every nonzero homogeneous element is invertible. The result was known, but I really enjoyed Todd’s effortless proof.

However, I didn’t notice that there are exactly 10 of these guys. Now this turns out to be a big deal. For each of these 10 algebras, the representations of that algebra describe ‘types of matter’ of a particular kind — where the 10 kinds are the ones I explained above!

So what are these 10 associative super division algebras?

3 of them are purely even, with no odd part: the usual associative division algebras \mathbb{R}, \mathbb{C} and \mathbb{H}.

7 of them are not purely even. Of these, 6 are Morita equivalent to the real Clifford algebras Cl_1, Cl_2, Cl_3, Cl_5, Cl_6 and Cl_7. These are the superalgebras generated by 1, 2, 3, 5, 6, or 7 odd square roots of -1.

Now you should have at least two questions:

  • What’s ‘Morita equivalence’? — and even if you know, why should it matter here? Two algebras are Morita equivalent if they have equivalent categories of representations. The same definition works for superalgebras, though now we look at their representations on super vector spaces (\mathbb{Z}_2-graded vector spaces). For physics what we really care about is the representations of an algebra or superalgebra: as I mentioned, those are ‘types of matter’. So, it makes sense to count two superalgebras as ‘the same’ if they’re Morita equivalent.

  • 1, 2, 3, 5, 6, and 7? That’s weird — why not 4? Well, Todd showed that Cl_4 is Morita equivalent to the purely even super division algebra \mathbb{H}. So we already had that one on our list. Similarly, why not 0? Cl_0 is just \mathbb{R}. So we had that one too.

Representations of Clifford algebras are used to describe spin-1/2 particles, so it’s exciting that 8 of the 10 associative real super division algebras are Morita equivalent to real Clifford algebras.

But I’ve already mentioned one that’s not: the complex numbers, \mathbb{C}, regarded as a purely even algebra. And there’s one more! It’s the complex Clifford algebra \mathbb{C}\mathrm{l}_1. This is the superalgebra you get by taking the purely even algebra \mathbb{C} and throwing in one odd square root of -1.

As soon as you hear that, you notice that the purely even algebra \mathbb{C} is the complex Clifford algebra \mathbb{C}\mathrm{l}_0. In other words, it’s the superalgebra you get by taking the purely even algebra \mathbb{C} and throwing in no odd square roots of -1.

More connections

At this point things start fitting together:

  • You can multiply Morita equivalence classes of algebras using the tensor product of algebras: [A] \otimes [B] = [A \otimes B]. Some equivalence classes have multiplicative inverses, and these form the Brauer group. We can do the same thing for superalgebras, and get the super Brauer group. The super division algebras Morita equivalent to Cl_0, \dots, Cl_7 serve as representatives of the super Brauer group of the real numbers, which is \mathbb{Z}_8. I explained this in week211 and further in week212. It’s a nice purely algebraic way to think about real Bott periodicity!

  • As we’ve seen, the super division algebras Morita equivalent to Cl_0 and Cl_4 are a bit funny. They’re purely even. So they serve as representatives of the plain old Brauer group of the real numbers, which is \mathbb{Z}_2.

  • On the other hand, the complex Clifford algebras \mathbb{C}\mathrm{l}_0 = \mathbb{C} and \mathbb{C}\mathrm{l}_1 serve as representatives of the super Brauer group of the complex numbers, which is also \mathbb{Z}_2. This is a purely algebraic way to think about complex Bott periodicity, which has period 2 instead of period 8.

Meanwhile, the purely even \mathbb{R}, \mathbb{C} and \mathbb{H} underlie Dyson’s ‘three-fold way’, which I explained in detail here:

Briefly, if you have an irreducible unitary representation of a group on a complex Hilbert space H, there are three possibilities:

  • The representation is isomorphic to its dual via an invariant symmetric bilinear pairing g : H \times H \to \mathbb{C}. In this case it has an invariant antiunitary operator J : H \to H with J^2 = 1. This lets us write our representation as the complexification of a real one.

  • The representation is isomorphic to its dual via an invariant antisymmetric bilinear pairing \omega : H \times H \to \mathbb{C}. In this case it has an invariant antiunitary operator J : H \to H with J^2 = -1. This lets us promote our representation to a quaternionic one.

  • The representation is not isomorphic to its dual. In this case we say it’s truly complex.

In physics applications, we can take J to be either time reversal symmetry, T, or charge conjugation symmetry, C. Studying either symmetry separately leads us to Dyson’s three-fold way. Studying them both together leads to the ten-fold way!

So the ten-fold way seems to combine in one nice package:

  • real Bott periodicity,
  • complex Bott periodicity,
  • the real Brauer group,
  • the real super Brauer group,
  • the complex super Brauer group, and
  • the three-fold way.

I could throw ‘the complex Brauer group’ into this list, because that’s lurking here too, but it’s the trivial group, with \mathbb{C} as its representative.

There really should be a better way to understand this. Here’s my best attempt right now.

The set of Morita equivalence classes of finite-dimensional real superalgebras gets a commutative monoid structure thanks to direct sum. This commutative monoid then gets a commutative rig structure thanks to tensor product. This commutative rig — let’s call it \mathfrak{R} — is apparently too complicated to understand in detail, though I’d love to be corrected about that. But we can peek at pieces:

  • We can look at the group of invertible elements in \mathfrak{R} — more precisely, elements with multiplicative inverses. This is the real super Brauer group \mathbb{Z}_8.

  • We can look at the sub-rig of \mathfrak{R} coming from semisimple purely even algebras. As a commutative monoid under addition, this is \mathbb{N}^3, since it’s generated by \mathbb{R}, \mathbb{C} and \mathbb{H}. This commutative monoid becomes a rig with a funny multiplication table, e.g. \mathbb{C} \otimes \mathbb{C} = \mathbb{C} \oplus \mathbb{C}. This captures some aspects of the three-fold way.

We should really look at a larger chunk of the rig \mathfrak{R}, that includes both of these chunks. How about the sub-rig coming from all semisimple superalgebras? What’s that?

And here’s another question: what’s the relation to the 10 classical families of compact symmetric spaces? The short answer is that each family describes a family of possible Hamiltonians for one of our 10 kinds of matter. For a more detailed answer, I suggest reading Gregory Moore’s Quantum symmetries and compatible Hamiltonians. But if you look at this chart by Ryu et al, you’ll see these families involve a nice interplay between \mathbb{R}, \mathbb{C} and \mathbb{H}, which is what this story is all about:

The families of symmetric spaces are listed in the column “Hamiltonian”.

All this stuff is fitting together more and more nicely! And if you look at the paper by Freed and Moore, you’ll see there’s a lot more involved when you take the symmetries of crystals into account. People are beginning to understand the algebraic and topological aspects of condensed matter much more deeply these days.

The list

Just for the record, here are all 10 associative real super division algebras. 8 are Morita equivalent to real Clifford algebras:

  • Cl_0 is the purely even division algebra \mathbb{R}.

  • Cl_1 is the super division algebra \mathbb{R} \oplus \mathbb{R}e, where e is an odd element with e^2 = -1.

  • Cl_2 is the super division algebra \mathbb{C} \oplus \mathbb{C}e, where e is an odd element with e^2 = -1 and e i = -i e.

  • Cl_3 is the super division algebra \mathbb{H} \oplus \mathbb{H}e, where e is an odd element with e^2 = 1 and e i = i e, e j = j e, e k = k e.

  • Cl_4 is \mathbb{H}[2], the algebra of 2 \times 2 quaternionic matrices, given a certain \mathbb{Z}_2-grading. This is Morita equivalent to the purely even division algebra \mathbb{H}.

  • Cl_5 is \mathbb{C}[4] given a certain \mathbb{Z}_2-grading. This is Morita equivalent to the super division algebra \mathbb{H} \oplus \mathbb{H}e where e is an odd element with e^2 = -1 and e i = i e, e j = j e, e k = k e.

  • Cl_6 is \mathbb{R}[8] given a certain \mathbb{Z}_2-grading. This is Morita equivalent to the super division algebra \mathbb{C} \oplus \mathbb{C}e where e is an odd element with e^2 = 1 and e i = -i e.

  • Cl_7 is \mathbb{R}[8] \oplus \mathbb{R}[8] given a certain \mathbb{Z}_2-grading. This is Morita equivalent to the super division algebra \mathbb{R} \oplus \mathbb{R}e where e is an odd element with e^2 = 1.

Cl_{n+8} is Morita equivalent to Cl_n, so we can stop here if we’re just looking for Morita equivalence classes, and there also happen to be no more super division algebras down this road. It is nice to compare Cl_n and Cl_{8-n}: there’s a nice pattern here.

The remaining 2 real super division algebras are complex Clifford algebras:

  • \mathbb{C}\mathrm{l}_0 is the purely even division algebra \mathbb{C}.

  • \mathbb{C}\mathrm{l}_1 is the super division algebra \mathbb{C} \oplus \mathbb{C} e, where e is an odd element with e^2 = -1 and e i = i e.

In the last one we could also say “with e^2 = 1” — we’d get something isomorphic, not a new possibility.

Ten dimensions of string theory

Oh yeah — what about the 10 dimensions in string theory? Are they really related to the ten-fold way?

It seems weird, but I think the answer is “yes, at least slightly”.

Remember, 2 of the dimensions in 10d string theory are those of the string worldsheet, which is a complex manifold. The other 8 are connected to the octonions, which in turn are connected to the 8-fold periodicity of real Clifford algebras. So the 8+2 split in string theory is at least slightly connected to the 8+2 split in the list of associative real super division algebras.

This may be more of a joke than a deep observation. After all, the 8 dimensions of the octonions are not individual things with distinct identities, as the 8 super division algebras coming from real Clifford algebras are. So there’s no one-to-one correspondence going on here, just an equation between numbers.

Still, there are certain observations that would be silly to resist mentioning.

Author: "john (baez@math.ucr.edu)" Tags: "Condensed Matter Physics"
Date: Tuesday, 05 Aug 2014 01:04

My new book is out!

Front cover of Basic Category Theory

Click the image for more information.

It’s an introductory category theory text, and I can prove it exists: there’s a copy right in front of me. (You too can purchase a proof.) Is it unique? Maybe. Here are three of its properties:

  • It doesn’t assume much.
  • It sticks to the basics.
  • It’s short.

I want to thank the n-Café patrons who gave me encouragement during my last week of work on this. As I remarked back then, some aspects of writing a book — even a short one — require a lot of persistence.

But I also want to take this opportunity to make a suggestion. There are now quite a lot of introductions to category theory available, of various lengths, at various levels, and in various styles. I don’t kid myself that mine is particularly special: it’s just what came out of my individual circumstances, as a result of the courses I’d taught. I think the world has plenty of introductions to category theory now.

What would be really good is for there to be a nice second book on category theory. Now, there are already some resources for advanced categorical topics: for instance, in my book, I cite both the nLab and Borceux’s three-volume Handbook of Categorical Algebra for this. But useful as those are, what we’re missing is a shortish book that picks up where Categories for the Working Mathematician leaves off.

Let me be more specific. One of the virtues of Categories for the Working Mathematician (apart from being astoundingly well-written) is that it’s selective. Mac Lane covers a lot in just 262 pages, and he does so by repeatedly making bold choices about what to exclude. For instance, he implicitly proves that for any finitary algebraic theory, the category of algebras has all colimits — but he does so simply by proving it for groups, rather than explicitly addressing the general case. (After all, anyone who knows what a finitary algebraic theory is could easily generalize the proof.) He also writes briskly: few words are wasted.

I’m imagining a second book on category theory of a similar length to Categories for the Working Mathematician, and written in the same brisk and selective manner. Over beers five years ago, Nicola Gambino and I discussed what this hypothetical book ought to contain. I’ve lost the piece of paper I wrote it down on (thus, Nicola is absolved of all blame), but I attempted to recreate it sometime later. Here’s a tentative list of chapters, in no particular order:

  • Enriched categories
  • 2-categories (and a bit on higher categories)
  • Topos theory (obviously only an introduction) and categorical set theory
  • Fibrations
  • Bimodules, Morita equivalence, Cauchy completeness and absolute colimits
  • Operads and Lawvere theories
  • Categorical logic (again, just a bit) and internal category theory
  • Derived categories
  • Flat functors and locally presentable categories
  • Ends and Kan extensions (already in Mac Lane’s book, but maybe worth another pass).

Someone else should definitely write such a book.

Author: "leinster (tom.leinster@ed.ac.uk)" Tags: "Categories"
Date: Sunday, 03 Aug 2014 14:00

I’ve been spending some time with Simon Willerton’s paper Tight spans, Isbell completions and semi-tropical modules. In particular, I’ve been trying to understand tight spans.

The tight span of a metric space A is another metric space T(A), in which A naturally embeds. For instance, the tight span of a two-point space is a line segment containing the original two points as its endpoints. Similarly, the tight span of a three-point space is a space shaped like the letter Y, with the original three points at its tips. Because of examples like this, some people like to think of the tight span as a kind of abstract convex hull.

Simon’s paper puts the tight span construction into the context of a categorical construction, Isbell conjugacy. I now understand these things better than I did, but there’s still a lot I don’t get. Here goes.

Simon wrote a blog post summarizing the main points of his paper, but I want to draw attention to slightly different aspects of it than he does. So, much as I recommend that post of his, I’ll make this self-contained.

We begin with Isbell conjugacy. For any small category A, there’s an adjunction

\check{\,\,}: [A^{op}, Set] \leftrightarrows [A, Set]^{op}: \hat{\,\,}

defined for F: A^{op} \to Set and b \in A by

\check{F}(b) = Hom(F, A(-, b))

and for G: A \to Set and a \in A by

\hat{G}(a) = Hom(G, A(a, -)).

We call \check{F} the (Isbell) conjugate of F, and similarly \hat{G} the (Isbell) conjugate of G.

Like any adjunction, it restricts to an equivalence of categories in a canonical way. Specifically, it’s an equivalence between

the full subcategory of [A^{op}, Set] consisting of those objects F such that the canonical map F \to \hat{\check{F}} is an isomorphism

and

the full subcategory of [A, Set]^{op} consisting of those objects G such that the canonical map G \to \check{\hat{G}} is an isomorphism.

I’ll call either of these equivalent categories the reflexive completion R(A) of A. (Simon called it the Isbell completion, possibly with my encouragement, but “reflexive completion” is more descriptive and I prefer it now.) So, the reflexive completion of a category consists of all the “reflexive” presheaves on it — those canonically isomorphic to their double conjugate.

All of this categorical stuff generalizes seamlessly to an enriched context, at least if we work over a complete symmetric monoidal closed category.

For example, suppose we take our base category to be the category Ab of abelian groups. Let k be a field, viewed as a one-object Ab-category. Both [k^{op}, Ab] and [k, Ab] are the category of k-vector spaces, and both \check{\,\,} and \hat{\,\,} are the dual vector space construction. The reflexive completion R(k) of k is the category of k-vector spaces V for which the canonical map V \to V^{\ast\ast} is an isomorphism — in other words, the finite-dimensional vector spaces.

But that’s not the example that will matter to us here.

We’ll be thinking primarily about the case where the base category is the poset ([0, \infty], \geq) (the reverse of the usual order!) with monoidal structure given by addition. As Lawvere observed long ago, a [0, \infty]-category is then a “generalized metric space”: a set A of points together with a distance function

d: A \times A \to [0, \infty]

satisfying the triangle inequality d(a, b) + d(b, c) \geq d(a, c) and the equation d(a, a) = 0. These are looser structures than classical metric spaces, mainly because of the absence of the symmetry axiom d(a, b) = d(b, a).

The enriched functors are distance-decreasing maps between metric spaces: those functions f: A \to B satisfying d_B(f(a_1), f(a_2)) \leq d_A(a_1, a_2). I’ll just call these “maps” of metric spaces.

If you work through the details, you’ll find that Isbell conjugacy for metric spaces works as follows. Let A be a generalized metric space. The conjugate of a map f: A^{op} \to [0, \infty] is the map \check{f}: A \to [0, \infty] defined by

\check{f}(b) = \sup_{a \in A} \max \{ d(a, b) - f(a), 0 \}

and the conjugate of a map g: A \to [0, \infty] is the map \hat{g}: A^{op} \to [0, \infty] defined by

\hat{g}(a) = \sup_{b \in A} \max \{ d(a, b) - g(b), 0 \}.

We always have f \geq \hat{\check{f}}, and the reflexive completion R(A) of A consists of all maps f: A^{op} \to [0, \infty] such that f = \hat{\check{f}}. (Although you could write out an explicit formula for that, I’m not convinced it’s much help.) The metric on R(A) is the sup metric.

All that comes out of the general categorical machinery.
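Since the formulas are so concrete, they are easy to sanity-check mechanically. Here is a small sketch (my own, not from Simon’s paper) of both conjugation operations on a finite metric space, where sup and max become a max over finitely many points:

```python
def conj_check(f, d, points):
    """Conjugate of a presheaf f: A^op -> [0, oo]: check-f(b) = sup_a max(d(a,b) - f(a), 0)."""
    return {b: max(max(d[(a, b)] - f[a], 0.0) for a in points) for b in points}

def conj_hat(g, d, points):
    """Conjugate of a copresheaf g: A -> [0, oo]: hat-g(a) = sup_b max(d(a,b) - g(b), 0)."""
    return {a: max(max(d[(a, b)] - g[b], 0.0) for b in points) for a in points}

# Two points, distance 3 apart (symmetric, for simplicity).
points = [0, 1]
d = {(0, 0): 0.0, (1, 1): 0.0, (0, 1): 3.0, (1, 0): 3.0}

# The representable presheaf f = d(-, 0) equals its double conjugate,
# i.e. it lies in the reflexive completion, as the Yoneda embedding promises.
f = {x: d[(x, 0)] for x in points}
assert conj_hat(conj_check(f, d, points), d, points) == f

# In general we only have f >= check-f-hat:
g = {0: 5.0, 1: 5.0}
gg = conj_hat(conj_check(g, d, points), d, points)
assert all(gg[x] <= g[x] for x in points)
```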

However, we can say something more that only makes sense because of the particular base category we’re using.

As we all know, symmetric metric spaces — the ones we’re most used to — are particularly interesting. For a symmetric metric space A, the distinction between covariant and contravariant functors on A vanishes. The two kinds of conjugate, \hat{\,\,} and \check{\,\,}, are also the same. I’ll write {}^\ast for them both.

The reflexive completion R(A) consists of the functions A \to [0, \infty] that are equal to their double conjugate. But because there is no distinction between covariant and contravariant, we can also consider the functions A \to [0, \infty] equal to their single conjugate.

The set of such functions is — by definition, if you like — the tight span T(A) of A. So

T(A) = \{ f : A \to [0, \infty] \,|\, f = f^\ast \},

while

R(A) = \{ f: A \to [0, \infty] \,|\, f = f^{\ast\ast} \}.

Both come equipped with the sup metric, and both contain A as a subspace, via the Yoneda embedding a \mapsto d(-, a). So A \subseteq T(A) \subseteq R(A).

Example Let A be the symmetric metric space consisting of two points distance D apart. Its reflexive completion R(A) is the set [0, D] \times [0, D] with metric d((s_1, s_2), (t_1, t_2)) = \max \{ t_1 - s_1, t_2 - s_2, 0 \}. The Yoneda embedding identifies the two points of A with the points (0, D) and (D, 0) of R(A). The tight span T(A) is the straight line between these two points of R(A) (a diagonal of the square), which is isometric to the ordinary Euclidean line segment [0, D].
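This example is small enough to verify by brute force. The sketch below (my own) represents a function on the two points as a pair (s, t) = (f(a_0), f(a_1)), scans an integer grid, and recovers exactly the square and the diagonal segment of the example:

```python
D = 4  # distance between the two points

def conj(f):
    # single conjugate on a symmetric space: f*(b) = max_a max(d(a,b) - f(a), 0)
    s, t = f
    return (max(0 - s, D - t, 0), max(D - s, 0 - t, 0))

# scan a grid slightly larger than [0, D]^2
grid = [(s, t) for s in range(D + 2) for t in range(D + 2)]
reflexive = [f for f in grid if conj(conj(f)) == f]   # f = f**
tight     = [f for f in grid if conj(f) == f]         # f = f*

# reflexive completion: the whole square [0, D] x [0, D]
assert reflexive == [(s, t) for s in range(D + 1) for t in range(D + 1)]

# tight span: the line s + t = D joining (0, D) to (D, 0)
assert tight == [(s, D - s) for s in range(D + 1)]
```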

As that example shows, the reflexive completion of a space needn’t be symmetric, even if the original space was symmetric. On the other hand, it’s not too hard to show that the tight span of a space is always symmetric. Simon’s Theorem 4.1.1 slots everything into place:

Theorem (Willerton) Let A be a symmetric metric space. Then the tight span T(A) is the largest symmetric subspace of R(A) containing A.

Here “largest” means that if B is another symmetric subspace of R(A) containing A then B \subseteq T(A). It’s not even obvious that there is a largest one. For instance, given any non-symmetric metric space C, what’s the largest symmetric subspace of C? There isn’t one, just because every singleton subset of C is symmetric.

Simon’s told the following story before, but I can’t resist telling it again. Tight spans have been discovered independently many times over. The first time they were discovered, they were called by the less catchy name of “injective envelope” (because T(A) is the smallest injective metric space containing A). And the person who made that first discovery? Isbell — who, as far as anyone knows, never noticed that this had anything to do with Isbell conjugacy.

Let me finish with something I don’t quite understand.

Simon’s Theorem 3.1.4 says the following. Let A be a symmetric metric space. (He doesn’t assume symmetry, but I will.) Then for any a \in A, p \in R(A) and \varepsilon \gt 0, there exists b \in A such that

d(a, p) + d(p, b) \leq d(a, b) + \varepsilon.

In other words, a, p and b are almost collinear.

A loose paraphrasing of this is that every point in the reflexive completion of A is close to being on a geodesic between points of A. The theorem does imply this, but it says a lot more. Look at the quantification. We get to choose one end a of the not-quite-geodesic, as well as the point p in the reflexive completion, and we’re guaranteed that if we continue the not-quite-geodesic from a on through p, then we’ll eventually meet another point of A (or nearly).

Let’s get rid of those “not quite”s, and at the same time focus attention on the tight span rather than the reflexive completion. Back in Isbell’s original 1964 paper (cited by Simon, in case you want to look it up), it’s shown that if A is compact then so is its tight span T(A). Of course, Simon’s Theorem 3.1.4 applies in particular when p \in T(A). But then compactness of T(A) means that we can drop the \varepsilon.

In other words: let A be a compact symmetric metric space. Then for any a \in A and p \in T(A), there exists b \in A such that

d(a, p) + d(p, b) = d(a, b).

So, if you place your pencil at a point of A, draw a straight line from it to a point of T(A), and keep going, you’ll eventually meet another point of A.

This leaves me wondering what tight spans of common geometric figures actually look like.

For example, take an arc A of a circle — any size arc, as long as it’s not the whole circle. Embed it in the plane and give it the Euclidean metric. I said originally that the tight span is sometimes thought of as a sort of abstract convex hull, and indeed, the introduction to Simon’s paper says that some authors have actually used this name instead of “tight span”. But the result I just stated makes this seem highly misleading. It implies that the tight span of A is not its convex hull, and indeed, can’t be any subspace of the Euclidean plane (unless, perhaps, it’s A itself, which I suspect is not the case). But what is it?

A lot of people who use tight spans are doing so in contexts such as combinatorial optimization and phylogenetic analysis where the metric spaces that they start with are finite. So they don’t treat examples such as arcs, circles, triangles, etc. Has anyone ever computed the tight spans of common geometric shapes?

Author: "leinster (tom.leinster@ed.ac.uk)" Tags: "Geometry"
Date: Friday, 25 Jul 2014 11:10

How can we discuss all the kinds of matter described by the ten-fold way in a single setup?

It’s a bit tough, because 8 of them are fundamentally ‘real’ while the other 2 are fundamentally ‘complex’. Yet they should fit into a single framework, because there are 10 super division algebras over the real numbers, and each kind of matter is described using a super vector space — or really a super Hilbert space — with one of these super division algebras as its ‘ground field’.

Combining physical systems is done by tensoring their Hilbert spaces… and there does seem to be a way to do this even with super Hilbert spaces over different super division algebras. But what sort of mathematical structure can formalize this?

Here’s my current attempt to solve this problem. I’ll start with a warmup case, the threefold way. In fact I’ll spend most of my time on that! Then I’ll sketch how the ideas should extend to the tenfold way.

Fans of lax monoidal functors, Deligne’s tensor product of abelian categories, and the collage of a profunctor will be rewarded for their patience if they read the whole article. But the basic idea is supposed to be simple: it’s about a multiplication table.

The \mathbb{3}-fold way

First of all, notice that the set

\mathbb{3} = \{1, 0, -1\}

is a commutative monoid under ordinary multiplication:

\begin{array}{rrrr} \mathbf{\times} & \mathbf{1} & \mathbf{0} & \mathbf{-1} \\ \mathbf{1} & 1 & 0 & -1 \\ \mathbf{0} & 0 & 0 & 0 \\ \mathbf{-1} & -1 & 0 & 1 \end{array}

Next, note that there are three (associative) division algebras over the reals: \mathbb{R}, \mathbb{C} and \mathbb{H}. We can equip a real vector space with the structure of a module over any of these algebras. We’ll then call it a real, complex or quaternionic vector space.

For the real case, this is entirely dull. For the complex case, this amounts to giving our real vector space V a complex structure: a linear operator i: V \to V with i^2 = -1. For the quaternionic case, it amounts to giving V a quaternionic structure: a pair of linear operators i, j: V \to V with

i^2 = j^2 = -1, \qquad i j = -j i

We can then define k = i j.

The terminology ‘quaternionic vector space’ is a bit quirky, since the quaternions aren’t a field, but indulge me. \mathbb{H}^n is a quaternionic vector space in an obvious way. n \times n quaternionic matrices act by multiplication on the right as ‘quaternionic linear transformations’ — that is, left module homomorphisms — of \mathbb{H}^n. Moreover, every finite-dimensional quaternionic vector space is isomorphic to \mathbb{H}^n. So it’s really not so bad! You just need to pay some attention to left versus right.

Now: I claim that given two vector spaces of any of these kinds, we can tensor them over the real numbers and get a vector space of another kind. It goes like this:

\begin{array}{cccc} \mathbf{\otimes} & \mathbf{real} & \mathbf{complex} & \mathbf{quaternionic} \\ \mathbf{real} & real & complex & quaternionic \\ \mathbf{complex} & complex & complex & complex \\ \mathbf{quaternionic} & quaternionic & complex & real \end{array}

You’ll notice this has the same pattern as the multiplication table we saw before:

\begin{array}{rrrr} \mathbf{\times} & \mathbf{1} & \mathbf{0} & \mathbf{-1} \\ \mathbf{1} & 1 & 0 & -1 \\ \mathbf{0} & 0 & 0 & 0 \\ \mathbf{-1} & -1 & 0 & 1 \end{array}

So:

  • \mathbb{R} acts like 1.
  • \mathbb{C} acts like 0.
  • \mathbb{H} acts like -1.
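For what it’s worth, this correspondence is easy to check mechanically; here is a tiny sketch (mine, not from the post) encoding both tables:

```python
# Tensor table of real/complex/quaternionic vector spaces, and the check
# that it matches multiplication in {1, 0, -1} under R -> 1, C -> 0, H -> -1.
tensor = {
    ('R', 'R'): 'R', ('R', 'C'): 'C', ('R', 'H'): 'H',
    ('C', 'R'): 'C', ('C', 'C'): 'C', ('C', 'H'): 'C',
    ('H', 'R'): 'H', ('H', 'C'): 'C', ('H', 'H'): 'R',
}
code = {'R': 1, 'C': 0, 'H': -1}
decode = {v: k for k, v in code.items()}

for (x, y), z in tensor.items():
    assert decode[code[x] * code[y]] == z
```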

There are different ways to understand this, but a nice one is to notice that if we have algebras A and B over some field, and we tensor an A-module and a B-module (over that field), we get an A \otimes B-module. So, we should look at this ‘multiplication table’ of real division algebras:

\begin{array}{lrrr} \mathbf{\otimes} & \mathbf{\mathbb{R}} & \mathbf{\mathbb{C}} & \mathbf{\mathbb{H}} \\ \mathbf{\mathbb{R}} & \mathbb{R} & \mathbb{C} & \mathbb{H} \\ \mathbf{\mathbb{C}} & \mathbb{C} & \mathbb{C} \oplus \mathbb{C} & \mathbb{C}[2] \\ \mathbf{\mathbb{H}} & \mathbb{H} & \mathbb{C}[2] & \mathbb{R}[4] \end{array}

Here \mathbb{C}[2] means the 2 × 2 complex matrices viewed as an algebra over \mathbb{R}, and \mathbb{R}[4] means the 4 × 4 real matrices.

What’s going on here? Naively you might have hoped for a simpler table, which would have instantly explained my earlier claim:

\begin{array}{lrrr} \mathbf{\otimes} & \mathbf{\mathbb{R}} & \mathbf{\mathbb{C}} & \mathbf{\mathbb{H}} \\ \mathbf{\mathbb{R}} & \mathbb{R} & \mathbb{C} &\mathbb{H} \\ \mathbf{\mathbb{C}} & \mathbb{C} & \mathbb{C} & \mathbb{C} \\ \mathbf{\mathbb{H}} & \mathbb{H} & \mathbb{C} & \mathbb{R} \end{array}

This isn’t true, but it’s ‘close enough to true’. Why? Because we always have a god-given algebra homomorphism from the naive answer to the real answer! The interesting cases are these:

\mathbb{C} \to \mathbb{C} \oplus \mathbb{C}, \qquad \mathbb{C} \to \mathbb{C}[2], \qquad \mathbb{R} \to \mathbb{R}[4]

where the first is the diagonal map a \mapsto (a,a), and the other two send numbers to the corresponding scalar multiples of the identity matrix.

So, for example, if V and W are \mathbb{C}-modules, then their tensor product (over the reals! — all tensor products here are over \mathbb{R}) is a module over \mathbb{C} \otimes \mathbb{C} \cong \mathbb{C} \oplus \mathbb{C}, and we can then pull that back along the diagonal map \mathbb{C} \to \mathbb{C} \oplus \mathbb{C} to get a \mathbb{C}-module.

What’s really going on here?

There’s a monoidal category Alg_{\mathbb{R}} of algebras over the real numbers, where the tensor product is the usual tensor product of algebras. The monoid \mathbb{3} can be seen as a monoidal category with 3 objects and only identity morphisms. And I claim this:

Claim. There is an oplax monoidal functor F : \mathbb{3} \to Alg_{\mathbb{R}} with

\begin{array}{ccl} F(1) &=& \mathbb{R} \\ F(0) &=& \mathbb{C} \\ F(-1) &=& \mathbb{H} \end{array}

What does ‘oplax’ mean? Some readers of the n-Category Café eat oplax monoidal functors for breakfast and are chortling with joy at how I finally summarized everything I’d said so far in a single terse sentence! But others of you see ‘oplax’ and get a queasy feeling.

The key idea is that when we have two monoidal categories C and D, a functor F : C \to D is ‘oplax’ if it preserves the tensor product, not up to isomorphism, but up to a specified morphism. More precisely, given objects x, y \in C we have a natural transformation

F_{x,y} : F(x \otimes y) \to F(x) \otimes F(y)

If you had a ‘lax’ functor this would point the other way, and they’re a bit more popular… so when it points the opposite way it’s called ‘oplax’.

(In the lax case, F_{x,y} should probably be called the laxative, but we’re not doing that case, so I don’t get to make that joke.)

This morphism F_{x,y} needs to obey some rules, but the most important one is that using it twice gives two ways to get from F(x \otimes y \otimes z) to F(x) \otimes F(y) \otimes F(z), and these must agree.

Let’s see how this works in our example… at least in one case. I’ll take the trickiest case. Consider

F_{0,0} : F(0 \cdot 0) \to F(0) \otimes F(0),

that is:

F_{0,0} : \mathbb{C} \to \mathbb{C} \otimes \mathbb{C}

There are, in principle, two ways to use this to get a homomorphism

F(0 \cdot 0 \cdot 0) \to F(0) \otimes F(0) \otimes F(0)

or in other words, a homomorphism

\mathbb{C} \to \mathbb{C} \otimes \mathbb{C} \otimes \mathbb{C}

where remember, all tensor products are taken over the reals. One is

\mathbb{C} \stackrel{F_{0,0}}{\longrightarrow} \mathbb{C} \otimes \mathbb{C} \stackrel{1 \otimes F_{0,0}}{\longrightarrow} \mathbb{C} \otimes (\mathbb{C} \otimes \mathbb{C})

and the other is

\mathbb{C} \stackrel{F_{0,0}}{\longrightarrow} \mathbb{C} \otimes \mathbb{C} \stackrel{F_{0,0} \otimes 1}{\longrightarrow} (\mathbb{C} \otimes \mathbb{C}) \otimes \mathbb{C}

I want to show they agree (after we rebracket the threefold tensor product using the associator).

Unfortunately, so far I have described F_{0,0} in terms of an isomorphism

\mathbb{C} \otimes \mathbb{C} \cong \mathbb{C} \oplus \mathbb{C}

Using this isomorphism, F_{0,0} becomes the diagonal map a \mapsto (a,a). But now we need to really understand F_{0,0} a bit better, so I’d better say what isomorphism I have in mind! I’ll use the one that goes like this:

\begin{array}{ccl} \mathbb{C} \otimes \mathbb{C} &\to& \mathbb{C} \oplus \mathbb{C} \\ 1 \otimes 1 &\mapsto& (1,1) \\ i \otimes 1 &\mapsto& (i,i) \\ 1 \otimes i &\mapsto& (i,-i) \\ i \otimes i &\mapsto& (-1,1) \end{array}

This may make you nervous, but it truly is an isomorphism of real algebras, and it sends a \otimes 1 to (a,a). So, unraveling the web of confusion, we have

\begin{array}{rccc} F_{0,0} : & \mathbb{C} &\to& \mathbb{C} \otimes \mathbb{C} \\ & a &\mapsto& a \otimes 1 \end{array}

Why didn’t I just say that in the first place? Well, I suffered over this a bit, so you should too! You see, there’s an unavoidable arbitrary choice here: I could just as well have used a \mapsto 1 \otimes a. F_{0,0} looked perfectly god-given when we thought of it as a homomorphism from \mathbb{C} to \mathbb{C} \oplus \mathbb{C}, but that was deceptive, because there’s a choice of isomorphism \mathbb{C} \otimes \mathbb{C} \to \mathbb{C} \oplus \mathbb{C} lurking in this description.

This makes me nervous, since category theory disdains arbitrary choices! But it seems to work. On the one hand we have

\begin{array}{ccccc} \mathbb{C} &\stackrel{F_{0,0}}{\longrightarrow}& \mathbb{C} \otimes \mathbb{C} &\stackrel{1 \otimes F_{0,0}}{\longrightarrow}& \mathbb{C} \otimes \mathbb{C} \otimes \mathbb{C} \\ a &\mapsto& a \otimes 1 &\mapsto& a \otimes (1 \otimes 1) \end{array}

On the other hand, we have

\begin{array}{ccccc} \mathbb{C} &\stackrel{F_{0,0}}{\longrightarrow}& \mathbb{C} \otimes \mathbb{C} &\stackrel{F_{0,0} \otimes 1}{\longrightarrow}& \mathbb{C} \otimes \mathbb{C} \otimes \mathbb{C} \\ a &\mapsto& a \otimes 1 &\mapsto& (a \otimes 1) \otimes 1 \end{array}

So they agree!

I need to carefully check all the other cases before I dare call my claim a theorem. Indeed, writing up this case has increased my nervousness… before, I’d thought it was obvious.
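For whatever reassurance it’s worth, the isomorphism used above is, on pure tensors, the map a \otimes b \mapsto (a b, a \bar{b}), and its multiplicativity can be machine-checked. Here is a small numerical sketch (my own, not from the post); note that multiplicativity forces the image of i \otimes i to be (-1, 1).

```python
import itertools

def phi(a, b):
    # image of the pure tensor a (x) b under C (x) C -> C (+) C
    return (a * b, a * b.conjugate())

# multiplicativity on pure tensors (which span C (x) C over R):
# phi(a1 (x) b1) * phi(a2 (x) b2) = phi(a1 a2 (x) b1 b2)
samples = [1 + 0j, 1j, 2 - 3j, -1 + 2j]   # Gaussian integers, so arithmetic is exact
for a1, b1, a2, b2 in itertools.product(samples, repeat=4):
    lhs = tuple(u * v for u, v in zip(phi(a1, b1), phi(a2, b2)))
    assert lhs == phi(a1 * a2, b1 * b2)

# the basis table, and a (x) 1 |-> (a, a)
assert phi(1, 1) == (1, 1)
assert phi(1j, 1) == (1j, 1j)
assert phi(1, 1j) == (1j, -1j)
assert phi(1j, 1j) == (-1, 1)
assert phi(2 - 3j, 1) == (2 - 3j, 2 - 3j)
```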

But let me march on, optimistically!

Consequences

In quantum physics, what matters is not so much the algebras \mathbb{R}, \mathbb{C} and \mathbb{H} themselves as the categories of vector spaces — or indeed, Hilbert spaces — over these algebras. So, we should think about the map sending an algebra to its category of modules.

For any field k, there should be a contravariant pseudofunctor

Rep: Alg_k \to Rex_k

where Rex_k is the 2-category of

  • k-linear finitely cocomplete categories,

  • k-linear functors preserving finite colimits,

  • and natural transformations.

The idea is that Rep sends any algebra A over k to its category of modules, and any homomorphism f : A \to B to the pullback functor f^* : Rep(B) \to Rep(A).

(Functors preserving finite colimits are also called right exact; this is the reason for the funny notation Rex. It has nothing to do with the dinosaur of that name.)

Moreover, Rep gets along with tensor products. It’s definitely true that given real algebras A and B, we have

Rep(A \otimes B) \simeq Rep(A) \boxtimes Rep(B)

where \boxtimes is the tensor product of finitely cocomplete k-linear categories. But we should be able to go further and prove Rep is monoidal. I don’t know if anyone has bothered yet.

(In case you’re wondering, this \boxtimes thing reduces to Deligne’s tensor product of abelian categories given some ‘niceness assumptions’, but it’s a bit more general. Read the talk by Ignacio López Franco if you care… but I could have used Deligne’s setup if I restricted myself to finite-dimensional algebras, which is probably just fine for what I’m about to do.)

So, if my earlier claim is true, we can take the oplax monoidal functor

F : \mathbb{3} \to Alg_{\mathbb{R}}

and compose it with the contravariant monoidal pseudofunctor

Rep : Alg_{\mathbb{R}} \to Rex_{\mathbb{R}}

giving a guy which I’ll call

Vect : \mathbb{3} \to Rex_{\mathbb{R}}

I guess this guy is a contravariant oplax monoidal pseudofunctor! That doesn’t make it sound very lovable… but I love it. The idea is that:

  • Vect(1) is the category of real vector spaces

  • Vect(0) is the category of complex vector spaces

  • Vect(-1) is the category of quaternionic vector spaces

and the operation of multiplication in \mathbb{3} = \{1, 0, -1\} gets sent to the operation of tensoring any one of these three kinds of vector space with any other kind and getting another kind!

So, if this works, we’ll have combined linear algebra over the real numbers, complex numbers and quaternions into a unified thing, Vect. This thing deserves to be called a \mathbb{3}-graded category. This would be a nice way to understand Dyson’s threefold way.

What’s really going on?

What’s really going on with this monoid \mathbb{3}? It’s a kind of combination or ‘collage’ of two groups:

  • The Brauer group of \mathbb{R}, namely \mathbb{Z}_2 \cong \{-1,1\}. This consists of Morita equivalence classes of central simple algebras over \mathbb{R}. One class contains \mathbb{R} and the other contains \mathbb{H}. The tensor product of algebras corresponds to multiplication in \{-1,1\}.

  • The Brauer group of \mathbb{C}, namely the trivial group \{0\}. This consists of Morita equivalence classes of central simple algebras over \mathbb{C}. But \mathbb{C} is algebraically closed, so there’s just one class, containing \mathbb{C} itself!

See, the problem is that while \mathbb{C} is a division algebra over \mathbb{R}, it’s not ‘central simple’ over \mathbb{R}: its center is not just \mathbb{R}, it’s bigger. This turns out to be why \mathbb{C} \otimes \mathbb{C} is so funny compared to the rest of the entries in our division algebra multiplication table.

So, we’ve really got two Brauer groups in play. But we also have a homomorphism from the first to the second, given by ‘tensoring with \mathbb{C}’: complexifying any real central simple algebra, we get a complex one.

And whenever we have a group homomorphism \alpha: G \to H, we can make their disjoint union G \sqcup H into a monoid, which I’ll call G \sqcup_\alpha H.

It works like this. Given g, g' \in G, we multiply them the usual way. Given h, h' \in H, we multiply them the usual way. But given g \in G and h \in H, we define

g h := \alpha(g) h

and

h g := h \alpha(g)

The multiplication on G \sqcup_\alpha H is associative! For example:

(g g')h = \alpha(g g') h = \alpha(g) \alpha(g') h = \alpha(g) (g' h) = g(g' h)

Moreover, the element 1_G \in G acts as the identity of G \sqcup_\alpha H. For example:

1_G h = \alpha(1_G) h = 1_H h = h

But of course G \sqcup_\alpha H isn’t a group, since “once you get inside H you never get out”.

This construction could be called the collage of G and H via \alpha, since it’s reminiscent of a similar construction of that name in category theory.
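Here is a sketch (my own) of the collage construction in code, specialized to the threefold case: G = \{1, -1\} (standing in for the Brauer group of \mathbb{R}) and H the trivial group. Brute force confirms the monoid axioms, and the resulting multiplication table is exactly that of \mathbb{3} = \{1, 0, -1\}.

```python
def collage_mul(alpha, g_mul, h_mul):
    """Multiplication on the collage G ⊔_α H, with elements tagged ('G', g) or ('H', h)."""
    def mul(x, y):
        (tx, vx), (ty, vy) = x, y
        if tx == 'G' and ty == 'G':
            return ('G', g_mul(vx, vy))
        a = alpha(vx) if tx == 'G' else vx   # push G-elements into H via alpha
        b = alpha(vy) if ty == 'G' else vy
        return ('H', h_mul(a, b))
    return mul

# Threefold case: G = {1,-1} under multiplication, H = {0} the trivial group.
G = [('G', 1), ('G', -1)]
H = [('H', 0)]
M = G + H
mul = collage_mul(lambda g: 0, lambda a, b: a * b, lambda a, b: 0)

for x in M:
    assert mul(('G', 1), x) == x == mul(x, ('G', 1))       # 1_G is the identity
    for y in M:
        assert mul(x, y) in M and mul(x, y) == mul(y, x)   # closure, commutativity
        for z in M:
            assert mul(mul(x, y), z) == mul(x, mul(y, z))  # associativity

# the table is exactly {1, 0, -1} under ordinary multiplication:
names = {('G', 1): 1, ('G', -1): -1, ('H', 0): 0}
for x in M:
    for y in M:
        assert names[mul(x, y)] == names[x] * names[y]
```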

Question. What do monoid theorists call this construction?

Question. Can we do a similar trick for any field? Can we always take the Brauer groups of all its finite-dimensional extensions and fit them together into a monoid by taking some sort of collage? If so, I’d call this the Brauer monoid of that field.

The \mathbb{10}-fold way

If you carefully read Part 1, maybe you can guess how I want to proceed. I want to make everything ‘super’.

I’ll replace division algebras over \mathbb{R} by super division algebras over \mathbb{R}. Now instead of 3 = 2 + 1 there are 10 = 8 + 2:

  • 8 of them are central simple over \mathbb{R}, so they give elements of the super Brauer group of \mathbb{R}, which is \mathbb{Z}_8.

  • 2 of them are central simple over \mathbb{C}, so they give elements of the super Brauer group of \mathbb{C}, which is \mathbb{Z}_2.

Complexification gives a homomorphism

\alpha: \mathbb{Z}_8 \to \mathbb{Z}_2

namely the obvious nontrivial one. So, we can form the collage

\mathbb{10} = \mathbb{Z}_8 \sqcup_\alpha \mathbb{Z}_2

It’s a commutative monoid with 10 elements! Each of these is the equivalence class of one of the 10 real super division algebras.
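The same brute-force check works for the tenfold collage (again my own sketch, with both groups written additively and \alpha the reduction mod 2):

```python
# The collage Z_8 ⊔_α Z_2, with α: Z_8 -> Z_2 the reduction mod 2.
Z8 = [('R', k) for k in range(8)]   # classes of real super division algebras
Z2 = [('C', k) for k in range(2)]   # classes of complex ones
M = Z8 + Z2

def add(x, y):
    (tx, vx), (ty, vy) = x, y
    if tx == 'R' and ty == 'R':
        return ('R', (vx + vy) % 8)
    a = vx % 2   # alpha is reduction mod 2 on Z_8,
    b = vy % 2   # and the identity on elements already in Z_2
    return ('C', (a + b) % 2)

assert len(M) == 10
for x in M:
    assert add(('R', 0), x) == x == add(x, ('R', 0))       # identity
    for y in M:
        assert add(x, y) in M and add(x, y) == add(y, x)   # closure, commutativity
        for z in M:
            assert add(add(x, y), z) == add(x, add(y, z))  # associativity

# "once you get inside Z_2 you never get out":
assert all(add(x, y)[0] == 'C' for x in M for y in Z2)
```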

I’ll then need to check that there’s an oplax monoidal functor

G : \mathbb{10} \to SuperAlg_{\mathbb{R}}

sending each element of \mathbb{10} to the corresponding super division algebra.

If G really exists, I can compose it with a thing

SuperRep : SuperAlg_{\mathbb{R}} \to Rex_{\mathbb{R}}

sending each super algebra to its category of ‘super representations’ on super vector spaces. This should again be a contravariant monoidal pseudofunctor.

We can call the composite of G with SuperRep

SuperVect : \mathbb{10} \to Rex_{\mathbb{R}}

If it all works, this thing SuperVect will deserve to be called a \mathbb{10}-graded category. It contains super vector spaces over the 10 kinds of super division algebras in a single framework, and says how to tensor them. And when we look at super Hilbert spaces, this setup will be able to talk about all ten kinds of matter I mentioned last time… and how to combine them.

So that’s the plan. If you see problems, or ways to simplify things, please let me know!

Author: "john (baez@math.ucr.edu)" Tags: "Condensed Matter Physics"
Date: Monday, 21 Jul 2014 22:43

The following concept seems to have been reinvented a bunch of times by a bunch of people, and every time they give it a different name.

Definition: Let C be a category with pullbacks and a class of weak equivalences. A morphism f: A \to B is a [insert name here] if the pullback functor f^\ast: C/B \to C/A preserves weak equivalences.

In a right proper model category, every fibration is one of these. But even in that case, there are usually more of these than just the fibrations. There is of course also a dual notion in which pullbacks are replaced by pushouts, and every cofibration in a left proper model category is one of those.

What should we call them?

The names that I’m aware of that have so far been given to these things are:

  1. sharp map, by Charles Rezk. This is a dualization of the terminology flat map used for the dual notion by Mike Hopkins (I don’t know a reference, does anyone?). I presume that Hopkins’ motivation was that a ring homomorphism is flat if tensoring with it (which is the pushout in the category of commutative rings) is exact, hence preserves weak equivalences of chain complexes.

    However, “flat” has the problem of being a rather overused word. For instance, we may want to talk about these objects in the canonical model structure on Cat (where in fact it turns out that every such functor is a cofibration), but flat functor has a very different meaning. David White has pointed out that “flat” would also make sense to use for the monoid axiom in monoidal model categories.

  2. right proper, by Andrei Radulescu-Banu. This is presumably motivated by the above-mentioned fact that fibrations in right proper model categories are such. Unfortunately, proper map also has another meaning.

  3. h-fibration, by Berger and Batanin. This is presumably motivated by the fact that “h-cofibration” has been used by May and Sigurdsson for an intrinsic notion of cofibration in topologically enriched categories, that specializes in compactly generated spaces to closed Hurewicz cofibrations, and pushouts along the latter preserve weak homotopy equivalences. However, it makes more sense to me to keep “h-cofibration” with May and Sigurdsson’s original meaning.

  4. Grothendieck W-fibration (where W is the class of weak equivalences on C), by Ara and Maltsiniotis. Apparently this comes from unpublished work of Grothendieck. Here I guess the motivation is that these maps are “like fibrations” and are determined by the class W of weak equivalences.

Does anyone know of other references for this notion, perhaps with other names? And any opinions on what the best name is? I’m currently inclined towards “W-fibration” mainly because it doesn’t clash with anything else, but I could be convinced otherwise.

Author: "shulman (viritrilbia@gmail.com)" Tags: "Homotopy Theory"
Date: Monday, 21 Jul 2014 20:36

Nope, this isn’t about gender or social balance in math departments, important as those are. On Friday, Glasgow’s interdisciplinary Boyd Orr Centre for Population and Ecosystem Health — named after the whirlwind of Nobel-Peace-Prize-winning scientific energy that was John Boyd Orr — held a one-day conference on diversity in multiple biological senses, from the large scale of rainforest ecosystems right down to the microscopic scale of pathogens in your blood.

Cartoon of John Boyd Orr

I used my talk (slides here) to argue that the concept of diversity is fundamentally a mathematical one, and that, moreover, it is closely related to core mathematical quantities that have been studied continuously since the time of Euclid.

In a sense, there’s nothing new here: I’ve probably written about all the mathematical content at least once before on this blog. But in another sense, it was a really new talk. I had to think very hard about how to present this material for a mixed group of ecologists, botanists, epidemiologists, mathematical modellers, and so on, all of whom are active professional scientists but some of whom haven’t studied mathematics since high school. That’s why I began the talk with an explanation of how pure mathematics looks these days.

I presented two pieces of evidence that diversity is intimately connected to ancient, fundamental mathematical concepts.

The first piece of evidence is a connection at one remove, and schematically looks like this:

maximum diversity \leftrightarrow magnitude \leftrightarrow intrinsic volumes

The left leg is a theorem asserting that when you have a collection of species and some notion of inter-species distance (e.g. genetic distance), the maximum diversity over all possible abundance distributions is closely related to the magnitude of the metric space that the species form.
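For readers who want to see magnitude computed: for a finite metric space with points x_1, …, x_n, form the similarity matrix Z with entries Z_ij = e^{−d(x_i, x_j)}; when the weight equation Z w = (1, …, 1) has a solution, the magnitude |X| is the sum of the weights. A minimal pure-Python sketch (the little Gaussian-elimination solver is only there to keep it self-contained):

```python
import math

def magnitude(dist):
    """Magnitude of a finite metric space, given a symmetric distance
    matrix `dist`: solve Z w = (1,...,1) for the similarity matrix
    Z_ij = exp(-d(x_i, x_j)) and return the sum of the weights w."""
    n = len(dist)
    # augmented matrix [Z | 1] for Gauss-Jordan elimination
    aug = [[math.exp(-dist[i][j]) for j in range(n)] + [1.0]
           for i in range(n)]
    for col in range(n):
        # partial pivoting for numerical stability
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(n):
            if r != col and aug[r][col] != 0.0:
                factor = aug[r][col] / aug[col][col]
                for c in range(col, n + 1):
                    aug[r][c] -= factor * aug[col][c]
    return sum(aug[i][n] / aug[i][i] for i in range(n))

# A one-point space has magnitude 1; two points at distance d have
# magnitude 2 / (1 + e^{-d}), interpolating between "1 point" as
# d -> 0 and "2 points" as d -> infinity.
one_point = magnitude([[0.0]])
two_points = magnitude([[0.0, 1.5], [1.5, 0.0]])
```

The two-point formula is a standard sanity check: the matrix calculation above reproduces it exactly.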

The right leg is a conjecture by Simon Willerton and me. It states that for convex subsets of ℝ^n, magnitude is closely related to perimeter, volume, surface area, and so on. When I mentioned “quantities that have been studied continuously since the time of Euclid”, that’s what I had in mind. The full-strength conjecture requires you to know about “intrinsic volumes”, which are the higher-dimensional versions of these quantities. But the 2-dimensional conjecture is very elementary, and described here.

The second piece of evidence was a very brief account of a theorem of Mark Meckes, concerning fractional dimension of subsets X of ℝ^n (slide 15, and Corollary 7.4 here). One of the standard notions of fractional dimension is Minkowski dimension (also known by other names such as Kolmogorov or box-counting dimension). On the other hand, the rate of growth of the magnitude function t ↦ |tX| is also a decent notion of dimension. Mark showed that they are, in fact, the same. Thus, for any compact X ⊆ ℝ^n with a well-defined Minkowski dimension dim X, there are positive constants c and C such that

c\, t^{\dim X} \leq \left| t X \right| \leq C\, t^{\dim X}

for all t ≫ 0.
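This growth is easy to watch numerically. For finite subsets of the real line there is a known closed formula for magnitude (if I recall it correctly, |X| = 1 + Σ tanh(g/2), summed over the consecutive gaps g; for two points it reproduces 2/(1 + e^{−d})). Applying it to a fine approximation of the unit interval, the log-log slope of t ↦ |tX| comes out near 1, matching the interval's Minkowski dimension. A hedged sketch; the formula and the parameter choices here are my additions, not from the talk:

```python
import math

def magnitude_line(points):
    # closed formula for finite subsets of the real line:
    # |X| = 1 + sum over consecutive gaps g of tanh(g / 2)
    xs = sorted(points)
    return 1.0 + sum(math.tanh((b - a) / 2.0)
                     for a, b in zip(xs, xs[1:]))

# fine approximation of the unit interval [0, 1]
n = 10**5
X = [i / n for i in range(n + 1)]

# growth exponent of t |-> |tX|, estimated on a log-log scale;
# the Minkowski dimension of the interval is 1, so expect slope ~ 1
t1, t2 = 10.0, 1000.0
m1 = magnitude_line([t1 * x for x in X])
m2 = magnitude_line([t2 * x for x in X])
slope = math.log(m2 / m1) / math.log(t2 / t1)
```

Note the hedge built into the theorem: for a *fixed* finite set the magnitude function is bounded (it tends to the cardinality), consistent with finite sets having Minkowski dimension 0; the linear growth only appears for ever-finer approximations of the interval.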

One remarkable feature of the proof is that it makes essential use of the concept of maximum diversity, where diversity is measured in precisely the way that Christina Cobbold and I came up with for use in ecology.

So, work on diversity has already got to the stage where application-driven problems are enabling advances in pure mathematics. This is a familiar dynamic in older fields of application such as physics, but I think the fact that this is already happening in the relatively new field of diversity theory is a promising sign. It suggests that aside from all the applications, the mathematics of diversity has a lot to give pure mathematics itself.

Next April, John Baez and friends are running a three-day investigative workshop on Entropy and information in biological systems at the National Institute for Mathematical and Biological Synthesis in Knoxville, Tennessee. I hope this will provide a good opportunity for deepening our understanding of the interplay between mathematics and diversity (which is closely related to entropy and information). If you’re interested in coming, you can apply online.

Author: "leinster (tom.leinster@ed.ac.uk)" Tags: "Biology"
Date: Thursday, 10 Jul 2014 13:42

Here’s another post asking for a reference to stuff that should be standard. (The last ones succeeded wonderfully, so thanks!)

I should be able to say

C is the symmetric monoidal category with the following presentation: it’s generated by objects x and y and morphisms L : x ⊗ y → y and R : y ⊗ x → y, with the relation

L(1 \otimes R)\alpha_{x,y,x} = R(L \otimes 1)

Here α is the associator. Don’t worry about the specific example: I’m just talking about a presentation of a symmetric monoidal category using generators and relations.
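To make the relation concrete, here is one toy model of this presentation in (Set, ×), my own choice of example rather than anything from the post: take x = y = the integers, with L and R both given by addition. The relation then just expresses associativity of addition (α is the identity in this strict model):

```python
from itertools import product

# toy model in (Set, x): x = y = integers,
# L : x * y -> y and R : y * x -> y both given by addition
L = lambda a, b: a + b   # L(a, b), with a in x, b in y
R = lambda b, a: b + a   # R(b, a), with b in y, a in x

# the relation L(1 (x) R) alpha_{x,y,x} = R(L (x) 1) says the two ways
# of absorbing a, b, c into y agree: a + (b + c) versus (a + b) + c
for a, b, c in product(range(-5, 6), repeat=3):
    assert L(a, R(b, c)) == R(L(a, b), c)
```

Any monoid acting on itself from both sides gives such a model; the presentation carves out exactly this compatible-two-sided-action structure.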

Right now Jason Erbele and I have proved that a certain symmetric monoidal category has a certain presentation. I defined what this meant myself. But this has got to be standard, right?

So whom do we cite?

You are likely to mention PROPs, and that’s okay if they get the job done. But I don’t actually know a reference on describing PROPs by generators and relations. Furthermore, our actual example is not a strict symmetric monoidal category. It’s equivalent to one, of course, but it would be nice to have a concept of ‘presentation’ that specified the symmetric monoidal category only up to equivalence, not isomorphism. In other words, this is ultimately a 2-categorical concept, not a 1-categorical one.

If it weren’t for this, we could use the fact that PROPs are models of an algebraic theory. But our paper is actually about control theory—a branch of engineering—so I’d rather avoid showing off, if possible.

Author: "john (baez@math.ucr.edu)" Tags: "Categories"