*Taken from Random Walks on Finite Quantum Groups by Franz & Gohm.*

To motivate the definition of quantum Markov chains let us start with a reformulation of the classical Markov chain. Let $X$ and $M$ be finite sets. Any map $b:X\times M\to X$ may be called an *action of $M$ on $X$*. Let $\mathcal{B}=F(X)$ and $\mathcal{A}=F(M)$ be the *-algebras of complex functions on $X$ and $M$. For all $m\in M$, we have unital *-homomorphisms $\alpha_m:\mathcal{B}\to\mathcal{B}$ given by $\alpha_m(f):=f(b(\cdot,m))$. They can be put together into a single unital *-homomorphism

$\beta:\mathcal{B}\to\mathcal{B}\otimes\mathcal{A}$, $\beta(f):=\sum_{m\in M}\alpha_m(f)\otimes\mathbb{1}_{\{m\}}$,

where $\mathbb{1}_{\{m\}}$ is the indicator function of $\{m\}$. We get a natural non-commutative generalisation just by allowing the algebras to become non-commutative (by replacing the C*-algebras $\mathcal{B}$ and $\mathcal{A}$ by more general, not necessarily commutative, C*-algebras).

Let $\mathcal{B}$ and $\mathcal{A}$ be unital C*-algebras and $\beta:\mathcal{B}\to\mathcal{B}\otimes\mathcal{A}$ a unital *-homomorphism. Here $\otimes$ is the spatial tensor product. Then we can build up the following iterative scheme for $n\geq 0$:

$j_0:\mathcal{B}\to\mathcal{B}$, $j_0:=\mathrm{id}_{\mathcal{B}}$,

$j_1:\mathcal{B}\to\mathcal{B}\otimes\mathcal{A}$, $j_1:=\beta$, $j_1(b)=b_{(1)}\otimes b_{(2)}$.

(Sweedler's notation $b_{(1)}\otimes b_{(2)}$ stands for the sum of elementary tensors $\sum_i b_{1,i}\otimes b_{2,i}$ and is very convenient for writing formulas.)

$j_n:\mathcal{B}\to\mathcal{B}\otimes\mathcal{A}^{\otimes n}$, $j_n:=(j_{n-1}\otimes\mathrm{id}_{\mathcal{A}})\circ\beta$,

$j_n(b)=j_{n-1}(b_{(1)})\otimes b_{(2)}$.

Clearly all the $j_n$ are unital *-homomorphisms ($n\geq 0$). If we want to have an algebra which includes all their ranges we can form the infinite tensor product $\hat{\mathcal{A}}:=\mathcal{B}\otimes\bigotimes_{1}^{\infty}\mathcal{A}$ (the closure of the union of all $\mathcal{B}\otimes\mathcal{A}^{\otimes n}$ with the natural inclusions $\mathcal{B}\otimes\mathcal{A}^{\otimes n}\hookrightarrow\mathcal{B}\otimes\mathcal{A}^{\otimes(n+1)}$, $x\mapsto x\otimes 1$) and then $j_n:\mathcal{B}\to\hat{\mathcal{A}}$ for all $n\geq 0$.
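For commutative algebras the iterates can be computed concretely. The following is a minimal sketch (the data $X=\mathbb{Z}_3$, $M=\{0,1\}$ and the action $b(x,m)=(x+m)\bmod 3$ are my own hypothetical example, not from the text): an element of $F(X)\otimes F(M)^{\otimes n}$ is represented as a function of $(x,m_1,\dots,m_n)$, and the recursion $j_n=(j_{n-1}\otimes\mathrm{id})\circ\beta$ turns out to evaluate $f$ along the trajectory of the action.

```python
from itertools import product

# Hypothetical classical data: X = Z_3, M = {0, 1}, action b(x, m) = (x + m) mod 3.
X, M = [0, 1, 2], [0, 1]

def b(x, m):
    return (x + m) % 3

def alpha(m, f):
    # alpha_m(f) = f(b(., m)), a *-homomorphism of F(X)
    return lambda x: f(b(x, m))

def j(n, f):
    # j_n = (j_{n-1} (x) id) o beta; an element of F(X) (x) F(M)^(x n) is
    # a function of (x, m_1, ..., m_n).  Evaluating beta(f)'s F(M)-leg at
    # the last argument m_n leaves alpha_{m_n}(f) for j_{n-1}.
    if n == 0:
        return f
    return lambda x, *ms: j(n - 1, alpha(ms[-1], f))(x, *ms[:-1])

def trajectory(x, ms):
    # b(...b(b(x, m_1), m_2)..., m_n)
    for m in ms:
        x = b(x, m)
    return x

f = lambda x: x * x + 1.0          # an arbitrary function on X
for x in X:
    for ms in product(M, repeat=3):
        assert j(3, f)(x, *ms) == f(trajectory(x, ms))
print("j_n(f)(x, m_1, ..., m_n) = f(b(...b(x, m_1)..., m_n)) verified")
```

So in the classical picture $j_n(f)$ is just $f$ read off at the endpoint of the path driven by $m_1,\dots,m_n$.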

Denote by $\sigma$ the right shift on $\hat{\mathcal{A}}$, i.e. $\sigma(b\otimes a_1\otimes a_2\otimes\cdots):=b\otimes 1\otimes a_1\otimes a_2\otimes\cdots$. Using this we can also write

$j_n=\alpha^n\circ j_0$, $n\geq 0$,

where $\alpha:\hat{\mathcal{A}}\to\hat{\mathcal{A}}$ is a unital *-homomorphism given by

$\alpha:=(\beta\otimes\mathrm{id})\circ\sigma$, $\alpha(b\otimes a_1\otimes a_2\otimes\cdots)=\beta(b)\otimes a_1\otimes a_2\otimes\cdots$,

i.e. by applying the shift, we first obtain $b\otimes 1\otimes a_1\otimes a_2\otimes\cdots$ and then interpret $\beta\otimes\mathrm{id}$ as the operation which replaces $b\otimes 1$ by $\beta(b)$. We may interpret $\alpha$ as a kind of time evolution producing $j_0,\ j_1=\alpha\circ j_0,\ j_2=\alpha^2\circ j_0,\ldots$

To do probability theory, consider states $\psi$ on $\mathcal{B}$ and $\lambda$ on $\mathcal{A}$ and form product states

$\psi\otimes\lambda^{\otimes n}$ on $\mathcal{B}\otimes\mathcal{A}^{\otimes n}$

for $n\geq 0$ (in particular, for $n=\infty$, the infinite product state $\psi\otimes\lambda\otimes\lambda\otimes\cdots$ on $\hat{\mathcal{A}}$, which we call $\Psi$). Now we can think of the $j_n$ as non-commutative random variables with distributions $\Psi\circ j_n$, and $(j_n)_{n\geq 0}$ is a non-commutative stochastic process. We call $\psi$ the *initial state* and $\lambda$ the *transition state*.

In order to analyse this process, we define for $n\geq 0$ the linear maps:

$Q_n:\hat{\mathcal{A}}\to\mathcal{B}\otimes\mathcal{A}^{\otimes n}$,

$Q_n:=\mathrm{id}_{\mathcal{B}\otimes\mathcal{A}^{\otimes n}}\otimes\lambda\otimes\lambda\otimes\cdots$.

In particular, $Q_0:\hat{\mathcal{A}}\to\mathcal{B}$, $Q_0(b\otimes a_1\otimes a_2\otimes\cdots)=b\,\lambda(a_1)\,\lambda(a_2)\cdots$.

Such maps are often called *slice maps*. From a probabilistic point of view, it is common to refer to idempotent norm-one (completely) positive maps onto a C*-subalgebra as (non-commutative) conditional expectations. Clearly the slice map $Q_n$ is a conditional expectation (with its range $\mathcal{B}\otimes\mathcal{A}^{\otimes n}$ embedded into $\hat{\mathcal{A}}$ by $x\mapsto x\otimes 1\otimes 1\otimes\cdots$) and it has the additional property of preserving the state $\Psi$, i.e. $\Psi\circ Q_n=\Psi$.
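In the commutative toy model the slice map and its state-preservation are a short direct computation. A sketch, with hypothetical finite data of my own choosing ($X$, $M$, weights `psi`, `lam` standing in for the states):

```python
# Hypothetical finite model: psi and lam are probability weights playing
# the roles of the states on F(X) and F(M).
X, M = [0, 1, 2], [0, 1]
psi = {0: 0.5, 1: 0.3, 2: 0.2}    # initial state on F(X)
lam = {0: 0.25, 1: 0.75}          # transition state on F(M)

# slice map Q = id (x) lam : F(X) (x) F(M) -> F(X); an element of the
# tensor product is represented as a function F of (x, m).
def Q(F):
    return lambda x: sum(lam[m] * F(x, m) for m in M)

# product state Psi = psi (x) lam
def Psi(F):
    return sum(psi[x] * lam[m] * F(x, m) for x in X for m in M)

F = lambda x, m: (x + 1) * (2 * m - 1)    # arbitrary test element
lhs = sum(psi[x] * Q(F)(x) for x in X)    # psi(Q(F))
assert abs(lhs - Psi(F)) < 1e-12          # state preservation: Psi o Q = Psi

# Q is idempotent once its range is re-embedded: Q(Q(F) (x) 1) = Q(F)
G = lambda x, m: Q(F)(x)                  # Q(F) (x) 1
assert all(abs(Q(G)(x) - Q(F)(x)) < 1e-12 for x in X)
print("slice map is a state-preserving conditional expectation (toy check)")
```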

## Proposition 2.1: The Markov Property

*$Q_n\circ j_{n+1}=j_n\circ T_\lambda$ for all $n\geq 0$, where $T_\lambda:\mathcal{B}\to\mathcal{B}$,*

*$T_\lambda:=(\mathrm{id}_{\mathcal{B}}\otimes\lambda)\circ\beta$.*

### Proof

We interpret this as a Markov property of the process $(j_n)_{n\geq 0}$. Note that if there are state-preserving conditional expectations $P_n$ onto $j_n(\mathcal{B})$ and $P_{[0,n]}$ onto the algebraic span of $j_0(\mathcal{B}),\ldots,j_n(\mathcal{B})$, then because $P_n$ is dominated by $P_{[0,n]}$ and $P_{[0,n]}$ is dominated by $Q_n$, we get

$P_{[0,n]}\circ j_{n+1}=P_n\circ j_{n+1}$. (*Markov Property*)

The reader should check that for commutative algebras, this is the usual Markov property of classical probability.
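For commutative algebras this can be checked mechanically. A sketch (the data $X=\mathbb{Z}_3$, $M=\{0,1\}$, action $b$ and weight `lam` are my own hypothetical example; the identity $Q_n\circ j_{n+1}=j_n\circ T_\lambda$, in the notation of Franz & Gohm, is the one being tested):

```python
from itertools import product

# Hypothetical classical data: X = Z_3, M = {0, 1}, b(x, m) = (x + m) mod 3,
# lam a probability weight playing the role of the transition state.
X, M = [0, 1, 2], [0, 1]
lam = {0: 0.25, 1: 0.75}

def b(x, m):
    return (x + m) % 3

def j(n, f):
    # j_n(f) as a function on X x M^n: evaluate f along the trajectory
    def walk(x, ms):
        for m in ms:
            x = b(x, m)
        return x
    return lambda x, *ms: f(walk(x, ms))

def T(f):
    # transition operator T_lam = (id (x) lam) o beta
    return lambda x: sum(lam[m] * f(b(x, m)) for m in M)

def Q(n, F):
    # Q_n slices away the (n+1)-st tensor leg with lam
    return lambda x, *ms: sum(lam[m] * F(x, *ms, m) for m in M)

f = lambda x: float(x == 2)       # indicator of a point, a basis element
n = 2
lhs = Q(n, j(n + 1, f))           # Q_n o j_{n+1}
rhs = j(n, T(f))                  # j_n o T_lam
for x in X:
    for ms in product(M, repeat=n):
        assert abs(lhs(x, *ms) - rhs(x, *ms)) < 1e-12
print("Markov property Q_n o j_{n+1} = j_n o T_lam verified on the toy model")
```

The point of the check: averaging the newest step with $\lambda$ is the same as never taking it and applying the transition operator instead, which is exactly the classical one-step Markov property.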

Suppose that $X$ is a finite set and consider the C*-algebra $F(X)$ of complex-valued functions on $X$. This algebra has basis $\{\mathbb{1}_{\{x\}}:x\in X\}$. We now construct the $j_n$, the stochastic operator $T_\lambda$ and a conditional expectation.

Now we need a unital *-homomorphism. Let us try:

, .

It is not difficult to show that this is a unital *-homomorphism.

In this framework the $j_n$ are quite simple:

.

Now let a probability measure be given. We have

.

Now we need to construct a conditional expectation. The following seems reasonable:

.

I'm hoping to show that for commutative algebras the Markov property above is the correct one. Here I've started with a commutative algebra, and showed that the Markov property holds. However I don't think I have the correct conditional expectation. Even if I do find the correct one, I also need to show that *classical* Markov chains satisfy this quantum Markov property. In the next paragraph there is a hint of how we might do this. Take a classical Markov chain with its stochastic operator. In the classical situation the stochastic operator can tell us a lot about the random variables; hopefully in the quantum case also!

So, in hoping to find the correct conditional expectation for classical Markov chains, define

.

We should be able to get something out of this by looking at how these maps are connected. This suggests to me that we can look at the Markov property:

.

With this prescription we should be able to write down a map that satisfies the Markov property. Following this procedure I have derived

… this isn't linear. Does one determine the other, and vice versa? If they are allowed to be chosen independently of each other, then things don't seem to work. We must establish a link between the two. To try and get linearity we have the following.

As unlikely as it seems, this is additive and I’m going to extend to linearity by allowing homogeneity. In fact this is easier written as

.

However this implies something which doesn't seem correct… Well, thankfully I'm doing random walks on groups, as that should be perfectly straightforward; yet I can't seem to find a suitable map to do the job at all. If I had thought for a moment this shouldn't have been a surprise: it is when we have a group action that we can write down the classical version in this way. Suppose again that we have an action of a finite group on our set:

.

It is better if this is induced by a group action. As before this can be encoded as a *coaction*

,

My 'interpretation' of this is as follows. If the action is going to be by a given group element, then the relevant set is the set of points which will be sent to a given point in that case. So it should be very difficult to do what I wanted to do above without a group action somewhere. Can we still do it? Perhaps we can construct a Markov chain which can't be realised in this framework? Consider a two-state Markov chain with transition probabilities $a$ and $b$. I can show after quite a lot of calculation that this can only be done if $a=b$. This suggests that we can only realise those Markov chains which are induced by group actions, and this does not cover *all* Markov chains. Therefore I must disagree with the author and call this (classical) construction a *non-invariant random walk on a finite group (NIRWG)*.
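The two-state obstruction can be seen by brute force: any group element acts on a two-point set either as the identity or as the swap, so the induced transition matrix is $\theta I+(1-\theta)S$, which is symmetric; a chain with $a=P(0\to 1)$ and $b=P(1\to 0)$ is therefore only realisable when $a=b$. A sketch (my own minimal example, with $G=\mathbb{Z}_2$ acting by translation):

```python
# Every element of a group acting on X = {0, 1} acts as the identity or
# the swap.  With a driving probability nu on G = Z_2 acting by
# translation, the induced transition matrix is
#   P[x][y] = sum of nu(g) over g with g.x = y.
def induced_matrix(nu):
    act = lambda g, x: (x + g) % 2          # translation action of Z_2
    return [[sum(nu[g] for g in (0, 1) if act(g, x) == y)
             for y in (0, 1)] for x in (0, 1)]

for p in [0.1 * k for k in range(11)]:
    P = induced_matrix({0: 1 - p, 1: p})
    a, b_ = P[0][1], P[1][0]
    assert abs(a - b_) < 1e-12              # always symmetric: a = b
print("every Z_2-induced two-state chain has P(0 -> 1) = P(1 -> 0)")
```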

We must show therefore that the above Markov property is the same as that of the classical chain when we have a NIRWG. Let $X$ be a finite set and $b:X\times G\to X$ an action of a finite group $G$. Let $\nu$ be a probability measure on $G$ and define a stochastic operator by:

$(T_\nu f)(x)=\sum_{g\in G}\nu(g)\,f(b(x,g))$, for all $f\in F(X)$, $x\in X$.
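As a sanity check that this really is a stochastic operator: in the basis of point indicator functions its matrix has non-negative entries and unit row sums. A sketch (the example, $S_3$ acting on $\{0,1,2\}$ by permutations with a two-point driving measure, is my own):

```python
from itertools import permutations

# Hypothetical example: G = S_3 acting on X = {0, 1, 2} by g.x = g[x],
# nu a driving probability supported on two chosen permutations.
X = [0, 1, 2]
G = list(permutations(X))               # S_3 as tuples
nu = {G[1]: 0.4, G[3]: 0.6}             # a driving probability on G

def T(f):
    # (T_nu f)(x) = sum_g nu(g) f(g.x)
    return lambda x: sum(p * f(g[x]) for g, p in nu.items())

# matrix entries P[x][y] = (T 1_{y})(x): transition probabilities
P = [[T(lambda z, y=y: float(z == y))(x) for y in X] for x in X]
for row in P:
    assert all(entry >= 0 for entry in row)
    assert abs(sum(row) - 1.0) < 1e-12  # each row is a probability vector
print("T_nu is stochastic:", P)
```

Unit row sums follow because, for each fixed $x$, summing over $y$ just totals $\nu$ over the whole group.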

Now the unital *-homomorphisms are given by

.

We also have a transition operator

Now let us check what the transition state is given by. It's not easy (and possibly not unique) but we have

This seems to suggest that an action of a group on a set together with a state can define a classical non-invariant random walk, but that in the other direction the data may not be unique. Hmmm: can this be inverted? At this point we have to ask questions about the comparative sizes of the set and the group. From now on I will make a simplifying assumption about these sizes.

We examine the Markov property for .

Thus in the general case, we say that $(j_n)_{n\geq 0}$ is a *quantum Markov chain* on $\mathcal{B}$. The map $T_\lambda$ is called the *transition operator* of the Markov chain. In the classical case, it can be identified with the transition matrix by choosing indicator functions of single points as a basis, i.e. $\mathbb{1}_{\{x\}}$, $x\in X$. It is an instructive exercise to start with a given transition matrix and to realise the classical Markov chain with the construction above. As indicated, I don't think this is always possible.

Let us consider the random walk on a finite abelian group with stochastic operator induced by the driving probability. The action we will be concerned with shall be left-multiplication (as the group is additive and abelian we can use right-addition; I am considering the action to be 'on' the group):

$b(x,g):=x+g$.

This yields the unital *-homomorphisms given by:

.

They can be put together (encoded?) into a single unital *-homomorphism

, .

Now this homomorphism is linear and the indicator functions form a basis, hence we examine its action on them:

.

.

My reading of this is that if the 'action' turns out to be by a given group element, then the random walk (considering where it started) will be 'at' the corresponding translate.

Now we can examine the stochastic process . We have that , . Now we calculate:

,

,

.

My interpretation here is that the random walk is in a superposition of states: its true state can only be determined when we know which combination of group elements acted. Inductively, we have that

.

Again the construction is clear: the $j_n$ are indeed random variables on the function space, and further we can see that the random walk is the random variable given above.
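For a walk on a finite abelian group the distribution after $n$ steps is the $n$-fold convolution of the driving probability with the initial distribution, and this agrees with iterating the transition matrix. A sketch (the group $\mathbb{Z}_6$ and the particular driving probability are my own choices, not from the text):

```python
# Random walk on Z_6 driven by nu: step x -> x + g with probability nu[g].
N = 6
nu = [0.5, 0.3, 0.0, 0.0, 0.0, 0.2]      # driving probability on Z_6
psi0 = [0.0] * N
psi0[2] = 1.0                             # start deterministically at 2

def convolve(mu, rho):
    # (mu * rho)(k) = sum_j mu[j] rho[(k - j) mod N]
    return [sum(mu[j] * rho[(k - j) % N] for j in range(N)) for k in range(N)]

# transition matrix of the walk: P[x][y] = nu[y - x]
P = [[nu[(y - x) % N] for y in range(N)] for x in range(N)]

def apply_P(psi):
    # one step: (psi P)(y) = sum_x psi[x] P[x][y]
    return [sum(psi[x] * P[x][y] for x in range(N)) for y in range(N)]

dist_matrix, dist_conv = psi0, psi0
for _ in range(4):
    dist_matrix = apply_P(dist_matrix)
    dist_conv = convolve(dist_conv, nu)

assert all(abs(a - c) < 1e-12 for a, c in zip(dist_matrix, dist_conv))
assert abs(sum(dist_matrix) - 1.0) < 1e-12
print("after 4 steps: matrix power and convolution power agree")
```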

Whoops: most of the next paragraph doesn't make sense. I would define the states linearly by:

Eh, actually this fixes the next paragraph!

For the probability theory, we might look at the probability measures. For these to be states they need to be norm-one, positive elements in the dual of the algebra. When the algebra is normed by the supremum norm, the probability measures sit in the dual space with norm one. Also, as we know that the positive elements of the algebra are those functions that are real-valued and non-negative, the probability measures clearly take non-negative values on them, so the probability measures are indeed states.

I'd like to take the initial state to correspond to a walk starting deterministically at a point, but this doesn't make a whole pile of sense at the moment, so we'll just stick with what we have. So the distribution of the walk is given as above.

Now for the slice maps. This is where I am confused. My interpretation so far is that the first factor describes the current state, and the $n$th element of the tensor product accounts for the action at step $n$. The definition of the slice maps is very much at odds with this (probably false) interpretation. It's pretty clear that the slice maps act on the ranges of the $j_n$. Considering the simplest case:

.

Hmmm — it seems as if the actions are the wrong way around. Maybe we can do them the other way around?

Well anyway, let us consider the map: what does that look like? It acts as follows:

.

This does seem to encode the walk that begins at . Perhaps I am doing things slightly wrong — but looking at the “right-shift” formulation, the same problems emerge…

Now we look at the Markov property. Again we'll just consider a simple case. On the left-hand side we have:

.

Now for the right-hand side. Firstly

.

Now we must hit this with the slice map; I'm happy enough that the two sides are equal. Now we need to endorse the interpretation that this is the usual Markov property of classical probability. We can take this to be as simple as:

…

Analogous to a classical formula, we can also derive the following semigroup property for the transition operators from the Markov property. It is one of the main reasons why Markov chains are easier to handle than more general processes.

## Corollary 2.2: Semigroup Property

$Q_0\circ j_n=T_\lambda^n$ for all $n\geq 0$.
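In the commutative toy model the semigroup property $Q_0\circ j_n=T_\lambda^n$ (in the notation of Franz & Gohm) reduces to averaging every tensor leg with $\lambda$. A sketch, reusing my hypothetical data $X=\mathbb{Z}_3$, $M=\{0,1\}$, action $b$ and weight `lam`:

```python
from itertools import product

# Hypothetical classical data: X = Z_3, M = {0, 1}, b(x, m) = (x + m) mod 3.
X, M = [0, 1, 2], [0, 1]
lam = {0: 0.25, 1: 0.75}

def b(x, m):
    return (x + m) % 3

def prod(vals):
    out = 1.0
    for v in vals:
        out *= v
    return out

def j(n, f):
    # j_n(f) evaluates f along the trajectory of the action
    def walk(x, ms):
        for m in ms:
            x = b(x, m)
        return x
    return lambda x, *ms: f(walk(x, ms))

def T(f):
    # transition operator T_lam = (id (x) lam) o beta
    return lambda x: sum(lam[m] * f(b(x, m)) for m in M)

def Q0(n, F):
    # Q_0 slices away all n tensor legs with the transition state lam
    return lambda x: sum(F(x, *ms) * prod(lam[m] for m in ms)
                         for ms in product(M, repeat=n))

f = lambda x: float(x == 1)
n = 3
Tn = f
for _ in range(n):
    Tn = T(Tn)                        # T_lam^n applied to f
lhs = Q0(n, j(n, f))                  # (Q_0 o j_n)(f)
assert all(abs(lhs(x) - Tn(x)) < 1e-12 for x in X)
print("semigroup property Q_0 o j_n = T_lam^n verified on the toy model")
```

Both sides are a sum over length-$n$ paths weighted by $\prod_k\lambda(m_k)$, which is exactly the $n$-step transition probability.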

It might be useful to prove this in the commutative case for a random walk on a finite group. Both maps act on the function algebra and, as both are linear, it suffices to prove the result for the indicator functions. As per usual, this is given by

.

Now, as far as I am concerned, the action of on this is unclear: acts on …

On the other hand, it's pretty easy to show what the other side is given by:

I can't reconcile this exactly, but if it can act like this then there is no problem.

Finally we note that if $\psi\circ T_\lambda=\psi$, then $\Psi\circ\alpha=\Psi$.

Suppose that $\psi\circ T_\lambda=\psi$. Let an element of $\hat{\mathcal{A}}$ be given; then there exist elementary tensors such that

.

Now apply the product state :

.

Now embed this element into $\hat{\mathcal{A}}$. Then

,

and the claim is proved.

This implies that the Markov chain is stationary, i.e. correlations between the random variables only depend on time differences (???). In particular, the state $\Psi$ is then preserved by the time evolution, i.e. $\Psi\circ\alpha=\Psi$.

Suppose that we have a stationary Markov chain and consider the correlations between the random variables. Now

The construction above is called *coupling to a shift*.
