
Blog@CITA: Perfectly Fulfilling

It’s been two weeks since my internship started at CITA, and if I were to describe the feeling of studying, working, and living here in the simplest terms possible, “perfectly fulfilling” would probably be it.

I’d like to believe that we’ve moved past the phase of excessively praising the grass on the other side of the planet, but there are still many things here that I genuinely appreciate and enjoy, to the point where I can’t help but offer some comments. Toronto, in a way that is hard to pinpoint, allows me to live and work at a pace that simply feels comfortable and fitting. I think this is, to some extent, due to a community atmosphere that is difficult to precisely describe.

If I were to characterize the inclusivity of this community using a power spectral density (PSD) function, my ideal scenario would look something like this: a fairly wide range, allowing me to meet various kinds of people; a sufficiently large mean, indicating a decent level of average human kindness; not too exaggerated a variance, implying an equal distribution without falling into the trap of egalitarianism; and a noticeable peak, indicating some nontrivial commitment to science and education. Of course, words like “sufficiently large” and “fairly wide” are quite ambiguous and scientifically irresponsible, which is why I say that the community atmosphere is not easily described.

The feeling of being “perfectly fulfilled” is similarly imprecise. Maybe one day I’ll describe it as “perfectly relaxed,” but of course, that wouldn’t provide any more knowledge of the feeling. I’m rambling again.

To be honest, I’ve been in a poor state over the past year. Compared to the period from mid-2022 to mid-2023, the past year has been awful. Looking back, I think it’s due to a kind of stubbornness that misses the point. Over the past year, I’ve been too fixated on “getting myself into an ideal state before engaging in work, study, or other things I’m passionate about.” The initial intention, of course, was to handle the things I love with greater efficiency. I used to think this was a given: we should, of course, do the things we love in the best possible state. To exaggerate, “not doing so is a desecration of the things we love.” I once found myself deeply convinced of this exaggerated and flawed notion, to the point of neglecting some important and fundamental truths. Firstly, fixating on whether you’re in the right state doesn’t actually improve your state. Secondly, and more importantly, I believe that the so-called “right state” for research, learning, or other interests is nothing more than an “excited state,” and experience has shown that, at least for me, the most efficient way to reach this excited state is to directly engage in research, learning, or the other interests themselves. You might think I’m stating the obvious, because this is indeed so simple that it contains nothing particularly insightful. But humans are such peculiar creatures that sometimes they forget these simple truths, and when they do, they might forget them for an entire year.

Zheng

Aug 13, Toronto


Notes on Group Theory (unfinished)

The following file, written in Markdown, can be opened with Obsidian and may be converted to LaTeX code. These notes are based on Wu-Ki Tung’s book *Group Theory in Physics*.

# INTRODUCTION
Check [[#Appendix I]] for notational conventions; check [[#Appendix II]] for a summary of linear vector space theory.
## 1.1 Particle on A One-Dimensional Lattice
### Translational Symmetry (discrete)

## 1.2 Representation of the Discrete Translation Operators
## 1.3 Physical Consequences of Translational Symmetry
## 1.4 The Representation Functions and Fourier Analysis

## 1.5 Symmetry Groups of Physics

### General Features of the Application of Group Theory in Physics
**(i)** Since the Hamiltonian is invariant under symmetry operations, the Hamiltonian operator commutes with the symmetry operators. Hence **eigenstates of the Hamiltonian are also basis vectors of representations of the symmetry group**. ([[#1.1 Particle on A One-Dimensional Lattice]])
**(ii)** The representations of the relevant group can be found by general mathematical methods. The results are inherent to the symmetry, and independent of the details of the physical system. ([[#1.2 Representation of the Discrete Translation Operators]])
**(iii)** 
**(iv)** **The representation functions form orthonormal and complete sets in the function space of the solutions to the problem.** 
### Some Commonly Encountered Symmetries in Physics
#### Continuous space-time symmetry
(a) **Translation in space**. Applicable to all *isolated* systems, it is based on the assumption of *homogeneity of space*.
(b) **Translation in time**. Applicable to all isolated systems, it is itself a statement of *homogeneity of time*. (I.e. the behavior of a physical system with the same initial conditions is independent of time. Conservation of energy can be easily derived from this symmetry; check [[#Landau]] [[#Goldstein]])
(c) **Rotation in space**. Applicable to isolated systems, it reflects the *isotropy of space*. (It leads to the conservation of angular momentum, check [[#Landau]] [[#Goldstein]])
(d) **Lorentz symmetry**.  This symmetry embodies the generalization of classical separate symmetries of space and time into a combined single *space-time symmetry*, known as *special relativity*.
#### Discrete space-time symmetry
(**a**) **Space inversion (parity transformation)**. Most interactions in nature obey this symmetry, but the *“weak interaction”* does not.
(**b**) **Time-reversal transformation**. This symmetry is respected by all known forces except in *isolated instances*.
(c) **Discrete translation on a lattice**.
(d) **Discrete rotational symmetry of a lattice**. Also known as ***Point group***.

#### Permutation symmetry 
#### Gauge invariance and Charge conservation
#### Internal symmetry of nuclear and elementary particle physics

# BASIC GROUP THEORY
## 2.1 Basic Definitions and Examples
##### Definition 2.1 : A Group
A set $G=\{a,b,c,…\}$ is said to form a group if there is an operation $\cdot$, called *group multiplication*, which associates any given ordered pair of elements $a,b \in G$ with a well-defined product $a\cdot b \in G$, such that the following conditions are satisfied:
(i) the operation $\cdot$ is **associative**
(ii) there $\exists$ $e\in G$, called the **identity**, such that: $\forall$ $a\in G$, $a \cdot e=a$
(iii) for $\forall a\in G$ there $\exists$ $a^{-1}\in G$, called the **inverse** of $a$, such that: $a\cdot a^{-1}=e$

From these conditions one can derive some elementary consequences such as: (a) $e^{-1}\cdot e=e$; (b) $a^{-1}\cdot a=e$
##### Examples (elementary groups)

##### Definition 2.2 : Abelian Group
A group is said to be Abelian if: for $\forall$ $a,b\in G$,  $a\cdot b=b\cdot a$. (I.e. A group is said to be Abelian if its group multiplication is commutative.)
##### Definition 2.3 : Order of Finite Group
The number of elements of a group is called its order.

## 2.2 Further Examples and Subgroups
##### Further Examples (elementary groups)
###### Dihedral Groups $D_n$
###### Permutation Groups $S_n$ (Symmetric Group)


The reader can later show that the dihedral group $D_3$ and the permutation group $S_3$, which have the same order, are **isomorphic**. (check ref [[#Definition 2.5 Isomorphism]])

##### Definition 2.4 : Subgroup
A subset $H$ of a group $G$ is said to form a subgroup of $G$ if it forms a group under the same group-multiplication law.



## 2.3 The Rearrangement Lemma and the Symmetric (Permutation) Groups $S_n$
Another direct consequence of [[#Definition 2.1 A Group]], or more specifically of the existence of an **inverse** for every element of a group, is the so-called rearrangement lemma, which will be used to derive important results throughout this chapter.
### Rearrangement Lemma
Rearrangement lemma states that: if $p,b,c \in G$ and $pb=pc$, then $b=c$.
**Proof:** Multiply both sides by $p^{-1}$. QED

This result means:
Firstly, if $a,b\in G$ and $a\not=b$, then for $\forall$ $p\in G$, $pa\not= pb$;
Therefore, if one arranges all elements of $G$ in an ordered sequence and then multiplies these elements through by some element $p\in G$, the resulting sequence is just a **rearrangement** of the original sequence. (This is why it’s called the rearrangement lemma.)
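As a concrete check, here is a minimal Python sketch of the lemma (not from the book; the example group $\mathbb{Z}_6$ and all names are illustrative assumptions):

```python
# A minimal sketch of the rearrangement lemma, using the cyclic group
# Z_6 (a hypothetical example group: integers 0..5 under addition mod 6).

G = list(range(6))                   # elements of Z_6
mult = lambda a, b: (a + b) % 6      # group multiplication

for p in G:
    row = [mult(p, g) for g in G]    # left-multiply every element by p
    # same elements, possibly in a new order: a rearrangement of G
    assert sorted(row) == G, (p, row)

print("left multiplication by any p merely rearranges the group elements")
```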

### Permutation Groups
Consider a finite group $G$ whose elements are labeled $g_1,g_2,…,g_n$; multiplying each element $g_i$ by a chosen element
$h$ results in a rearrangement $\{hg_1,hg_2,…,hg_n\}=\{g_{h_1},g_{h_2},…,g_{h_n}\}$, where the new labels $h_i$ range from $1$ to $n$. So by rearrangement, a group $G$ naturally associates any element $h\in G$ with a permutation operation characterized by $\{h_1,h_2,…,h_n\}$ (i.e. with the operation that rearranges the sequence $\{1,2,…,n\}$ into $\{h_1,h_2,…,h_n\}$).

As a **convention**, we denote the **permutation** of $n$ objects labeled in the sequence $\{1,2,…,n\}$ into a new sequence whose labels are $\{p_1,p_2,…,p_n\},p_i\in\{1,2,…,n\}$ by:$$p=\begin{pmatrix}
1&2&…&n \\
p_1&p_2&…&p_n
\end{pmatrix}$$
The set of $n!$ permutation operations on $n$ objects forms a group $S_n$ called the **symmetric group** or simply a **permutation group**.

**Question:** Prove that this set does form a group, and describe how its group multiplication is defined.

**A more compact convention:**
Take $$p=\begin{pmatrix}
1&2&3&4&5 \\
3&5&4&1&2
\end{pmatrix}
$$as an example: read the first column, which says “object $1$ goes to $3$”; then look through the first row to locate the column that begins with $3$, in this case the column “object $3$ goes to $4$”; then repeat the process, locating the column beginning with $4$, which says “object $4$ goes to $1$”. At this point, when we try to repeat the “read-locate-read” procedure we return to the first column, so the procedure terminates and we write $(134)$. Now we apply the “read-locate-read” procedure again starting from the second column, which eventually reads $(25)$. So $p=(134)(25)$. (Readers may notice that we have used this notation in [[#2.2 Further Examples and Subgroups]])
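The “read-locate-read” procedure is easy to mechanize. Below is a small Python sketch (my own illustration, not from the book) that decomposes a permutation, given as a map from each object to its image, into disjoint cycles:

```python
# A small sketch of the "read-locate-read" procedure: decompose a
# permutation, given as a map object -> image, into disjoint cycles.

def cycles(image: dict) -> list:
    seen, result = set(), []
    for start in image:
        if start in seen:
            continue
        cycle, i = [start], image[start]
        while i != start:                # follow i -> image[i] until we return
            cycle.append(i)
            i = image[i]
        seen.update(cycle)
        result.append(tuple(cycle))
    return result

# the example from the text: p sends 1->3, 2->5, 3->4, 4->1, 5->2
print(cycles({1: 3, 2: 5, 3: 4, 4: 1, 5: 2}))   # [(1, 3, 4), (2, 5)]
```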
##### Definition 2.5 : Isomorphism
Two groups $G$ and $G’$ are said to be isomorphic if there is a one-to-one correspondence between their elements that preserves the law of group multiplication. (In other words, there exists a one-to-one correspondence $g_i \in G \leftrightarrow g_i’ \in G’$ such that: if $g_1g_2=g_3$ then $g_1’g_2’=g_3’$.)
##### Examples (isomorphism)

##### Theorem 2.1 :Cayley’s Theorem
Cayley’s theorem states that every group $G$ of order $n$ is **isomorphic** to a **subgroup** of the symmetric group $S_n$.
**Proof:** Recall that the [[#Rearrangement Lemma]] provides us with a correspondence from elements of $G$ (of order $n$) to elements of $S_n$ by:
$$a\in G \rightarrow p_a=\begin{pmatrix}
1&2&…&n\\
a_1&a_2&…&a_n
\end{pmatrix}
$$

where the exact values of $a_i$ are defined in [[#Permutation Groups]]:
$$
g_{a_i}=ag_i
$$

Now we check that this natural correspondence between general group elements and symmetric group elements satisfies [[#Definition 2.5 Isomorphism]], i.e. we check whether it preserves the law of group multiplication. In other words, we check whether for $\forall a,b,c\in G$ such that $ab=c$, $p_ap_b=p_c$ always holds. (Notice we don’t have to manually check whether the corresponding elements of $S_n$ form a subgroup: since **the law of group multiplication is preserved by an isomorphism**, the corresponding elements automatically meet the requirements of a group.)
Obviously:
$$
p_ap_b=\begin{pmatrix}
1 & 2 &…&n\\
a_1 & a_2 &… &a_n
\end{pmatrix}
\begin{pmatrix}
1 & 2 &…&n\\
b_1 & b_2 &… &b_n
\end{pmatrix}
$$

but since a permutation is unchanged by a reordering of its columns, we know:
$$
\begin{pmatrix}
1 & 2 &…&n\\
a_1 & a_2 &… &a_n
\end{pmatrix}=\begin{pmatrix}
b_1 & b_2 &…&b_n\\
a_{b_1} & a_{b_2} &… &a_{b_n}
\end{pmatrix}
$$

so:
$$p_a p_b=
\begin{pmatrix}
1 & 2 &…&n\\
a_{b_1} & a_{b_2} &… &a_{b_n}
\end{pmatrix}
$$
Then we check whether this is equal to $p_c$, which is equivalent to checking whether:$$a_{b_i}=c_i$$Again we use the defining identity of $p_c$:$$g_{c_i}=cg_i=abg_i$$but the RHS is just:
$$abg_i=ag_{b_i}=g_{a_{b_i}}$$
(The last equality comes from the defining identity $ag_i=g_{a_{i}}$ with $i$ replaced by $b_i$.) **QED**
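As an illustration of the correspondence $a\rightarrow p_a$, here is a minimal Python sketch (an assumed example using $\mathbb{Z}_3$; all names are illustrative) that checks $p_ap_b=p_{ab}$ numerically:

```python
# A minimal sketch of the correspondence a -> p_a in Cayley's theorem,
# using Z_3 as an assumed example group: p_a is defined by g_{a_i} = a*g_i,
# and the map preserves group multiplication.

n = 3
G = list(range(n))                       # Z_3 = {0, 1, 2} under addition mod 3
mult = lambda a, b: (a + b) % n

def perm(a):
    # p_a as a tuple: entry i holds a_i, where g_{a_i} = a * g_i
    return tuple(G.index(mult(a, g)) for g in G)

def compose(p, q):
    # (p_a p_b)(i) = a_{b_i}, matching the composition used in the proof
    return tuple(p[q[i]] for i in range(n))

for a in G:
    for b in G:
        assert compose(perm(a), perm(b)) == perm(mult(a, b))
print("a -> p_a preserves group multiplication: an isomorphism onto its image")
```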
##### Theorem 2.2 : A group of prime order $n$ must be isomorphic to $C_n$

**Proof**: not given.
A direct consequence of this theorem: as long as $n$ is prime, there can be only one group (structure) of that order, which must be isomorphic to the cyclic group $C_n$ of the same order.
## 2.4 Classes and Invariant Subgroups 
The elements of a group $G$ can be partitioned into conjugate classes and cosets ([[#Definition 2.10 : Cosets (of a Subgroup)]]); these constitute different ways to sort group elements, which will be useful in studying the structure of the group and in representation theory ([[#GROUP REPRESENTATION]]).
##### Definition 2.6 : Conjugate Elements
An element $b\in G$ is said to be conjugate to another element $a\in G$ if and only if there $\exists$ $p \in G$ such that $b=pap^{-1}$. Denote $a\sim b$.

By definition we can show that conjugation is an ***equivalence relation*** since it is:
(i) **reflexive:** $a\sim a$
(ii) **symmetric:** if $a\sim b$ then $b\sim a$
(iii) **transitive:** if $a\sim b$ and $b\sim c$ then $a\sim c$

It is well known that **any equivalence relation provides a unique way to classify the elements of a set**; since conjugation is an equivalence relation, we can define conjugate classes as follows:
##### Definition 2.7 : Conjugate Class
Elements of a group which are conjugate to each other are said to form a conjugate class.

Directly from its definition we can show that: (i) each element of a group belongs to one and **only** one conjugate class; (ii) the identity element forms a class by **itself**.
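For a concrete picture, the following Python sketch (illustrative only; the composition convention is my own assumption) computes the conjugate classes of $S_3$:

```python
# An illustrative sketch: compute the conjugate classes of S_3, with
# elements stored as tuples of images of (0, 1, 2).

from itertools import permutations

S3 = list(permutations(range(3)))

def mul(p, q):                           # composition: (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    q = [0] * 3
    for i, pi in enumerate(p):
        q[pi] = i
    return tuple(q)

classes, seen = [], set()
for a in S3:
    if a in seen:
        continue
    cls = {mul(mul(p, a), inv(p)) for p in S3}   # all conjugates p a p^{-1}
    seen |= cls
    classes.append(sorted(cls))

for c in classes:
    print(len(c), c)   # class sizes 1, 3, 2: {e}, the 2-cycles, the 3-cycles
```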

Furthermore, we can extend the concept of conjugation from group elements to subgroups:
If $H$ is a subgroup of $G$ and $a\in G$, then $H’=\{aha^{-1};h\in H\}$ is said to be ***a conjugate subgroup*** of $H$. (It can easily be shown that $H’$ does form a group.)
Clearly $H$ and $H’$ are of the same order; what’s more, they are always isomorphic, since conjugation by $a$ preserves the multiplication law.
##### Definition 2.8 : Invariant Subgroup
A subgroup $H$ of a group $G$ is said to be an invariant subgroup if it is identical to all its conjugate subgroups.

It can easily be shown that **a subgroup $H$ is invariant if and only if it contains complete conjugate classes.** (If it does, then for $\forall h_i \in H, a \in G$, we have $ah_ia^{-1}\sim h_i$ and thus $ah_ia^{-1}\in H$, so every conjugate subgroup of $H$ contains exactly the same elements as $H$, and they are all identical. Conversely, if $H$ does not contain all elements of a complete conjugate class, say $b\sim h$ for some $h \in H$ but $b\notin H$, then there $\exists p\in G$ such that $php^{-1}=b$, and the conjugate subgroup $pHp^{-1}$ contains $b\notin H$ and so is not identical to $H$.)
It then follows that **all subgroups of an Abelian group are invariant subgroups**. (Because in an Abelian group every element forms a conjugate class by itself, any subgroup automatically consists of complete conjugate classes.)

**Every group has at least 2 trivial invariant subgroups**, $\{e\}$ and $G$ itself. But if **non-trivial** invariant subgroups exist, the full group can be **“simplified”** or **“factorized”** in ways to be discussed in [[#2.5 Cosets and Factor (Quotient) Groups]]. Consequently, it is natural to adopt the following definition:
##### Definition 2.9 : Simple and Semi-Simple groups
A group is said to be simple if it doesn’t contain any non-trivial invariant subgroups;
A group is said to be semi-simple if it doesn’t contain any abelian invariant subgroups.

## 2.5 Cosets and Factor (Quotient) Groups 
##### Definition 2.10 : Cosets (of a Subgroup)
Let $H$ be a **subgroup** of $G$; then for $\forall$ $p\in G, p\notin H$, the **set** $pH=\{ph_i;h_i\in H\}$ is called a left coset of $H$; similarly, $Hp=\{h_ip;h_i\in H\}$ is called a right coset of $H$.

Notice we use the term **set** because, aside from $H$ itself (obtained if $p\in H$: by the rearrangement lemma $pH=H$), the cosets are not groups; they don’t even contain the identity element.

##### Lemma (cosets are either identical or disjoint)
Two (left) cosets of a subgroup $H$ either coincide completely, or they have no elements in common at all.

**Proof:** Let $pH$ and $qH$ be the two cosets considered. 
**Assume** they have an element in common: $ph_i=qh_j$ where $h_i,h_j\in H$; then $q^{-1}p$ equals $h_jh_i^{-1}$. But $h_jh_i^{-1}\in H$, so by the **rearrangement lemma** we know $q^{-1}pH=h_jh_i^{-1}H=H$. Multiplying through by $q$ from the left gives $pH=qH$. Thus we have proved that if $pH$ and $qH$ have any element in common, they coincide completely.
Conversely, if there are no $h_i,h_j$ satisfying $ph_i=qh_j$, then $pH$ and $qH$ must be completely disjoint. **QED**

A direct consequence of the lemma above: given a subgroup $H$ of $G$, its distinct (thus disjoint, by the lemma) **(left) cosets** partition the elements of the **full** group $G$ into **disjoint** subsets, each containing $n_H$ elements, where $n_H$ is the order of $H$.
(For any left coset $pH\neq H$, $pH$ is disjoint from $H$ by the lemma; and $\forall g_i\in G$, there $\exists p=g_ih_i^{-1} \in G$ such that $ph_i=g_i$, so all elements of $G$ are covered by $H$ and its distinct cosets.) As a consequence of this partition, the order of a subgroup must be an integer factor of the order of the group:
##### Theorem 2.3 : Lagrange’s Theorem
The order of a finite group must be an integer multiple of the order of any of its subgroups.
(Since the distinct **cosets** of a subgroup $H$ partition the full group $G$ disjointly, each containing $n_H$ elements.)
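A quick numerical sketch of this partition (illustrative, not from the book): take $G=S_3$ and the order-2 subgroup $H=\{e,(12)\}$, and list the distinct left cosets:

```python
# A quick numerical sketch: the left cosets of the order-2 subgroup
# H = {e, (01)} partition S_3 into n_G / n_H = 3 disjoint pieces.

from itertools import permutations

S3 = list(permutations(range(3)))
mul = lambda p, q: tuple(p[q[i]] for i in range(3))

H = [(0, 1, 2), (1, 0, 2)]                         # subgroup of order 2

cosets = {tuple(sorted(mul(p, h) for h in H)) for p in S3}
print(len(cosets))                                 # 3 = 6 / 2
for c in cosets:
    print(c)                                       # disjoint, each of size 2
```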


##### Cosets of Invariant Subgroup and their Multiplication
The cosets of **invariant subgroups** are particularly simple and useful. If $H$ is an invariant subgroup, then its left cosets are also right cosets. (Since $pHp^{-1}=H$, it follows $pH=Hp$)

Let’s consider the **cosets** of an invariant subgroup $H$ as the **elements** of a new group. ***The multiplication of two cosets $pH$ and $qH$ is defined as the set containing all products $ph_i qh_j$***. We can show that this set is a **coset** of $H$, because each of its elements can be written as $pq\cdot (q^{-1}h_i q)h_j$ where $(q^{-1}h_iq)h_j \in H$ ($H$ is invariant so $q^{-1}h_i q\in H$, and it is a subgroup so $(q^{-1}h_iq)h_j\in H$). **Following this multiplication law, the product of two cosets of an invariant subgroup $H$ is also a coset of $H$**, i.e.
$$qH\cdot pH=qpH$$and this meets the conditions of group multiplication by: (i) $eH=H$ plays the role of identity element in the “coset group”; (ii) $p^{-1}H$ is the inverse of $pH$; (iii) the multiplication is associative.
In other words, we have the following theorem:
##### Theorem 2.4 : Factor (or quotient) Group $G/H$ where $H$ is an Invariant Subgroup of $G$
If $H$ is an **invariant subgroup** of $G$, then the set of its cosets, along with the multiplication law $pH\cdot qH=pqH$, forms a group called the factor group or quotient group of $G$, denoted $G/H$, of order $n_G / n_H$.
(**Notice** that there is a natural mapping from the group $G$ to its factor group $K=G/H$, namely $g\in G \rightarrow gH$: for $g\in H$, $gH=H$ by the rearrangement lemma; for $g\notin H$, $gH\in G/H$ is a coset of $H$. We check that this mapping is **homomorphic** by checking whether “$\forall g_1,g_2,g_3\in G$ with $g_1 g_2=g_3$, it holds that $g_1H\cdot g_2H=g_3H$”; the answer is clearly **yes** due to [the definition of multiplication of cosets](#Cosets%20of%20Invariant%20Subgroup%20and%20their%20Multiplication) and the rearrangement lemma.)
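The following Python sketch (an assumed example with $G=\mathbb{Z}_6$ and $H=\{0,3\}$; illustrative only) checks that the coset multiplication above is well defined and yields the factor group:

```python
# An illustrative sketch with G = Z_6 and the invariant subgroup H = {0, 3}:
# the element-by-element product of two cosets pH and qH equals the coset
# (pq)H, so the cosets form the factor group G/H (here isomorphic to Z_3).

n = 6
mult = lambda a, b: (a + b) % n
H = [0, 3]                                   # invariant, since G is Abelian

coset = lambda p: frozenset(mult(p, h) for h in H)
cosets = {coset(p) for p in range(n)}        # 3 distinct cosets

for p in range(n):
    for q in range(n):
        prod = frozenset(mult(a, b) for a in coset(p) for b in coset(q))
        assert prod == coset(mult(p, q))     # pH . qH = (pq)H
print(sorted(sorted(c) for c in cosets))     # [[0, 3], [1, 4], [2, 5]]
```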
## 2.6 Homomorphisms 
##### Definition 2.11 : Homomorphism
A mapping from $G$ to $G’$ (not necessarily one-to-one) is said to be a homomorphism if and only if it **preserves group multiplication**.

(Clearly [Definition 2.5 Isomorphism](#Definition%202.5%20Isomorphism) is a special case of homomorphism)
The whole theory of [GROUP REPRESENTATION](#GROUP%20REPRESENTATION) is built on homomorphisms of **abstract groups**(often symmetry groups of physics) to **groups of linear operators or matrices** on vector spaces(spaces of physical states).
##### Theorem 2.5 : If $f$ is a Homomorphism, then $K\subset G\xrightarrow{f}\{e’\}\subset G’$ is an Invariant Subgroup of $G$ and $G/K$ is Isomorphic to $G’$
Let $f$ be a homomorphism from $G$ to $G’$, and denote by $K$ the set of elements of $G$ that are mapped to $e’\in G’$; then (i) **$K$ forms an invariant subgroup of $G$**; (ii) the factor group $G/K$ is **isomorphic** to $G’$.

**Proof** (too long) **QED**

## 2.7 Direct Products (DNF)
Many physically useful groups are direct products of simpler groups; when this is the case, it suffices to know the structure and representations of the smaller groups.
##### Definition 2.12 : Direct Product Group
Let $H_1,H_2$ be subgroups of $G$ with the following properties: (i) $\forall h_1\in H_1, h_2 \in H_2$ holds $h_1h_2=h_2h_1$; (ii) $\forall g\in G$ can be uniquely written as $g=h_1 h_2$ where $h_1\in H_1, h_2\in H_2$.  Then $G$ is said to be the direct product of $H_1$ and $H_2$, denoted $G=H_1\otimes H_2$.
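A minimal sketch of this definition (illustrative; the subgroups below are my chosen example): $\mathbb{Z}_6$ is the direct product of its subgroups $H_1=\{0,3\}\cong\mathbb{Z}_2$ and $H_2=\{0,2,4\}\cong\mathbb{Z}_3$:

```python
# A minimal sketch of Definition 2.12 (illustrative example): Z_6 is the
# direct product of H1 = {0, 3} (~ Z_2) and H2 = {0, 2, 4} (~ Z_3).

n = 6
mult = lambda a, b: (a + b) % n
H1, H2 = [0, 3], [0, 2, 4]

# (i) h1*h2 = h2*h1 holds trivially since Z_6 is Abelian;
# (ii) every g in G factors uniquely as g = h1*h2:
factorizations = {g: [(h1, h2) for h1 in H1 for h2 in H2 if mult(h1, h2) == g]
                  for g in range(n)}
assert all(len(f) == 1 for f in factorizations.values())
print(factorizations)   # each element of Z_6 has exactly one factorization
```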

# GROUP REPRESENTATION
## 3.1 Representations
***Group of Linear Transformations (Group of Operators):*** the multiplication of linear transformations on a linear vector space is **in general** associative, but not necessarily commutative; hence it qualifies as a “group multiplication”. A set of **invertible** linear transformations that is **closed** with respect to operator multiplication satisfies the group axioms; such a set forms a group of linear transformations.
##### Definition 3.1 : Representation of a Group
If there is a [homomorphism](#Definition%202.11%20Homomorphism) from a group $G$ to a group of operators $U(G)$ (the operators acting on a linear vector space $V$), then we say that $U(G)$ forms a representation of the group $G$. The **dimension** of the representation is the dimension of the vector space $V$.
A representation is said to be **faithful** if the homomorphism is also an **isomorphism** (i.e. if the homomorphism is one-to-one); a representation that is not faithful is said to be a **degenerate** representation.

**In other words**, the representation is a **mapping**:
$$g\in G \xrightarrow{U}U(g)$$(where $U(g)$ is an operator on vector space $V$), **such that**:$$U(g_1)U(g_2)=U(g_1g_2)$$

Now consider the group of linear transformations on the vector space; once a basis is chosen, any operator $U(g)$ can be **realized** as a matrix:
$$U(g)\ket{e_i}=\ket{e_j}{D(g)^j}_i$$Recalling that matrices may also form a group, we wonder whether $g\in G \rightarrow D(g)$ is a legitimate representation:
**Proof:** $U(g_1)U(g_2)=U(g_1 g_2)$ implies:
$$U(g_1)U(g_2)\ket{e_i}=U(g_1 g_2)\ket{e_i}$$where the $LHS$ is realized by:$$LHS=U(g_1)\ket{e_j}{D(g_2)^j}_i=\ket{e_k}{D(g_1)^k}_j{D(g_2)^j}_i$$and the $RHS$ is realized by:$$RHS=\ket{e_k}{D(g_1 g_2)^k}_i$$thus we have:$$D(g_1 g_2)=D(g_1)D(g_2)$$since their $(k,i)$ elements are equal for all $k,i$ within the dimension. **QED**
**Thus we conclude that the group of matrices $D(G)$ forms a *matrix representation*.***
##### Theorem 3.1
**(I)** If the group $G$ has a non-trivial [invariant subgroup](#Definition%202.8%20Invariant%20Subgroup) $H$, then any representation of the [factor group](#2.5%20Cosets%20and%20Factor%20(Quotient)%20Groups) $K=G/H$ is also a representation of $G$ itself. This representation must be **degenerate.**
**(II)** Conversely, if $U(G)$ is a degenerate representation of $G$, then $G$ must have at least one invariant subgroup $H$ such that $U(G)$ defines a **faithful** representation of the factor group $G/H$

**Proof:** (I) Given that there is a representation of $K$, there is a homomorphism $kH\xrightarrow{U}U(kH)$ where $kH\in K$ and $k\in G$. But there is also a natural mapping $g\in G \xrightarrow{W} gH \in K$ (notice that if $g\in H$ it maps to $H$ itself, otherwise it maps to a coset of $H$, so for any $g\in G$ this natural mapping always yields an element of the factor group; we proved this natural mapping is **homomorphic** in [2.5 Cosets and Factor (Quotient) Groups](#2.5%20Cosets%20and%20Factor%20(Quotient)%20Groups)). So by combining the two maps, we have constructed a homomorphism: $g\in G\xrightarrow{W}gH\in G/H \xrightarrow{U} U(gH)\in U(K)$. Since $W$ maps every element of $H$ to the single element $eH=H$, this combined homomorphism is not one-to-one, so the representation is degenerate.
(II) From [Theorem 2.5](#Theorem%202.5%20If%20$f$%20is%20a%20Homomorphism,%20then%20$K%20subset%20G%20xrightarrow{f}%20{e’%20}%20subset%20G’$%20is%20an%20Invariant%20Subgroup%20of%20$G$%20and%20$G/K$%20is%20Isomorphic%20to%20$G’$): the kernel of the degenerate homomorphism $g\mapsto U(g)$ is a non-trivial invariant subgroup $H$ of $G$, and $G/H$ is isomorphic to the image $U(G)$, i.e. $U(G)$ defines a **faithful** representation of $G/H$.

A direct consequence of this theorem is that ***all representations (except for the trivial one) of [simple groups](#Definition%202.9%20Simple%20and%20Semi-Simple%20groups) are faithful***.


## 3.2 Irreducible, Inequivalent Representations
For most groups of interest, the possible ways of realizing (representing) the group **are limited and can be enumerated.** In order to enumerate all possible representations, it is important to distinguish **essentially different (i.e. “inequivalent”) representations** from **redundant ones**. What types of redundancy are there? We shall describe **two of them.**
### The first type of redundancy
##### Definition 3.2 : Equivalence of Representations
Two representations of a group $G$ **related by a similarity transformation** are said to be equivalent.

(**Similarity transformation**: Let $U(G)$ be a representation on $V$, and $S$ an invertible operator on $V$; then it’s obvious that $U’(g)=SU(g)S^{-1}$ for $\forall g\in G$ also forms a representation, since $U’$ also preserves the multiplication law. **The relation between $U’(G)$ and $U(G)$ is identical to that between *two matrix realizations of the same operator* with respect to two different bases.**)

Equivalent representations form an ***equivalence class***, so it suffices to know one member of the class; the others can be generated by performing all possible similarity transformations.
To determine whether two representations are equivalent, we simply compare ***characterizations*** of the representations ***that are invariant under similarity transformations***; representations with the same characterization are then equivalent, and vice versa. One such characterization is the ***trace***. So we adopt the following definition:
##### Definition 3.3 : Characters of a Representation
The character of an element $g\in G$ in a representation $U(G)$ is defined to be $\chi(g):=\text{Tr}\,U(g)$. If $D(G)$ is a matrix realization of $U(G)$, then:
$$\chi(g):={D(g)^i}_{i}$$
Then: (i) a given element $g$ has the same character $\chi(g)$ in equivalent representations (because $\chi’(g)=\text{Tr}\,SU(g)S^{-1}=\text{Tr}\,U(g)=\chi(g)$, where the second equality holds by the cyclic property of the trace); (ii) the characters of all elements $g_i$ in the same [conjugate class](#Definition%202.7%20Conjugate%20Class) are the same in a given representation (and, by (i), in all equivalent ones).
 
(**Notice**: when we use the word **”representation”** we usually refer to $U$; when we use the word **“realization”** we usually refer to $D$. For a given representation there exist different realizations, generated by different choices of basis, all with the same set of characters.)
### The second type of redundancy
A second type of redundancy concerns **direct sum representations**.

The **most obvious** form (check the next few paragraphs for general ones) of such a representation is as follows: 
Let $G\rightarrow U(G)$ be a representation on $V_n$.
***If*** **for some choice of basis on $V$,** the realization of $G$ appears in the form:$$D(g)=\begin{pmatrix}
D_1 (g) & O\\
O & D_2 (g)
\end{pmatrix}$$for all $g\in G$, and $D_1$ is $m\times m$, $D_2$ is $(n-m)\times(n-m)$.
***Then*** $D(G)$ is the **direct sum** of $D_1(G)$ and $D_2(G)$ since we can prove:$$D(g_1) D(g_2)=\begin{pmatrix}
D_1 (g_1) D_1 (g_2) & O \\
O & D_2 (g_1) D_2 (g_2)
\end{pmatrix}$$for all $g_1,g_2\in G$. Thus **$D(G)$ does not contain any new information other than those already contained by $D_1(G),D_2(G)$**.

But a **general** direct sum representation $D(G)$ (built from $D_1(G)$ and $D_2(G)$) may not be so easily recognized in the block-diagonal form above. (For example, $D’=SDS^{-1}$ where $D=\text{Diag}\{D_1,D_2\}$ is of course a direct sum representation, and thus redundant, but $D’$ need not be in block-diagonal form.)
**In order to identify a direct sum properly, we first introduce a few useful terms.**
##### Definition 3.4 : Invariant Subspace (of $V$ with respect to $U(G)$)
**Let** $U(G)$ be a representation of $G$ on vector space $V$ and $V_1$ a subspace of $V$; **then** $V_1$ is said to be *an invariant subspace of $V$ with respect to $U(G)$* **if**:$$\text{for} \space \forall \mathbf{x}\in V_1, \space \text{holds} \space U(g)\ket{\mathbf{x}}\in V_1 $$An invariant subspace is said to be **minimal (or proper)** if it does not contain any non-trivial invariant subspace (with respect to $U(G)$).
##### Definition 3.5 : Irreducible Representation
A representation $U(G)$ on $V$ is said to be irreducible if there is no non-trivial [invariant subspace](#Definition%203.4%20Invariant%20Subspace) with respect to $U(G)$, otherwise the representation is said to be reducible.
In the latter case, **if** the **orthogonal complement** of the invariant subspace is also invariant with respect to $U(G)$, then the representation is said to be *fully reducible* or *decomposable*.
(**Orthogonal complement:** if $V_1$ is a subspace of $V$, the orthogonal complement of $V_1$ contains all vectors in $V$ that are orthogonal to all vectors in $V_1$,
i.e. $$V_2=\{\mathbf{w}:\forall \mathbf{v}\in V_1, \mathbf{v}\cdot\mathbf{w}=0\}$$One can show that for a finite-dimensional vector space $V$, the orthogonal complement of a subspace also forms a subspace, and $V$ is the **direct sum** of the two:$$V=V_1\oplus V_2$$)

##### General matrix form of a reducible representation
Let us look at ***the general matrix form of a reducible representation*** $D(G)$ on $V$. Suppose $V_1$ is an invariant subspace of $V$ with respect to $U(G)$, of dimension $n_1$.
We can always choose the basis of $V$, $\{\mathbf{e_1},\mathbf{e_2},…,\mathbf{e_n}\}$, such that the first $n_1$ basis vectors lie in $V_1$. With the basis chosen, we can write the realization as:$$U(g)\ket{e_i}=\ket{e_j}{D(g)^j}_i$$
since $V_1$ is an **invariant subspace**, by definition it must hold that:$$\ket{e_j}{D(g)^j}_i \in V_1 \space \space \text{for} \space i=1,…,n_1 $$but$$\begin{pmatrix}
U(g)\ket{e_1}&… &U(g)\ket{e_n}
\end{pmatrix}=
\begin{pmatrix}
\ket{e_1}&… &\ket{e_n}
\end{pmatrix}
\begin{pmatrix}
D_1(g) & D’(g)\\
D’’(g) & D_2(g)
\end{pmatrix}$$**to satisfy the requirement of invariant subspace**, the matrix $D(g)$ must be of the form:$$\begin{pmatrix}
D_1 (g) &D’(g)\\
O & D_2 (g)
\end{pmatrix}$$where $D_1(g)$ is an $n_1\times n_1$ matrix and $D_2(g)$ is an $(n-n_1)\times(n-n_1)$ matrix, for $\forall g\in G$.

***Furthermore, if the complementary (not necessarily orthogonal) space $V_2$ of the invariant subspace $V_1$ (i.e. the space spanned by $\{\mathbf{e_{n_1+1}},…,\mathbf{e_n}\}$) is also an invariant subspace with respect to $U(G)$***, then $D’(g)=O$ too.


***In conclusion***, if $U(G)$ is a representation of $G$ on $V$ and $V^{\mu}$ is an invariant subspace of $V$ with respect to $U(G)$, then by restricting the action of $U(G)$ to $V^{\mu}$ we obtain a lower-dimensional representation $U^{\mu}(G)$. If $V^{\mu}$ cannot be further reduced, then $U^{\mu}(G)$ is an irreducible representation, and we say $V^{\mu}$ is a proper (or irreducible) invariant subspace with respect to $G$.
## 3.3 Unitary Representations 
##### Definition 3.6 : Unitary Representation
**If** the representation space is an **inner product space**, **and** $U(g)$ is **unitary** ($U^{\dagger}=U^{-1}$) for all $g\in G$, **then** the representation $U(G)$ is said to be a unitary representation.

Because ***symmetry transformations are naturally associated with unitary operators*** (which preserve lengths, angles, and inner products), unitary representations are of essential importance in studying symmetry groups.
##### Theorem 3.2 : Reducible unitary representations are fully reducible
**Proof:** Let $U(G)$ be a reducible unitary representation of $G$ on $V$, and let $V_1$ be an invariant subspace with respect to it. To prove $U(G)$ is fully reducible, we simply need to **prove that the orthogonal complement $V_2$ of $V_1$ is also an invariant subspace with respect to $U(G)$** ([Definition 3.4 ](#Definition%203.4%20Invariant%20Subspace%20(of%20$V$%20with%20respect%20to%20$U(G)$)). To do so we choose an orthonormal basis of $V$ such that the first $n_1$ (the dimension of $V_1$) basis vectors are also basis vectors of $V_1$.
Now since $V_1$ is an invariant subspace, we have $U(g)\ket{e_i}\in V_1$ for $\forall i=1,2,…,n_1$.
Since $U(G)$ is unitary and the basis is orthonormal, we have $\bra{e_i}U^{\dagger}(g)U(g)\ket{e_j}=\bra{e_i}\ket{e_j}=0$ for $\forall j=n_1+1,n_1+2,…,n$ and $i=1,2,…,n_1$. Since the vectors $U(g)\ket{e_i}$, $i=1,…,n_1$, lie in (and in fact span) $V_1$ by invariance, the zero inner product implies that $U(g)\ket{e_j}\in V_2$ for $\forall j=n_1+1,…,n$. Thus $V_2$, spanned by $\ket{e_j}$, $j=n_1+1,…,n$, is also an invariant subspace. **QED**
##### Theorem 3.3 : Every representation $D(G)$ is equivalent to a unitary representation
**Proof:** Don’t care **QED**

Although we restricted the theorem above to finite groups, the proof suggests that it remains valid for any group for which **a summation over group elements can be properly defined and the rearrangement lemma holds**; for instance, the rotation groups in Euclidean spaces, the unitary groups, and the special unitary groups.
##### Corollary: all reducible representations of finite groups are fully reducible.
Now let $V_1$ and $V_2$ be complementary invariant subspaces (recall [Definition 3.5 Irreducible Representation](#Definition%203.5%20Irreducible%20Representation): if the representation is fully reducible, then the complement of an invariant subspace is also an invariant subspace). Let $U_1(G)$ and $U_2(G)$ be **the operators which coincide with $U(G)$ on these subspaces**. Clearly $V=V_1\oplus V_2$ since they are orthogonally complementary, and in the sense of operators we have $U(G)=U_1(G)\oplus U_2(G)$. In this situation we adopt the following definition:
##### Definition 3.7 : Direct Sum Representation 
Given the [situation above](#Corollary%20all%20reducible%20representations%20of%20finite%20groups%20are%20fully%20reducible.), the representation $U(G)$ is said to be the direct sum representation of $U_1(G)$ on $V_1$ and $U_2(G)$ on $V_2$.

If either $V_1$ or $V_2$ is still reducible with respect to $G$, it can be further decomposed; repeating the process, the representation is eventually fully **reduced into a direct sum of irreducible representations**.
## 3.4 Schur’s Lemmas
### Schur’s Lemma 1
Let $U(G)$ be an **irreducible** representation of $G$ on $V$, and $A$ be an arbitrary operator on $V$. **If** $AU(g)=U(g)A$ for $\forall g\in G$, **then** $A$ must be a multiple of the identity operator $E$ on $V$.
(In other words, Schur’s lemma 1 states that the only operators that commute with all operators of an irreducible representation are the multiples of the identity operator of the space.)
#### Consequence of Schur’s lemma 1
##### Theorem 3.4 : Irreducible representations of any Abelian group must be of dimension one.
**Proof:** Let $U(G)$ be an irreducible representation of an **Abelian** group $G$. For any chosen $p\in G$, the Abelian property of $G$ implies $U(p)U(g)=U(g)U(p)$ for $\forall g\in G$. But **Schur’s lemma 1** claims that only multiples of $E$ on $V$ commute with all $U(g)$, so clearly $U(p)=\lambda_p E$, and this applies to every $p\in G$. Hence every one-dimensional subspace of $V$ is invariant with respect to $U(G)$; irreducibility then forces the dimension of $V$ to be one. **QED**
#### Proof of Schur’s lemma 1
(i) Without loss of generality, we consider only unitary representations, thanks to [Theorem 3.3 Every representation $D(G)$ is equivalent to a unitary representation](#Theorem%203.3%20Every%20representation%20$D(G)$%20is%20equivalent%20to%20a%20unitary%20representation); and we consider only Hermitian operators $A$, since if there $\exists$ a non-Hermitian operator $B$ that commutes with all $U(g)$, then there also $\exists$ $B_+=(B+B^{\dagger})/2$ and $B_{-}=(B-B^{\dagger})/2i$, which are Hermitian and also commute with all $U(g)$.
(ii) Assume $A$ commutes with all $U(g)$; we want to prove that $A=\mu E$. We can always choose a basis whose vectors are eigenvectors of $A$, labeled by:$$A\ket{u_{\alpha,i}}=\lambda_i\ket{u_{\alpha,i}}$$The basis may also be chosen to be orthonormal. (NOTICE: here $\alpha$ together with $i$ labels the vectors; for example, $\ket{u_{2,i}}$ is the second basis vector among those with eigenvalue $\lambda_i$.)
(iii) For any given eigenvalue $\lambda_i$ of $A$, denote by $V^i$ the subspace spanned by **all basis vectors that are eigenvectors of $A$ with eigenvalue $\lambda_i$**. **We can show that** $U(g)\ket{u_{\alpha,i}}$ is also an eigenvector of $A$ with eigenvalue $\lambda_i$ (since $AU(g)\ket{u_{\alpha,i}}=U(g)A\ket{u_{\alpha,i}}=\lambda_i U(g)\ket{u_{\alpha,i}}$); **thus** it must be a linear combination of the $\ket{u_{\alpha,i}}, \alpha=1,…$, meaning that it also lies in $V^i$, **so we conclude** that $V^i$ is an invariant subspace with respect to $U(G)$.
(iv) But $U(G)$ is an irreducible representation on $V$, so $V$ cannot contain any non-trivial invariant subspace, thus $V^i=V$, meaning $A$ has only one eigenvalue. Thus $A=\mu E$ **QED**

### Schur’s lemma 2
Let $U(G)$ and $U’(G)$ be two **irreducible** representations of $G$ on vector spaces $V$ and $V’$, and let $A$ be a **linear transformation** from $V’$ to $V$. The lemma states: **If** the linear transformation $A$ satisfies $AU’(g)=U(g)A$ for $\forall g\in G$, **then either** $A=0$ **or** $V$ and $V’$ are isomorphic and $U(G)$ is equivalent to $U’(G)$.

The **proof** of the lemma becomes transparent with the following notation. Denote $U(g): V\rightarrow V, \mathbf{x}\mapsto \mathbf{y}$; denote $U’(g): V’\rightarrow V’, \mathbf{x}’\mapsto \bar{\mathbf{y}}’$; and denote the linear transformation $A: V’\rightarrow V, \mathbf{x}’\mapsto \mathbf{x},\mathbf{y}’\mapsto \mathbf{y}, \bar{\mathbf{y}}’\mapsto \bar{\mathbf{y}}$. Then it’s clear that $AU’(g): V’\rightarrow V, \mathbf{x}’\mapsto \bar{\mathbf{y}}’ \mapsto \bar{\mathbf{y}}$ and $U(g)A: V’\rightarrow V, \mathbf{x}’\mapsto \mathbf{x}\mapsto\mathbf{y}$. So for $AU’(g)=U(g)A$ to hold non-trivially, the two vector spaces must be isomorphic and the two representations equivalent, with the similarity transformation readable from $U(g)=AU’(g)A^{-1}$: the transformation $S$ that makes $SU(G)S^{-1}=U’(G)$ is $S=A^{-1}$.
## 3.5 Orthonormality and Completeness Relations of Irreducible Representation Matrices
Before introducing the central results of group representation theory, we list the essential notation used so far:
##### Essential Notations (GRT)
(A)**Order of group $G$**: $n_G$ [Definition 2.3 Order of Finite Group](#Definition%202.3%20Order%20of%20Finite%20Group)
(B)**Labels for inequivalent irreducible representations**: $\mu,\nu$ [3.2 Irreducible, Inequivalent Representations](#3.2%20Irreducible,%20Inequivalent%20Representations)
(C)**Dimension of the $\mu$-representation**: $n_{\mu}$
(D)**The realization matrix of $g\in G$ in the $\mu$-representation wrt an orthonormal basis**: $D^{\mu}(g)$
(E)**[Character](#Definition%203.3%20Characters%20of%20a%20Representation) of [class](#Definition%202.7%20Conjugate%20Class) $\zeta_i$ elements in $\mu$-representation**: $\chi^{\mu}_i$ (notice the character of a given element $g\in G$ depends indeed only on these two labels, check [Definition 3.3 Characters of a Representation](#Definition%203.3%20Characters%20of%20a%20Representation))
(F)**Number of elements in class $\zeta_i$**: $n_i$
(G)**Number of classes in group $G$**: $n_c$
### Orthonormality of Irreducible Representation Matrices
##### Theorem 3.5 : Orthonormality of irreducible representations
$$\frac{n_{\mu}}{n_G}\sum_{g}{{D^{\dagger}_{\mu}(g)}^k}_i
{{D^{\nu}(g)}^j}_l=\delta^{\mu}_{\nu}\delta^j_i\delta^k_l$$
**Proof**: Too long **QED**
(**Notice:** the relation is called orthonormality because one can regard $\sqrt{\frac{n_{\mu}}{n_G}}{{D^{\mu}(g)}^i}_j$ as the components of a set of $n_G$-dimensional vectors labeled by $(\mu,i,j)$)

Applying the theorem to an **Abelian group $G$**: since the dimension of any irreducible representation of an Abelian group is **one** ([Theorem 3.4](#Theorem%203.4%20Irreducible%20representations%20of%20any%20Abelian%20group%20must%20be%20of%20dimension%20one.)), the representation matrix of $g\in G$ is simply a complex number $d^{\mu}(g)$, and we conclude:
$$n_G^{-1}\sum_{g}d^{\dagger}_{\mu}(g)d^{\nu}(g)=\delta^{\nu}_{\mu}$$By applying this orthonormality relation, we can **construct new irreducible representations of an Abelian group from a given one.**
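As a quick numerical check (illustrative, not from the book), the three one-dimensional irreducible representations of $\mathbb{Z}_3$ are $d^{\mu}(g)=e^{2\pi i\mu g/3}$, and the relation above can be verified directly:

```python
# A quick numerical check (illustrative): the three 1-dimensional irreducible
# representations of Z_3 are d^mu(g) = exp(2 pi i mu g / 3), and they satisfy
# the Abelian orthonormality relation above.

import numpy as np

n_G = 3
d = np.array([[np.exp(2j * np.pi * mu * g / n_G) for g in range(n_G)]
              for mu in range(n_G)])      # row mu, column g

gram = d.conj() @ d.T / n_G               # (1/n_G) sum_g d^dagger_mu(g) d^nu(g)
print(np.round(gram, 10))                 # the identity matrix: delta^nu_mu
```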

##### Corollary 1 : the number of irreducible representations of a finite group is restricted by $\sum_{\mu}n_{\mu}^2\leq n_G$ 
(**Notice:** a remarkable fact is that the inequality is always saturated, which results in the [Completeness of Irreducible Representations](#Completeness%20of%20Irreducible%20Representations); we prove that fact in the next subsection, and here we only prove the inequality.)
**Proof:** From the orthonormality relation, we consider ${{D^{\mu}(g)}^i}_j$ as the $g$-th component of the $(\mu,i,j)$ vector in a set of orthonormal vectors of dimension $n_G$. The number of such vectors with a given $\mu$ is $n_{\mu}^2$, so the total number of vectors in the set is $\sum_{\mu}n_{\mu}^2$. But the number of orthonormal vectors in a vector space of dimension $n_G$ cannot exceed $n_G$. So the corollary holds. **QED**
### Completeness of Irreducible Representations
##### Theorem 3.6 : Completeness of irreducible representations
**(I)** The **dimensionality parameters** $\{n_{\mu}\}$ for the inequivalent irreducible representations satisfies: $$\sum_{\mu} n_{\mu}^2=n_G$$**(II)** The corresponding representation matrices satisfy the **completeness relation**: $$\sum_{\mu,l,k}\frac{n_{\mu}}{n_G}{{D^{\mu}(g)}^l}_k {{D^{\dagger}_{\mu}(g’)}^k}_l=\delta_{gg’}$$
**Proof:** The proof of **(I)** is deferred until [3.7 The Regular Representation](#3.7%20The%20Regular%20Representation) is introduced; once **(I)** is accepted, the completeness relation **(II)** follows automatically due to [[#Theorem II.13]]
## 3.6 Orthonormality and Completeness of Irreducible Characters 
Although the orthonormality and completeness of the **irreducible representation matrices** are very important, the representation matrices entering these relations are themselves **basis-dependent**; the **characters of a representation**, on the other hand, depend only on the choice of irreducible representation. I.e. **all group elements in a given class have the same character in a given representation, regardless of the choice of basis.**

##### Lemma: Sum of $U(g)$ over a class
Let $U^{\mu}(G)$ be an irreducible representation (labeled by $\mu$); then the sum of $U^{\mu}(h)$ over a given class $\zeta_{i}$ (labeled by $i$) is given by: $$\sum_{h\in\zeta_i}U^{\mu}(h)=\frac{n_i}{n_{\mu}}\chi^{\mu}_i E$$(Notice that the character here is labeled by $(\mu,i)$, since the value of a character depends only on the choice of irreducible representation $\mu$ and the class $i$ to which the element belongs.)

**Proof:** Denote the **LHS** of the relation by $A_i$; then by [Definition 2.7 Conjugate Class](#Definition%202.7%20Conjugate%20Class) it follows that $U^{\mu}(g)A_i U^{\mu}(g)^{-1}=A_i$, thus $A_i$ commutes with $U^{\mu}(g)$ for $\forall g\in G$. Then by [Schur’s Lemma 1](#Schur’s%20Lemma%201), $A_i$ must be proportional to $E$. Writing $A_i=c_i E$ and taking the trace of both sides gives $n_i\chi^{\mu}_i=c_i n_{\mu}$, i.e. $c_i=\frac{n_i}{n_{\mu}}\chi^{\mu}_i$. **QED**

##### Theorem 3.7 : Orthonormality and completeness of group characters
**Orthonormality:** $$\sum_i\frac{n_i}{n_G}\chi^{\dagger i}_{\mu}\chi^{\nu}_i=\delta^{\nu}_{\mu}$$ **Completeness:** $$\frac{n_i}{n_G}\sum_{\mu}\chi^{\mu}_i\chi^{\dagger j}_{\mu}=\delta^j_i$$(**Notice:** the relations are named orthonormality and completeness because we can consider $\sqrt{\frac{n_i}{n_G}}\chi^{\mu}_i$ either as **the $i$-th component of the vector labeled by $\mu$ in an orthonormal and complete set of vectors**, or as **the $\mu$-th component of the vector labeled by $i$**.)
**Proof:**  **QED**

**Furthermore**, according to the interpretation above, ${\chi^{\mu}}_i$ is a matrix with (number of iir) rows and $n_c$ columns, and it must be **square** for both relations to hold; this means that the ***number of inequivalent irreducible representations is equal to the number of classes $n_c$ of the group***, i.e. the following corollary holds:
##### Corollary: number of iir is equal to $n_c$
In practice, we arrange ${\chi^{\mu}}_i$ as an $n_c \times n_c$ square matrix with $\mu$ as its row index and $i$ as its column index. **A table of this matrix for any given group $G$ is called its *character table*.**

##### Example: find all iir of non-abelian group $S_3$
(I) First find all **conjugate classes**: 
$S_3$ has three classes: the 1-cycle $\{e\}$, the 2-cycles $\{(12),(23),(31)\}$, the 3-cycles $\{(123),(321)\}$
So by [Corollary number of iir is equal to $n_c$](#Corollary%20number%20of%20iir%20is%20equal%20to%20$n_c$) we know $S_3$ has $3$ inequivalent irreducible representations.
(II) We already know there is a **trivial iir**, namely the identity representation. We denote the identity representation by $\mu=1$.
Applying the **orthonormality relation** and using the condition that **for the identity representation all characters are the same**, we can determine $\chi^1_i=1$.
(III) By the definition of character, **the character of the identity element in any representation is equal to the dimension of the representation**. In addition, we can determine the dimension of each iir from:$$\sum_{\mu}n_{\mu}^2=n_G$$and:$$\text{number of iir}=n_c$$Thus in this case we determine $\chi_1^{\mu}=1,1,2$ for $\mu=1,2,3$.
(IV) Determine the remaining characters by: (a) **columns in the table are orthogonal**; and (b) **row vectors with components $\sqrt{\frac{n_i}{n_G}}\chi^{\mu}_i$ are normalized**, i.e. for the characters in any given row (given $\mu=\mu_0$) of the table, we have $\sum_i \frac{n_i}{n_G}(\chi^{\mu_0}_i)^*\chi^{\mu_0}_i=1$

We have shown how to derive the character table above; the important uses of the table are derived in the following theorems.

First we introduce a very important convention:$$\tilde{\chi}^{\mu}_i:=\sqrt{\frac{n_i}{n_G}}\chi^{\mu}_i$$Then the orthonormality and completeness relations can be rewritten as: $$\bra{\tilde{\chi}^{\mu}}\ket{\tilde{\chi}^{\nu}}=\delta^{\mu\nu}$$and$$\bra{\tilde{\chi}_i}\ket{\tilde{\chi}_j}=\delta_{ij}$$
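As a numerical sketch (my own check, not from the book), we can verify both relations on the $S_3$ character table assembled in steps (I)-(IV) above:

```python
# A numerical sketch of the orthonormality and completeness relations for
# the S_3 character table derived in steps (I)-(IV) above; classes are
# ordered {e}, {2-cycles}, {3-cycles} with sizes n_i = 1, 3, 2.

import numpy as np

n_G = 6
n_i = np.array([1, 3, 2])
chi = np.array([[1,  1,  1],      # mu = 1: identity (trivial) representation
                [1, -1,  1],      # mu = 2: the other 1-dimensional iir
                [2,  0, -1]])     # mu = 3: the 2-dimensional iir

chi_t = chi * np.sqrt(n_i / n_G)              # tilde-chi: rescaled characters

print(np.round(chi_t @ chi_t.conj().T, 10))   # orthonormality: identity matrix
print(np.round(chi_t.conj().T @ chi_t, 10))   # completeness:   identity matrix
```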
### Applications of Character Table
##### Theorem 3.8 : reduction of a given reducible representation
Given a reducible representation $U(G)$, in its reduction into a [Definition 3.7 Direct Sum Representation](#Definition%203.7%20Direct%20Sum%20Representation), the number of times $a_{\mu}$ that the irreducible representation $U^{\mu}(G)$ occurs can be determined by:$$a_{\nu}=\bra{\tilde{\chi}^{\nu}}\ket{\tilde{\chi}}$$where the components $\tilde{\chi}_i$ of $\ket{\tilde{\chi}}$ are $\sqrt{\frac{n_i}{n_G}}\chi_i$, so the equation can be rewritten as: $$a_{\nu}=\sum_i (\chi^{\nu}_i)^*\chi_i\frac{n_i}{n_G}$$
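A small worked sketch of this theorem (illustrative assumptions: $G=S_3$ and the 3-dimensional permutation representation, whose character counts fixed points):

```python
# A worked sketch of Theorem 3.8: reduce the 3-dimensional permutation
# representation of S_3, whose character counts fixed points,
# chi = (3, 1, 0) on the classes {e}, {2-cycles}, {3-cycles}.

import numpy as np

n_G = 6
n_i = np.array([1, 3, 2])
chi_irr = np.array([[1, 1, 1], [1, -1, 1], [2, 0, -1]])   # character table
chi_red = np.array([3, 1, 0])                             # reducible rep

a = chi_irr.conj() @ (chi_red * n_i) / n_G   # a_nu = sum_i (chi^nu_i)* chi_i n_i/n_G
print(a)   # [1. 0. 1.]: the trivial and the 2-dimensional iir, once each
```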
##### Theorem 3.9 : Condition for irreducibility
A necessary and sufficient condition for a representation $U(G)$ (with characters $\chi_i$, $i$ are the labels of the conjugate classes of the group) to be irreducible is that: $$\sum_i n_i|\chi_i|^2=n_G$$i.e. $$|{\tilde{\chi}}|^2=1$$
## 3.7 The Regular Representation (DNF)
The **regular representation** defined on the **group algebra** plays an important role in the development of group representation theory, and some results obtained in this section will be needed in [Representations of the Symmetric Groups](#Representations%20of%20the%20Symmetric%20Groups).

Let $G$ be a finite group with elements $\{g_i,\ i=1,2,…,n_G\}$. Each group multiplication rule $g_ig_j=g_k$ (where $i,j,k$ are specific numbers) can be written as: $$g_ig_j=g_m\Delta^m_{ij}$$(notice that here $m$ is a dummy index and $\Delta^m_{ij}=1 \text{ or } 0$ depending on whether or not $m$ equals $k$).

Obviously, once all multiplication rules of the group are given, the elements ${(\Delta_i)^m}_j$ of the $n_G$ matrices $\Delta_i$ (each $n_G\times n_G$, labeled by $i$) are determined by the equation above.
**For example**, to determine the matrix $\Delta_1$: (i) list all products $g_1g_l=g_m$; (ii) for any given $l=l_0$ with $g_1g_{l_0}=g_{m_0}$, the $l_0$-th column of $\Delta_1$ has its only non-zero element, equal to 1, in the $m_0$-th row; (iii) repeat the process to determine all columns of $\Delta_1$; (iv) repeat the process to determine the other matrices $\Delta_2,\Delta_3,…,\Delta_{n_G}$.
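Here is a minimal Python sketch of this construction (illustrative; the example group $\mathbb{Z}_3$ is my own choice), which also checks, anticipating Theorem 3.10 below, that the $\Delta$ matrices preserve the multiplication rule:

```python
# A minimal sketch of the construction above, using Z_3 as an assumed
# example group: build the regular representation matrices Delta_a from
# the multiplication table via a*g_j = g_m Delta^m_{a j}.

import numpy as np

n = 3
mult = lambda a, b: (a + b) % n     # Z_3 group multiplication

def Delta(a):
    D = np.zeros((n, n), dtype=int)
    for j in range(n):              # column j ...
        D[mult(a, j), j] = 1        # ... has its single 1 in row m = a*g_j
    return D

# anticipating Theorem 3.10: the matrices preserve the multiplication rule
for a in range(n):
    for b in range(n):
        assert (Delta(a) @ Delta(b) == Delta(mult(a, b))).all()
print(Delta(1))   # a non-identity element: all diagonal entries vanish
```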
##### Theorem 3.10 : the Regular Representation
The matrices $\Delta_i, i=1,2,…,n_G$ (with elements ${(\Delta_i)^k}_j$ in the $k$-th row $j$-th column) form a representation of the group $G$, called the regular representation of $G$.

**Proof:** We use the notation: let $a,b,c\in G$ with $ab=c$. The defining equation for the matrix $\Delta_a$ is $ag_k=g_m\Delta^m_{ak}$.
To verify that the matrices $\Delta_a,\Delta_b,…$ satisfy [Definition 3.1 Representation of a Group](#Definition%203.1%20Representation%20of%20a%20Group), we simply check that the multiplication rules are preserved: check whether $\Delta_a\Delta_b=\Delta_c$, i.e. whether ${(\Delta_a)^i}_j{(\Delta_b)^j}_k={(\Delta_c)^i}_k$.
We know that $abg_j=cg_j$ since $ab=c$; expressing $a,b,c$ in terms of the matrices establishes the equation we want to check. **QED**
##### Theorem 3.10 is just an incarnation of [Theorem 2.1 Cayley’s Theorem](#Theorem%202.1%20Cayley’s%20Theorem)
Recall Cayley’s theorem states:$$a\in G \rightarrow p_a=\begin{pmatrix}
1&2&…&n\\
a_1&a_2&…&a_n
\end{pmatrix}$$where the exact values of $a_i$ are defined in [[#Permutation Groups]]:
$$g_{a_i}=ag_i$$
Now the regular representation states:$$a\in G\rightarrow \Delta_a$$It is obvious that the elements of $\Delta_a$ follow:$${(\Delta_a)^k}_m={\delta^k}_{a_m}$$**The relation between the permutation representation (in Cayley’s theorem; we abuse the word “representation” instead of “isomorphism”) and the regular representation is that the subscript $a_m$ in the equation above is determined by $g_{a_m}=ag_m$.**
(To be more specific we take an example here: say $a\in G$ is associated to $p_a\in S_{n_G}$ with: $$a\rightarrow p_a=
\begin{pmatrix}
1&2&3&4\\
3&4&2&1
\end{pmatrix}$$then the regular representation matrix $\Delta_a$ will be: $$a\rightarrow \Delta_a=
\begin{pmatrix}
0&0&0&1\\
0&0&1&0\\
1&0&0&0\\
0&1&0&0
\end{pmatrix}$$)(Notice the example cannot be chosen casually: the $p_a$ associated with any $a$ other than $e$ can have no fixed point, because $ag_i=g_i$ only happens when $a=e$; thus **the regular representation matrix of a non-identity element of a group must have vanishing diagonal elements.**)
##### Significance of Regular Representation
The significance of the regular representation lies in the fact that **(i) all inequivalent irreducible representations of the group are contained in its regular representation**, and **(ii) the number of times each irreducible representation appears is exactly its dimension.**

**Proof of (ii):** The number of times each iir appears in the regular representation of
$G$ can be calculated by [Theorem 3.8 reduction of a given reducible representation](#Theorem%203.8%20reduction%20of%20a%20given%20reducible%20representation), where the characters of the regular representation are easy to read off (from the example above, $\chi^{\mathtt{R}}_{i=e}=n_G$, and the character of any class other than that of the identity is zero, since the corresponding regular representation matrices contain only null diagonal elements). Theorem 3.8 then gives $a^{\mathtt{R}}_{\mu}=\sum_i(\chi^{\mu}_i)^*\chi^{\mathtt{R}}_i\frac{n_i}{n_G}=(\chi^{\mu}_{e})^*=n_{\mu}$, i.e. the number of times iir-$\mu$ appears is the dimension of iir-$\mu$. **QED**

##### Theorem 3.11 Decomposition of Regular Representation
**(I)** The regular representation contains every **inequivalent irreducible representation $\mu$ precisely $n_{\mu}$ times**;
**(II)** We have already made the following claim in [Theorem 3.6 Completeness of irreducible representations](#Theorem%203.6%20Completeness%20of%20irreducible%20representations); we restate it and give its proof in this subsection:$$\sum_{\mu}n_{\mu}^2=n_G$$
**Proof:** (I) Already given in the text above; (II)… **QED**
##### Example (decomposition of regular representation) (DNF)

##### Consequence of [Theorem 3.11 Decomposition of Regular Representation](#Theorem%203.11%20Decomposition%20of%20Regular%20Representation)
One can obtain all inequivalent irreducible representations of any finite group $G$, and read off the dimension $n_{\mu}$ of each $\mu$-representation, by finding the similarity transformation $S$ that makes $\Delta’_a=S\Delta_a S^{-1}$ block-diagonal for all $a\in G$.

In the first part of [Representations of the Symmetric Groups](#Representations%20of%20the%20Symmetric%20Groups), the reduction of the regular representation will be worked out in detail for a non-trivial example: the symmetric group $S_n$.
## 3.8 Direct Product Representations, Clebsch-Gordan Coefficients (DNF)
Vector spaces which occur in physical applications are often **direct products** of smaller vector spaces that correspond to different degrees of freedom of the physical system. We shall define the direct product of two representations, and study **the relation** between **representations of a symmetry group realized on the product space** and **those defined on the component spaces**.

##### Definition 3.8 : Direct Product Space
Let $U$ and $V$ be **inner product vector spaces** and $\{\hat{\mathbf{u}}_i\}$ and $\{\hat{\mathbf{v}}_j\}$ **orthonormal** bases thereof. Then the ***direct product space*** $W=U\times V$ consists of all linear combinations of the orthonormal basis (of $W$) $\{\hat{\mathbf{w}}_k;k=(i,j)\}$, where $\hat{\mathbf{w}}_k$ can be regarded as the **formal product** $\hat{\mathbf{w}}_k=\hat{\mathbf{u}}_i\cdot\hat{\mathbf{v}}_j$. By definition:
(I) $\bra{w^{k’}}\ket{w_k}=\delta^{k’}_k=\delta^{i’}_i \delta^{j’}_j$
(II) $W=\{\mathbf{x};\ket{x}=\ket{w_k}x^k\}$
(III) $\bra{x}\ket{y}=x^{\dagger}_ky^k$ where $x^{\dagger}_k=(x^k)^*$



To each pair of operators $A$ on $U$ and $B$ on $V$, there corresponds a natural **direct product operator** $D=A\times B$ on $W=U\times V$, defined by its action on the **direct-product basis** vectors $\{w_k\}$: $$D\ket{w_k}=\ket{w_{k’}}{D^{k’}}_k \space \space ,\space \space {D^{k’}}_k:={A^{i’}}_i{B^{j’}}_j$$where $k=(i,j)$ and $k’=(i’,j’)$.


We now apply these concepts to the theory of group representations. Let $G$ be a symmetry group of a physical system, and $W$ the **direct-product space of physical solutions** consisting of two sets of degrees of freedom $U,V$. Suppose $D^{\mu}(G)$ and $D^{\nu}(G)$ are representations of $G$ on $U$ and $V$ respectively. Then the operators $D^{\mu\times\nu}(g)=D^{\mu}(g)\times D^{\nu}(g)$ on $W$, $g\in G$, also form a representation of the group $G$.
##### Definition 3.9 : Direct Product Representation
The representation $D^{\mu\times\nu}(G)$ defined above on space $W$ is called the direct product representation of $D^{\mu}(G)$ and $D^{\nu}(G)$.

**Characters of a direct product representation:** it is straightforward to show that: $$\chi^{\mu\times\nu}(g)=\chi^{\mu}(g)\chi^{\nu}(g)$$
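Concretely, with the composite index $k=(i,j)$ flattened in row-major order, the direct product operator is just a Kronecker product. The sketch below (with hypothetical matrices $A,B$) also illustrates the character relation, since the trace of a Kronecker product is the product of the traces:

```python
# A small numerical sketch with hypothetical matrices A, B: numpy's Kronecker
# product realizes D^{k'}_k = A^{i'}_i B^{j'}_j with k = (i, j) flattened in
# row-major order, and its trace illustrates chi^{mu x nu} = chi^mu chi^nu.

import numpy as np

A = np.array([[1., 1.], [0., 2.]])   # operator on U (hypothetical)
B = np.array([[1., 2.], [3., 4.]])   # operator on V (hypothetical)

D = np.kron(A, B)                    # the direct product operator on W = U x V
print(D.shape)                       # (4, 4): dim W = dim U * dim V
print(np.trace(D), np.trace(A) * np.trace(B))   # 15.0 15.0: traces multiply
```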


# Representations of the Symmetric Groups
Symmetric (i.e. permutation) groups and their representations are very important for the following reasons:
(A) [Theorem 2.1 Cayley’s Theorem](#Theorem%202.1%20Cayley’s%20Theorem) states that every finite group of order $n$ is isomorphic to a subgroup of $S_n$.
(B) As we shall see in this chapter, irreducible representations of $S_n$ provide a valuable tool to **analyze the irreducible representations of the important classical continuous groups**, for example GL(m), U(m), and SU(m), **through tensor analysis**.
(C) Permutation symmetry is directly related to physical systems consisting of identical particles.

**In this chapter** we shall construct all irreducible representations of $S_n$ for arbitrary $n$.






# One-dimensional Continuous Groups
Continuous groups consist of group elements which are labeled by one or more **continuous variables**. (Notice that an infinite group is not necessarily continuous, since its labels may be infinite yet discrete.)
A continuous group is said to be one-dimensional if its elements depend on only one continuous variable. In this chapter we introduce the simplest examples: **the group of rotations in a plane, SO(2)**, and **the group of translations in one dimension, $\text{T}_1$**.
The general mathematical framework for studying continuous groups is **the theory of Lie groups**. (Roughly speaking, a Lie group is *an infinite group whose elements can be **parametrized smoothly and analytically***, which requires introducing algebraic and geometric structures beyond the group multiplication.) However, **all known continuous symmetry groups of physical interest are groups of matrices, for which the additional algebraic and geometric structure is already well-defined**; these groups are usually referred to as ***linear Lie groups*** or ***classical Lie groups***.
## 6.1 The Rotation Group SO(2)
Given an orthonormal basis $\{\mathbf{e_1},\mathbf{e_2}\}$ in a plane, the rotation operator through angle $\phi$ shall be denoted by $R(\phi)$; its matrix realization can be found from $$R(\phi)\mathbf{e_i}=\mathbf{e}_j{R(\phi)^j}_i$$but we know that: $$R(\phi)\mathbf{e_1}=\mathbf{e_1}\cos{\phi}+\mathbf{e_2}\sin{\phi}$$and $$R(\phi)\mathbf{e_2}=-\mathbf{e_1}\sin{\phi}+\mathbf{e_2}\cos{\phi}$$So we can read off the realization matrix: $$R(\phi)=
\begin{pmatrix}
\cos{\phi}&-\sin{\phi}\\
\sin{\phi}&\cos{\phi}
\end{pmatrix}$$
The components $x^j$ of any vector $\mathbf{x}$ (which form a column vector $(x^1,x^2)^T$) transform according to the rule:$$
 \mathbf{x'}:=R(\phi)\mathbf{x}\space \space, \space \space x'^j={R(\phi)^j}_ix^i$$
**Notice** that the rotation transformation defined above preserves the norm of $\mathbf{x}$, which corresponds to $R^TR=E$ and implies $(\text{det}R(\phi))^2=1$. **Furthermore**, the explicit matrix realization of $R(\phi)$ shows that $\text{det}R(\phi)=1$, and that the matrix takes the same form for any (equally oriented) orthonormal basis of the plane. **Matrices with unit determinant are said to be *special***; hence these $2\times2$ rotation matrices in a plane are ***special orthogonal***, and they are designated **SO(2)** matrices.
##### Theorem 6.1 : There is a one-to-one correspondence between rotations in a plane and SO(2) matrices
**Extension**: this correspondence also applies to any SO(n) matrices and rotation in $n$-dimensional space.
#####  $R(\phi_1)R(\phi_2)=R(\phi_1+\phi_2)$
This relation can be shown easily from the realization matrices, and it shows that rotations in a plane commute, i.e. the **Abelian** property of the rotation operators in a plane.
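A minimal numerical sketch of this relation (my own check, assuming `numpy`):

```python
import numpy as np

# Check R(phi1) R(phi2) = R(phi1 + phi2) for the explicit 2x2 realization,
# which also exhibits the Abelian property of plane rotations.
def R(phi):
    return np.array([[np.cos(phi), -np.sin(phi)],
                     [np.sin(phi),  np.cos(phi)]])

phi1, phi2 = 0.7, 2.1
assert np.allclose(R(phi1) @ R(phi2), R(phi1 + phi2))
assert np.allclose(R(phi1) @ R(phi2), R(phi2) @ R(phi1))  # Abelian
```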
##### $R(\phi)=R(\phi+2\pi)$
##### Theorem 6.2 : Two-Dimensional Rotation Group
The two-dimensional rotations $\{R(\phi)\}$ form a group, called $\text{R}_2$ or SO(2), with: (i) multiplication rule $R(\phi_1)R(\phi_2)=R(\phi_1+\phi_2)$; (ii) identity element $e=R(0)$; (iii) inverse of any element given by $R(\phi)R(2\pi-\phi)=e$.
**The group elements of SO(2) are labeled by the continuous real variable $\phi$ in the domain $[0,2\pi)$.** This corresponds to all points on the unit circle, which defines the topology of the group parameter space.

Notice that the multiplication rule defined above ensures that SO(2) is **Abelian**.
The parametrization is natural but not unique, since any relabeling $\phi\rightarrow f(\phi)$ with $f(\phi)$ monotonic will serve equally well.

## 6.2 The Generator of SO(2)
The following simple analysis forms the foundation of **the theory of Lie groups**.

Consider an infinitesimal SO(2) rotation by angle $d\phi$. The **differentiability** of $R(\phi)$ requires that the difference between $R(d\phi)$ and $R(0)$ be proportional to $d\phi$ to first order; for convenience we denote: $$R(d\phi)=E-i\,d\phi\, J$$Next consider the difference between $R(\phi+d\phi)$ and $R(\phi)$; it can be expressed through the multiplication law: $R(\phi+d\phi)=R(\phi)R(d\phi)=R(\phi)(E-i\,d\phi\, J)$ or by differentiation: $R(\phi+d\phi)=R(\phi)+d\phi\frac{dR(\phi)}{d\phi}$.
Comparing the two expressions yields the differential equation: $$\frac{dR(\phi)}{d\phi}=-iR(\phi)J$$Solving this differential equation with the boundary condition $R(0)=E$, we obtain: $$R(\phi)=e^{-i\phi J}$$where $J$ is said to be the ***generator of the group***.
##### Theorem 6.3 : Generator of SO(2)
All two-dimensional rotation operators can be expressed in terms of the **operator $J$** as $$R(\phi)=e^{-i\phi J}$$where $J$ is said to be the ***generator of the group***.

##### Significance of [Theorem 6.3 Generator of SO(2)](#Theorem%206.3%20Generator%20of%20SO(2))
The general structure of SO(2) and its representations are, to a large extent, determined by the single **generator $J$**. 
This also provides a first glimpse into the beauty and power of the **theory of Lie groups**, namely: **most important properties of a continuous group are determined by the *local behavior of the group near its identity element*.**
The group multiplication is explicitly satisfied when we express group elements in terms of generator $J$. **Once $J$ is known, the group elements can all be determined.**

However, not all information about the group is contained in the relation $R(\phi)=e^{-i\phi J}$; certain **global** properties of the group, such as $R(\phi)=R(\phi\pm2\pi)$ in this case, are **not contained in the generator-element relation**. These global properties, **mostly topological in nature, also play a role in determining the irreducible representations of the group**, as we shall see in [6.3 Irreducible Representations of SO(2)](#6.3%20Irreducible%20Representations%20of%20SO(2)).

##### Explicit representation of $R(\phi)$
From the matrix realization of $R(\phi)$ we can deduce the matrix of $R(d\phi)$ to first order in $d\phi$:$$R(d\phi)=
\begin{pmatrix}
1&-d\phi\\
d\phi&1
\end{pmatrix}$$then by comparing this with the definition equation of $J$: $R(d\phi)=E-id\phi J$ we can read off the generator $J$: $$J=\begin{pmatrix}
0&-i\\
i&0
\end{pmatrix}$$
**Notice** that $J$ in this case is a **traceless Hermitian** matrix.
It is easy to derive all rotation matrices from the generator-element relation by Taylor-expanding the exponential in terms of $J$: since $J^2=E$, we have $e^{-i\phi J}=E\cos{\phi}-iJ\sin{\phi}$, which reproduces the matrix realization of $R(\phi)$.
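A small numerical check of the generator-element relation (my own sketch; it assumes `numpy` and `scipy` are available):

```python
import numpy as np
from scipy.linalg import expm

# Verify R(phi) = exp(-i phi J) against the explicit rotation matrix.
J = np.array([[0, -1j],
              [1j, 0]])
phi = 1.234
R_from_J = expm(-1j * phi * J)          # real up to rounding errors
R_direct = np.array([[np.cos(phi), -np.sin(phi)],
                     [np.sin(phi),  np.cos(phi)]])
assert np.allclose(R_from_J, R_direct)
```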
## 6.3 Irreducible Representations of SO(2)
**(I)** Consider any representation $U$ of SO(2) on a finite-dimensional vector space $V$. Let $U(\phi)$ denote the representation operator of $R(\phi)$ on that vector space.
Then consider the generator $J$: **the generator $J$ is itself an operator on the plane on which the SO(2) rotations are defined**, and **the homomorphic mapping from group elements (operators on that plane) to operators on $V$ must also map $J$ to an operator on $V$**; we denote this operator by $\bar{J}$ (in Tung's book, still $J$). I.e. $$R(\phi) \xrightarrow{U}U(\phi)\space,\space J\xrightarrow{U}\bar{J}$$Thus when we impose the relation $R(d\phi)=E-i\,d\phi\, J$, its representation operator $U(d\phi)$ must be $U(d\phi)=E-i\,d\phi\, \bar{J}$.
Repeating the derivation of the generator-element relation, we obtain the corresponding relation for the representation operators simply by replacing $R(\phi)$ by $U(\phi)$ and $J$ by $\bar{J}$: $$U(\phi)=e^{-i\phi\bar{J}}$$
**(II)** We may restrict the representation to be unitary without loss of generality (by [Theorem 3.3 Every representation $D(G)$ is equivalent to a unitary representation](#Theorem%203.3%20Every%20representation%20$D(G)$%20is%20equivalent%20to%20a%20unitary%20representation)). To ensure that $U(\phi)$ is unitary, the image $\bar{J}$ of $J$ on the vector space $V$ must be **Hermitian**.

**(III)** Since SO(2) is Abelian, all its irreducible representations must be one-dimensional ([Theorem 3.4 Irreducible representations of any Abelian group must be of dimension one.](#Theorem%203.4%20Irreducible%20representations%20of%20any%20Abelian%20group%20must%20be%20of%20dimension%20one.)). So for any vector $\ket{\alpha}$ in a **minimal** [invariant subspace](#Definition%203.4%20Invariant%20Subspace%20(of%20$V$%20with%20respect%20to%20$U(G)$) we must have: $$U(\phi)\ket{\alpha}=\ket{\alpha}\cdot(\text{some scalar})$$(because the minimal invariant subspace is of dimension one). Expressing the operator $U(\phi)$ in terms of the image $\bar{J}$ and expanding it with respect to $\bar{J}$ near $E$, we find that for the equation above to hold, any vector $\ket{\alpha}$ in the minimal invariant subspace must be an eigenvector of $\bar{J}$; we therefore label these vectors by their eigenvalues with respect to $\bar{J}$: $$\bar{J}\ket{\alpha}=\ket{\alpha}\alpha$$(notice this also suggests that the image of the generator is simply the operator "multiply by $\alpha$"), and so the scalar coefficient is determined: $$U(\phi)\ket{\alpha}=\ket{\alpha}e^{-i\phi\alpha}$$(the result on the RHS is obtained by first replacing $U(\phi)$ by its exponential form, then expanding the exponential, then replacing each term using the eigenvalue relation, and finally resumming the series into an exponential of the eigenvalue).
We may also notice that since $\bar{J}$ is Hermitian, its eigenvalues $\alpha$ must be **real**.

**(IV)** The form $U(\phi)\ket{\alpha}=\ket{\alpha}e^{-i\phi\alpha}$ automatically satisfies the multiplication rule of the SO(2) group; however, in order to satisfy the **global constraint** $R(\phi)=R(\phi\pm2\pi)$, a restriction must be put on the values of $\alpha$: $e^{-i\phi\alpha}=e^{-i(\phi\pm2\pi)\alpha}$ must hold for all $\phi$, i.e. $e^{\mp2\pi i\alpha}=1$, so **$\alpha$ must be an integer**. ***We denote this integer by $m$, and the corresponding representation by $U^m(\text{SO(2)})$.***

**In conclusion**, an irreducible representation of SO(2) labeled by $m$, in the one-dimensional vector space spanned by $\ket{m}$, has image of the generator: $$\bar{J}\ket{m}=\ket{m}m$$and representation operators: $$U^m(\phi)\ket{m}=\ket{m}e^{-im\phi}$$
**(A)** When $m=0$, $U^0$ is the **identity representation**.
**(B)** When $m=1$, $R(\phi)\rightarrow U^1(\phi)=e^{-i\phi}$; this representation is an isomorphism between SO(2) elements and points on the unit circle of the complex plane.
**(C)** When $m=-1$, the situation is the same as above except for …
**(D)** When $m=\pm2$, $R(\phi)\rightarrow U^{\pm2}(\phi)=e^{\mp2i\phi}$; these map the SO(2) parameter $\phi$ (or the group elements, since they are isomorphic) onto the unit circle in the complex plane, each covering the circle twice.
##### Theorem 6.4 : Irreducible Representation of SO(2)
The single-valued irreducible representations of SO(2) are given by $\bar{J}=m$, where $m$ is any integer, and: $$U^m(\phi)=e^{-im\phi}$$Of these, only the $m=\pm1$ ones are **faithful representations** (meaning the homomorphism is an isomorphism in these two cases).

We may notice that **the defining equation for $R(\phi)$ is itself a two-dimensional representation**, and it must be **reducible**. Indeed, it is equivalent to **a direct sum of the $m=\pm1$ representations**.
To show this we simply look for a similarity transformation that block-diagonalizes all representation matrices $R(\phi)$; but since **the generator-element relation ensures that every representation matrix is an exponential of the generator $J$**, we realize that **a similarity transformation diagonalizing $J$ will block-diagonalize all representation matrices**, where the generator: $$J=\begin{pmatrix}
0&-i\\
i&0
\end{pmatrix}$$To diagonalize this matrix we look for its eigenvalues and eigenvectors… (you have learned this in linear algebra).
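For a concrete check (my own sketch, assuming `numpy`), diagonalizing $J$ indeed reduces every $R(\phi)$ to the direct sum of the $m=\pm1$ irreducible representations:

```python
import numpy as np

# Diagonalizing the generator J block-diagonalizes every R(phi) = exp(-i phi J),
# reducing the defining 2-dim representation to the m = -1 and m = +1 irreps.
J = np.array([[0, -1j],
              [1j, 0]])
vals, S = np.linalg.eigh(J)              # eigenvalues [-1, +1]
phi = 0.9
R = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])
D = S.conj().T @ R @ S                   # = diag(e^{+i phi}, e^{-i phi})
assert np.allclose(D, np.diag(np.exp(-1j * phi * vals)))
```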



## 6.4 Invariant Integration Measure, Orthonormality and Completeness Relations
We would like to formulate the **orthonormality and completeness** relations for the irreducible representations $U^m(\phi)=e^{-im\phi}$ in analogy to [Theorem 3.7 Orthonormality and completeness of group characters](#Theorem%203.7%20Orthonormality%20and%20completeness%20of%20group%20characters). It seems the only necessary change is to substitute the element label by $\phi$ and the representation label $\mu$ by $m$; however, because SO(2) is infinite and continuous, the summation over group elements must be replaced by an integration, and **the integration measure must be well defined**.
By "well defined" we mean that the integration of a function of group elements over all group elements should be independent of the parametrization. Suppose $R(\phi)$ is the natural parametrization and $\xi(\phi)$ is a monotonic function of $\phi$, so that $R(\xi)$ is another parametrization. Then straightforward integrations of an arbitrary function $f(R)$ over the group in the two parametrizations are: $$\int d{\phi} f[R(\phi)] \text{ and } \int d\xi f[R(\xi)]$$It is easy to show: $$RHS=\int d\phi \frac{d\xi(\phi)}{d\phi}f[R(\phi)]\neq LHS$$Hence **naive integration of $f$ over the group manifold is not well defined**. Our task is to find a natural yet unambiguous definition of the integration of $f$ over the group.
### Invariant integration measure (DNF)
The key to answering this question is that **such an integration must** not only **run over all elements of $\text{R}_2$** but also **preserve the rearrangement lemma**.
##### Definition 6.1 : Invariant Integration Measure
A parametrization $R(\xi)$ of the group space, together with an associated weight function $\rho_{\mathbf{R}}(\xi)$ ($\xi$ here means the weight function is with respect to $\xi$, rather than saying it is a function of $\xi$; indeed it is, but it is better regarded as a function of the group element $\mathbf{R}$), is said to provide an ***invariant integration measure*** if they satisfy: $$\int d\tau_{\mathbf{R}}f[\mathbf{R}]=\int d\tau_{\mathbf{R}}f[\mathbf{S}^{-1}\mathbf{R}]=\int d\tau_{\mathbf{SR}}f[\mathbf{R}]$$(notice that when we use the boldface letter $\mathbf{R}$ we are referring to **an element of the group**) where: $$d\tau_{\mathbf{R}}=\rho_{\mathbf{R}}(\xi)d\xi_{\mathbf{R}}$$The notation $\rho_{\mathbf{R}}(\xi)$ means that $\rho$ is a function of $\mathbf{R}$, but since $\mathbf{R}$ is labeled by $\xi$, $\rho$ is also a function of $\xi$.

We can prove that these conditions are automatically satisfied when we define the weight function as: $$\rho_{\mathbf{R}}:=\frac{d\xi_{\mathbf{E}}}{d\xi_{\mathbf{ER}}}\Big|_{\mathbf{R}}=\rho_{\mathbf{R}}(\xi)$$where the relation between the $\xi$'s is determined by the group multiplication rule.

**For example**, if we take $\xi=\phi$, then $\xi_{\mathbf{ER}}=\xi_{\mathbf{E}}+\xi_{\mathbf{R}}$ and so $\rho=1$.
##### Theorem 6.5 : the Invariant Integration of SO(2)
The rotation angle $\phi$ and the volume measure $d\tau_{\mathbf{R}}=d\phi$ provide the invariant integration measure over SO(2).

If $\xi$ is a general parametrization of the group elements, the corresponding weight function $\rho_{\mathbf{R}}(\xi)$ with respect to $\xi$ must satisfy: $$\rho_{\mathbf{R}}(\xi)d\xi=\rho_{\mathbf{R}}(\phi)d\phi$$so the weight function is constructed as: $$\rho_{\mathbf{R}}(\xi)=\frac{d\phi}{d\xi}\Big|_{\mathbf{R}}$$
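As an illustration (the relabeling here is my own choice, not an example from the text), take $\xi(\phi)=\phi^2$ on $[0,2\pi)$, which is monotonic; then $$\rho_{\mathbf{R}}(\xi)=\frac{d\phi}{d\xi}\Big|_{\mathbf{R}}=\frac{1}{2\phi}=\frac{1}{2\sqrt{\xi}}$$so that $d\tau_{\mathbf{R}}=\rho_{\mathbf{R}}(\xi)\,d\xi=d\phi$, recovering the invariant measure of [Theorem 6.5 the Invariant Integration of SO(2)](#Theorem%206.5%20:%20the%20Invariant%20Integration%20of%20SO(2)).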
### Orthonormality and Completeness Relations
##### Theorem 6.6 : the representation [Theorem 6.4 Irreducible Representation of SO(2)](#Theorem%206.4%20Irreducible%20Representation%20of%20SO(2)) has the following orthonormality and completeness relations
**Orthonormality:** $$\frac{1}{2\pi}\int_{0}^{2\pi}U^{\dagger}_n(\phi)U^m(\phi)d\phi=\delta^m_n$$ **Completeness**: $$\sum_n U^n(\phi)U^{\dagger}_n(\phi’)=\delta(\phi-\phi’)$$
Looking at these relations, it is natural to regard them as generalizations of [Theorem 3.5 Orthonormality of irreducible representations](#Theorem%203.5%20Orthonormality%20of%20irreducible%20representations) and [Theorem 3.6 Completeness of irreducible representations](#Theorem%203.6%20Completeness%20of%20irreducible%20representations).
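A direct numerical check of the orthonormality relation (my own sketch, assuming `numpy`; the uniform Riemann sum is exact for these periodic integrands):

```python
import numpy as np

# Check (1/2pi) * integral_0^{2pi} conj(U^n) U^m dphi = delta^m_n
# for U^m(phi) = exp(-i m phi).
phi = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)

def U(m):
    return np.exp(-1j * m * phi)

for n in range(-3, 4):
    for m in range(-3, 4):
        val = np.mean(np.conj(U(n)) * U(m))   # = (1/2pi) * integral
        assert np.isclose(val, 1.0 if m == n else 0.0)
```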
### Comparison of [Theorem 3.5 Orthonormality of irreducible representations](#Theorem%203.5%20Orthonormality%20of%20irreducible%20representations) and [3.6 Orthonormality and Completeness of Irreducible Characters](#3.6%20Orthonormality%20and%20Completeness%20of%20Irreducible%20Characters) and [Theorem 3.7 Orthonormality and completeness of group characters](#Theorem%203.7%20Orthonormality%20and%20completeness%20of%20group%20characters) and Theorem 6.6

## 6.5 Multivalued Representation (DNF)
A feature of **continuous groups** is that **they may possess multi-valued representations**. Let us take a look at the following example.

Consider the irreducible representation $U_{1/2}$. The mapping of this representation is: $R(\phi)\rightarrow U_{1/2}(\phi)=e^{-i\phi/2}$.
On physical grounds we expect this to equal $U_{1/2}(2\pi+\phi)$; however: $U_{1/2}(2\pi+\phi)=-U_{1/2}(\phi)$.
This means any element of the SO(2) group is mapped to **two complex numbers**, $\mp e^{-i\phi/2}$, **differing by a factor of $-1$**. This is called a **two-valued representation**, in the sense that the group multiplication rule is preserved whichever of the two values is accepted.


# Rotation in Three-Dimensional Space and SO(3) Group
In this chapter we study the most useful **non-abelian** continuous group, the group SO(3).


## 7.1 Description of the Group SO(3)
##### Definition 7.1 : the SO(3) group
The SO(3) group consists of **all** *continuous linear transformations* in 3-D Euclidean space **that** preserve the norm of vectors.

Suppose an orthonormal basis $\{\mathbf{e_i}\}$ is chosen on the 3-D vector space; under a rotation (described by ${R^j}_i$) the effect is: $$\mathbf{Re_i}=\mathbf{e_j}{R^j}_i$$ and we can show that the components $x^i$ (which form a column vector themselves) of any vector $\mathbf{x}$ transform under the rule: $$x'^i={R^i}_jx^j$$ We can then obtain some **restrictions** on the realization matrices from the requirement of [Definition 7.1 the SO(3) group](#Definition%207.1%20the%20SO(3)%20group):
**First**, the elements must preserve the norm of vectors; this yields:$$RR^T=R^TR=E$$for all rotation matrices, which also implies $\text{det}R=\pm1$ for all real rotation matrices;
**Secondly**, since all **physical rotations** can be reached continuously from the identity transformation, whose determinant is $1$, it follows that **all rotation matrices must have determinant $+1$**, i.e. $$\text{det}R=1$$
(Matrices that preserve the norm yet have determinant $-1$ correspond to **rotations combined with discrete spatial reflection** transformations; these will be introduced in later chapters.)
### Restrictions on rotation matrices expressed in *invariant tensors*
The restriction that **rotation matrices preserve the norm** can be expressed by: $${R^i}_k{{R^T}_k}^j=\delta^{ij}$$but the LHS is equal to: $$LHS={R^i}_k{R^j}_k={R^i}_k{R^j}_l\delta^{kl}$$So the first restriction can be expressed as **invariance of the $\delta$ tensor under rotations**: $${R^i}_k{R^j}_l\delta^{kl}=\delta^{ij}$$(this simply means that rotation matrices are **orthogonal matrices**).

Similarly, the second restriction that **rotation matrices must have determinant $1$** can be expressed as **the invariance of the Levi-Civita symbol**: $${R^i}_l{R^j}_m{R^k}_n\epsilon^{lmn}=\epsilon^{ijk}$$(this simply means that rotation matrices are **special** matrices).
### Proof that the definition above satisfies the requirements of a group
Simply show that the product of two SO(3) matrices is still an SO(3) matrix. (To check whether the product is **SO(3)** we simply check whether it is **S**pecial and **O**rthogonal by testing whether the two invariance equations still hold for the product matrix.) Similarly, check that the inverse of an SO(3) matrix is still SO(3), not to mention that the identity element of the group is of course SO(3).
Hence we have **the rotation matrices form a group: the SO(3) group**.

A general SO(3) group element depends on three continuous group parameters. There are infinitely many possible ways to choose these parameters, of which the two most commonly used will be described in this chapter; a third way, related to the SU(2) group, will be discussed in [The Group SU(2) and More about SO(3)](#The%20Group%20SU(2)%20and%20More%20about%20SO(3)).

### 7.1.1 the Angle-and-axis Parameterization
Any rotation can be designated by $R_{\vec{n}}(\psi)$, where the unit vector $\vec{n}$ specifies the **direction of the axis** and $\psi$ denotes the **angle of rotation** about that axis; *the direction of the unit vector can be determined by two angles*, say the polar and azimuthal angles $(\theta,\phi)$. Thus we say that $\mathbf{R}$ is characterized by the three parameters $(\psi,\theta,\phi)$, where $\psi\in[0,\pi], \theta\in[0,\pi],\phi\in[0,2\pi]$. (Notice we take the range of $\psi$ as such because $R_{\vec{n}}(\psi)=R_{-\vec{n}}(2\pi-\psi)$, so that there is no need to extend its range to $[0,2\pi]$. The only redundancy in this parametrization is $R_{\vec{n}}(\pi)=R_{-\vec{n}}(\pi)$.)

**The structure of the group parameter space** can be visualized by associating each rotation $R_{\vec{n}}(\psi)$ with a three-vector $\mathbf{c}=\psi\vec{n}$ (pointing in the direction $\vec{n}$ with magnitude $\psi$.) The tips of these vectors fill a sphere of radius $\pi$. Because of the redundancy relation expressed in the equation above, **any two opposite points on the surface of the sphere are equivalent to each other.**

A sphere with the added **feature** above is said to be ***compact*** (i.e. closed and bounded) and ***doubly connected***. ***Doubly connected*** indicates that this **group manifold** allows **two distinct classes of closed curves**: (1) those that can be continuously deformed into a point, and (2) those that wrap around the sphere once.
It is not hard to see that all curves which wind around the sphere an **even** number of times can be continuously deformed into class (1), while all curves which wind around the sphere an **odd** number of times can be deformed into class (2).


A very useful identity of the angle-and-axis parametrization is: $$R_{\vec{n}'}(\psi)=RR_{\vec{n}}(\psi)R^{-1} \space, \space \text{if} \space \vec{n}'=R\vec{n}$$An immediate consequence of this fact is the following theorem:
##### Theorem 7.1 : Classes of Rotations
All rotations by the same angle $\psi=\psi_0$ belong to a single conjugacy class of the group SO(3).
### 7.1.2 the Euler-Angles Parameterization
A rotation can also be specified by the **relative configuration** of two Cartesian coordinate frames (labeled (1,2,3) and (1',2',3') respectively), such that the effect of the rotation is to bring the axes of the fixed frame (1,2,3) to those of the rotated frame (1',2',3').

Consider the following operations performed in order on the original frame (1,2,3):
**First**, a rotation about 3, i.e. a rotation in the 1o2 plane, by angle $\alpha$. This operation transforms $1$ into $\bar{1}$ and $2$ into $\bar{2}$ while keeping $3$ fixed.
**Second**, a rotation about $\bar{1}$, i.e. a rotation in the $\bar{2}o3$ plane, by angle $\beta$. This operation transforms $\bar{2}$ into $\bar{\bar{2}}$ and $3$ into ${\bar{3}}$.
**Finally**, a rotation about $\bar{3}$, i.e. a rotation in the $\bar{1}o\bar{\bar{2}}$ plane, by angle $\gamma$. This operation transforms $\bar{1}$ into $\bar{\bar{1}}$ and $\bar{\bar{2}}$ into $\bar{\bar{\bar{2}}}$.
It is obvious that $1'=\bar{\bar{1}},2'=\bar{\bar{\bar{2}}},3'=\bar{3}$.

But we want to describe the whole rotation using information provided by the relative configuration between the final frame and the original frame. It is obvious that the axes of the operations in the process above are: (final) $=3'$; (second) $=\bar{1}=$ "intersection of 1o2 and 1'o2'" (denoted $\vec{N}$); (first) $=3$; while the angles of the steps are (final) $=\gamma$; (second) $=\beta$; (first) $=\alpha$. So the rotation shall be described as: **first rotate about $3$ by angle $\alpha$, then rotate about $\vec{N}$ by angle $\beta$, and last rotate about $3'$ by angle $\gamma$.** Denote this rotation by $R(\alpha,\beta,\gamma)$; it is easy to establish the relation between this parametrization and the angle-and-axis parametrization: $$R(\alpha,\beta,\gamma)=R_{3'}(\gamma)R_{\vec{N}}(\beta)R_{3}(\alpha)$$where the ranges of the variables are $\alpha,\gamma\in[0,2\pi]$ and $\beta\in[0,\pi]$.

##### Re-express $R(\alpha,\beta,\gamma)$ in terms of rotation about the fixed axes (1,2,3)
By using the identity:$$R_{\vec{n}'}(\psi)=RR_{\vec{n}}(\psi)R^{-1} \space, \space \text{if} \space \vec{n}'=R\vec{n}$$and since $\vec{3'}=R_{\vec{N}}(\beta)\vec{3}$ and $\vec{N}=R_{3}(\alpha)\vec{2}$ (so that $R_{3'}(\gamma)=R_{\vec{N}}(\beta)R_{3}(\gamma)R_{\vec{N}}(\beta)^{-1}$ and $R_{\vec{N}}(\beta)=R_{3}(\alpha)R_{2}(\beta)R_{3}(\alpha)^{-1}$, while rotations about the same axis commute), we can re-express the relation between the Euler-angle parametrization and the angle-and-axis parametrization as: $$R(\alpha,\beta,\gamma)=R_3(\alpha)R_2(\beta)R_3(\gamma)$$Thus, **in Euler angles, every rotation can be decomposed into a product of simple rotations about the fixed axes $\mathbf{e_2},\mathbf{e_3}$,** which is a great advantage of using Euler angles.

When we use the Euler-angle parametrization, it is therefore necessary to know the representations of $R_2(\psi)$ and $R_3(\psi)$ (once the frame has been chosen).
##### Representation matrices of rotation wrt frame basis
The rotation about axis-1 is represented by: $$R_1(\psi)=\begin{pmatrix}
1&0&0\\
0&\cos{\psi}&-\sin{\psi}\\
0&\sin{\psi}&\cos{\psi}
\end{pmatrix}$$and rotation about axis-2: $$R_2(\psi)=\begin{pmatrix}
\cos{\psi}&0&\sin{\psi}\\
0&1&0\\
-\sin{\psi}&0&\cos{\psi}
\end{pmatrix}$$and rotation about axis-3:$$R_3(\psi)=\begin{pmatrix}
\cos{\psi}&-\sin{\psi}&0\\
\sin{\psi}&\cos{\psi}&0\\
0&0&1
\end{pmatrix}$$(notice only $R_2$ and $R_3$ are of importance in Euler-angle parameterization)
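The decomposition and the conjugation identity can be checked numerically with these matrices (my own sketch, assuming `numpy`):

```python
import numpy as np

# Check R(alpha, beta, gamma) = R_3(alpha) R_2(beta) R_3(gamma) against the
# frame-based form R_{3'}(gamma) R_N(beta) R_3(alpha), using the identity
# R_{n'}(psi) = R R_n(psi) R^{-1} with n' = R n.
def R2(p):
    return np.array([[ np.cos(p), 0.0, np.sin(p)],
                     [ 0.0,       1.0, 0.0      ],
                     [-np.sin(p), 0.0, np.cos(p)]])

def R3(p):
    return np.array([[np.cos(p), -np.sin(p), 0.0],
                     [np.sin(p),  np.cos(p), 0.0],
                     [0.0,        0.0,       1.0]])

a, b, g = 0.4, 1.1, 2.3
RN  = R3(a) @ R2(b) @ R3(a).T        # rotation about N = R_3(a) e_2
R3p = RN @ R3(g) @ RN.T              # rotation about 3' = R_N(b) e_3
assert np.allclose(R3p @ RN @ R3(a), R3(a) @ R2(b) @ R3(g))
```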

Furthermore, we can show the relations between the Euler angles and the three parameters $(\psi,\theta,\phi)$ of the angle-and-axis parametrization: (DNF)

## 7.2 One Parameter Subgroups, Generators, and the Lie Algebra
Given any fixed axis in the direction $\vec{n}$, **rotations about $\vec{n}$ form a subgroup of SO(3)**. Each such subgroup is **isomorphic** to the group of rotations in a plane, i.e. the SO(2) group.
Associated with each of these subgroups there is a [generator](#Theorem%206.3%20Generator%20of%20SO(2)) denoted $J_{\vec{n}}$. All elements of this subgroup can be written as: $$R_{\vec{n}}(\psi)=e^{-i\psi J_{\vec{n}}}$$forming a **one-parameter subgroup** of SO(3).

Now recall the very useful identity: $$R_{\vec{n}'}(\psi)=RR_{\vec{n}}(\psi)R^{-1} \space, \space \text{if} \space \vec{n}'=R\vec{n}$$Rewriting the rotation matrices in terms of the generators $J_{\vec{n}}$ and $J_{\vec{n}'}$ and using the identity: $$Re^{-i\psi Q}R^{-1}=e^{-i\psi RQR^{-1}}$$we obtain: $$RJ_{\vec{n}}R^{-1}=J_{\vec{n}'}\space,\space \text{where } \vec{n}'=R\vec{n}$$
##### Lemma: relations between generators of two subgroups of SO(3)
Given a unit vector $\vec{n}$, the generator $J_{\vec{n}}$ associated with rotations about $\vec{n}$, and an arbitrary rotation $R$: $$RJ_{\vec{n}}R^{-1}=J_{\vec{n}'}\space,\space \text{where } \vec{n}'=R\vec{n}$$
By Taylor-expanding the element-generator relation with respect to $J_{\vec{n}}$ at $E$ we get the infinitesimal relation:$$R_{\vec{n}}(d\psi)=E-i\,d\psi\, J_{\vec{n}}$$Comparing this with the matrices for rotations about the frame axes by an infinitesimal angle, $R_{i}(d\psi)$, we can read off $J_{1,2,3}$: $$J_1=\begin{pmatrix}
0&0&0\\
0&0&-i\\
0&i&0
\end{pmatrix}$$and$$J_2=\begin{pmatrix}
0&0&i\\
0&0&0\\
-i&0&0
\end{pmatrix}$$and$$J_3=\begin{pmatrix}
0&-i&0\\
i&0&0\\
0&0&0
\end{pmatrix}$$**these results can be summarized as:** $${(J_k)^l}_m=-i\epsilon_{klm}$$
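These matrices and the summarized formula can be cross-checked numerically (my own sketch, assuming `numpy`); the same snippet also verifies the Lie algebra of Theorem 7.3 below.

```python
import numpy as np

# Build the generators from (J_k)^l_m = -i eps_{klm} and verify the
# commutation relations [J_k, J_l] = i eps_{klm} J_m of Theorem 7.3.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0   # cyclic = +1, anti-cyclic = -1

J = [-1j * eps[k] for k in range(3)]         # (J_k)^l_m = -i eps_{klm}

for k in range(3):
    for l in range(3):
        comm = J[k] @ J[l] - J[l] @ J[k]
        expected = 1j * sum(eps[k, l, m] * J[m] for m in range(3))
        assert np.allclose(comm, expected)
```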
##### Theorem 7.2 : Vector Generator $\mathbf{J}$
Recall that each generator $J_i$ is itself an **operator** too, and an operator transforms under rotation by the rule $RJ_iR^{-1}$. We find that: $$RJ_kR^{-1}=J_l{R^l}_k$$i.e. the three generators transform like **components of a (row) vector**;
further, the generator of rotations about an arbitrary axis $\vec{n}=\vec{e_k}n^k$ can be written: $$J_{\vec{n}}=J_kn^k$$
**Consequences:** $\{J_1,J_2,J_3\}$ form a basis for the generators of all one-parameter subgroups of SO(3). Any element $R_{\vec{n}}(\psi)$ of the subgroup associated with $\vec{n}$ can be expressed as: $$R_{\vec{n}}(\psi)=e^{-i\psi J_{k}n^k}$$**Similarly**, any rotation in the Euler-angle parametrization can be written in terms of the generators as: $$R(\alpha,\beta,\gamma)=e^{-i\alpha J_3}e^{-i\beta J_2}e^{-i\gamma J_3}$$Therefore, for all practical purposes, it suffices to work with the three basis generators rather than the 3-fold infinity of group elements.

##### Theorem 7.3 : Lie Algebra of SO(3)
The three **basis generators** $\{J_i\}$ satisfy the following ***Lie algebra***:$$[J_k,J_l]=i\epsilon_{klm}J_m$$
**Proof:**…**QED**
(Notice: when we use the term ***Lie algebra*** we simply refer to the commutation relations of the basis generators; the reason is that **on the space spanned by the basis generators, if we define the multiplication of two elements to be their commutator, the space combined with this multiplication structure forms an algebra**.)
Also notice (from the proof of the theorem) that **the Lie algebra is equivalent to an important identity involving the multiplication rule, namely the relation**: $$R_{\vec{n}'}(\psi)=RR_{\vec{n}}(\psi)R^{-1} \space \text{where}\space \vec{n}'=R\vec{n}$$**in the vicinity of the identity transformation**. Thus ***the Lie algebra together with the element-generator relation*** ($R_{\vec{n}}(\psi)=e^{-i\psi J_{k}n^k}$ or $R(\alpha,\beta,\gamma)=e^{-i\alpha J_3}e^{-i\beta J_2}e^{-i\gamma J_3}$) ***determines the most important properties of the group structure and its representations, except for*** some "global" properties.

## 7.3 Irreducible Representations of the SO(3) Lie Algebra
Since the basis elements of the **Lie algebra** are generators of infinitesimal rotations, every representation of the group is automatically a representation of the corresponding Lie algebra.
(I.e. because every element $J_{\vec{n}}$ of the space spanned by $\{J_1,J_2,J_3\}$ is associated with an element $R_{\vec{n}}(d\psi)$ of the SO(3) group, a representation of the SO(3) group induces a representation of $J_{\vec{n}}$.)
In this section we only need to **construct the irreducible representation of the Lie algebra.**

Because the group parameter space is **compact**, we expect the irreducible representations to be of **finite dimension**;
since any finite-dimensional representation is equivalent to some **unitary** representation, we take the representation to be unitary, so the generators $J_{\vec{n}}$ are represented by **Hermitian** operators;
The **invariant integration measure** on group parameter space is left for [The Group SU(2) and More about SO(3)](#The%20Group%20SU(2)%20and%20More%20about%20SO(3)).

We know the space of an irreducible representation is a **minimal invariant subspace** (under the transformations that are images of the represented operators) of the space of a general representation; the general strategy to construct an irreducible representation is:
(1) pick one convenient "standard" vector as the starting point and consider the minimal invariant subspace that contains it;
(2) generate the rest of the vectors in the minimal invariant subspace by repeatedly applying the transformations representing the operators, then find a basis of this space.

In this particular case: denote the representation space by $V$ (for simplicity we denote the representation operators of the generators still by $J$); its **basis vectors are chosen naturally to be the eigenvectors of a set of mutually commuting generators.** But $\{J_1,J_2,J_3\}$ do not mutually commute. However, every basis generator commutes with $J^2=\sum_i(J_i)^2$, which is said to be a [Definition 7.2 Casimir Operator](#Definition%207.2%20Casimir%20Operator) of the group.
##### Definition 7.2 Casimir Operator
An operator which commutes with all elements of a Lie group is said to be a Casimir operator of this Lie group.

Now that $J^2$ commutes with all elements of the Lie group, its image in the representation must be a multiple of the identity operator on $V$ by [Schur's lemma 2](#Schur’s%20lemma%202). This means **all vectors in $V$ are eigenvectors of $J^2$ with the same eigenvalue.**

By convention, the basis vectors of $V$ are chosen as eigenvectors of $J^2,J_3$. For convenience, we denote some important operators:
$J_{\pm}:=J_1\pm iJ_2$
Then the commutation relations between $J_i,J^2,J_{\pm}$ are:
$[J_3,J_{\pm}]=\pm J_{\pm}$
$[J_+,J_-]=2J_3$
$J^2=J_3^2-J_3+J_+J_-=J_3^2+J_3+J_-J_+$
$J_{\pm}^{\dagger}=J_{\mp}$

Denote the eigenvector of $J_3$ with eigenvalue $m$ by $\ket{m}$.
It can be shown that $J_+\ket{m}$ is also an eigenvector of $J_3$ with eigenvalue $m+1$, or else it is a null vector.
Similarly, $J_-\ket{m}$ is either null or an eigenvector of $J_3$ with eigenvalue $m-1$.

Let us assume $J_+\ket{m}\neq0$, so it can be normalized to $\ket{m+1}$; repeating this "raise-and-normalize" process we can generate a sequence of normalized eigenvectors of $J_3$: $\ket{m},\ket{m+1},…$ We require the sequence to **terminate** so that the space is **finite-dimensional**.
Suppose the last **non-null** vector of this sequence is $\ket{j}$, so that: $$J_3\ket{j}=\ket{j}j$$and $$J_+\ket{j}=0$$Hence (using $J^2=J_3^2+J_3+J_-J_+$) it is also an eigenvector of $J^2$: $$J^2\ket{j}=\ket{j}j(j+1)$$
Conversely, as we apply $J_-$ to $\ket{j}$ repeatedly, the resulting vectors are eigenvectors of $J_3$ until the sequence terminates, and this sequence can also be normalized. Suppose the last **non-null** vector of this sequence is $\ket{l}$, such that: $$J_3\ket{l}=\ket{l}l$$and $$J_-\ket{l}=0$$This means: $$0=\bra{l}J_-^{\dagger}J_-\ket{l}=\bra{l}(J^2-J_3^2+J_3)\ket{l}$$but $\ket{l}$, lying in the same irreducible space, is an eigenvector of $J^2$ with eigenvalue $j(j+1)$, so the equation above simply means: $j(j+1)-l(l-1)=0$, i.e. $l=-j$.
Since the vector $\ket{l}$ is obtained from $\ket{j}$ by applying $J_-$ **an integer number of times**, $j-l$ must be an integer; but $l=-j$, so: $$2j=n=0,1,2,3…$$

Thus, the basis vectors of $V$ are chosen to be $\ket{m}$ where $m=-j,-j+1,…,j$, and the vector space $V$ is of dimension $2j+1$. (**How to prove this is a minimal invariant subspace?**)

##### Theorem 7.4 : Irreducible Representation of SO(3) Lie Algebra
The irreducible representations of the SO(3) Lie algebra are each characterized by an **angular momentum eigenvalue $j$** taken from the non-negative integers and half-odd-integers. The **orthonormal basis vectors** ($j$ labels the irreducible representation the basis belongs to, $m$ labels the vectors in the basis of irr-$j$) **for fixed $j$** can be specified by the following equations:
$J^2\ket{j\space m}=\ket{j\space m}j(j+1)$
$J_3\ket{j\space m}=\ket{j\space m}m$
$J_{\pm}\ket{j\space m}=\ket{j\space (m\pm1)}[j(j+1)-m(m\pm1)]^{1/2}$
We refer to a basis defined according to this convention as a **canonical basis.**

(**Notice that the third relation implies that the matrices of $J_{\pm}$ are real, and thus $J_2=(J_+-J_-)/2i$ must be imaginary**)
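These defining relations are easy to realize as explicit matrices; the following sketch (mine, assuming `numpy`) builds $J_3,J_{\pm}$ in the canonical basis for a given $j$ and checks the algebra:

```python
import numpy as np

# Canonical-basis matrices of J_3 and J_± for a given j (basis ordered
# m = j, j-1, ..., -j), built from the relations of Theorem 7.4.
def canonical_J(j):
    m = np.arange(j, -j - 1, -1)            # m = j, ..., -j
    J3 = np.diag(m)
    # <j m+1| J_+ |j m> = sqrt(j(j+1) - m(m+1)): a superdiagonal matrix here
    c = np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1))
    Jp = np.diag(c, k=1)
    return J3, Jp, Jp.conj().T              # J_- = (J_+)^dagger

j = 1.5
J3, Jp, Jm = canonical_J(j)
J2 = J3 @ J3 + 0.5 * (Jp @ Jm + Jm @ Jp)    # J^2 = J_3^2 + (J_+J_- + J_-J_+)/2
assert np.allclose(J2, j * (j + 1) * np.eye(int(2 * j + 1)))
assert np.allclose(Jp @ Jm - Jm @ Jp, 2 * J3)   # [J_+, J_-] = 2 J_3
```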


Now that we know how the (images of the) generators transform the basis vectors of $V$, and we know the element-generator relation, in principle we can determine how the (images of the) elements transform the basis vectors, and thus write down the representation matrices.
**The representation matrix of $U(\alpha,\beta,\gamma)$** is **specified by its action on the basis vectors $\ket{j\space m}$**:  $$U(\alpha,\beta,\gamma)\ket{j\space m}=\ket{j\space m'}{D^j(\alpha,\beta,\gamma)^{m'}}_m$$We find that: $${D^j(\alpha,\beta,\gamma)^{m'}}_m=e^{-i\alpha m'}{d^j(\beta)^{m'}}_me^{-i\gamma m}$$where: $${d^j(\beta)^{m'}}_m:=\bra{j\space m'}e^{-i\beta J_2}\ket{j\space m}$$
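For the simplest case $j=1/2$, $d^{1/2}(\beta)$ can be computed explicitly (a sketch of mine, assuming `numpy` and `scipy`; the expected closed form below is the standard spin-1/2 result):

```python
import numpy as np
from scipy.linalg import expm

# d^j(beta) = exp(-i beta J_2) with J_2 = (J_+ - J_-)/(2i), here for j = 1/2
# in the basis |1/2 1/2>, |1/2 -1/2>, where J_+ = [[0, 1], [0, 0]].
Jp = np.array([[0.0, 1.0],
               [0.0, 0.0]])
Jm = Jp.T
J2op = (Jp - Jm) / 2j                # note: "2j" is the complex literal 2i
beta = 0.8
d = expm(-1j * beta * J2op)
assert np.allclose(d.imag, 0.0)      # d^j(beta) is real
assert np.allclose(d.real, [[np.cos(beta / 2), -np.sin(beta / 2)],
                            [np.sin(beta / 2),  np.cos(beta / 2)]])
```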
### Examples (irreducible representations of SO(3) Lie algebra)
##### $j=1/2$ representation

##### $j=1$ representation

#### Irreducible Representations of SO(3) group
##### Theorem 7.5 : Irreducible Representation of SO(3) Group
When the irreducible representations of the SO(3) Lie algebra are applied to the SO(3) group, the resulting irreducible representations of the SO(3) group sort into **two distinct categories**:
(I) for $j$ a non-negative integer, the representations are **single-valued**;
(II) for $j$ a positive half-odd integer, the representations are all **double-valued**.






## 7.4 Properties of the Rotational Matrices (DNF) 
##### Unitary
##### Unit Determinant (”Special”)
##### Reality of $d(\beta)$
Recall that [Theorem 7.4 Irreducible Representation of SO(3) Lie Algebra](#Theorem%207.4%20Irreducible%20Representation%20of%20SO(3)%20Lie%20Algebra) implies that $J_{\pm}$ are real and $J_2$ is imaginary, which in turn implies that $d^j(\beta)$ is real by its defining equation. Therefore we have: $$d^{-1}(\beta)=d(-\beta)=d^T(\beta)$$
##### Complex Conjugate of $D$ (DNF)
##### Symmetry Relations (DNF)
##### Relations to Spherical Harmonics (DNF)
##### Characters
Recall that **all rotations by the same angle, about whatever axis, belong to the same conjugacy class**. Thus it suffices to evaluate the group characters on $R_3(\psi)$: $$\chi^j(\psi)=\sum_mD^j{[R_3(\psi)]^m}_m$$
We have two equivalent ways to calculate the RHS.
(A) We know that once a basis of the vector space is chosen, the $(l,m)$ element of the matrix representing an operator $A$ is equal to $\bra{l}A\ket{m}={A^l}_m$. Thus in this case: $${D^j[R_3(\psi)]^l}_m=\bra{j\space l}R_3(\psi)\ket{j\space m}=\bra{j\space l}e^{-i\psi J_3}\ket{j\space m}$$but $J_3\ket{j\space m}=m\ket{j\space m}$, so the RHS equals $e^{-i\psi m}\delta_{lm}$; eventually we get the character: $$\chi^j(\psi)=\sum_{m=-j}^je^{-i\psi m}=\frac{\sin((j+1/2)\psi)}{\sin{(\psi/2)}}$$(B) The second way is to use directly: $${D^j(\alpha,\beta,\gamma)^l}_m=e^{-i\alpha l}{d^j(\beta)^l}_me^{-i\gamma m}$$where $${d^j(\beta)^l}_m=\bra{j\space l}e^{-i\beta J_2}\ket{j\space m}$$which gives exactly the same result.
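A quick numerical confirmation of the closed-form character (my own sketch, assuming `numpy`):

```python
import numpy as np

# Check chi^j(psi) = sum_{m=-j}^{j} exp(-i psi m)
#                  = sin((j + 1/2) psi) / sin(psi / 2)
# for both integer and half-odd-integer j.
psi = 1.7
for j in [0.5, 1.0, 1.5, 2.0]:
    m = np.arange(-j, j + 1)
    chi_sum = np.sum(np.exp(-1j * psi * m))
    chi_closed = np.sin((j + 0.5) * psi) / np.sin(psi / 2)
    assert np.isclose(chi_sum.real, chi_closed)
    assert np.isclose(chi_sum.imag, 0.0)
```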

## 7.5 Application to a Particle in Central Potential (DNS)

## 7.6 Transformation Properties of Wave Functions and Operators (DNF)
## 7.7 Direct Product Representation (DNF)
## 7.8 Irreducible tensors and the Wigner-Eckart Theorem (DNS)



# The Group SU(2) and More about SO(3)








# Appendix I
## I.1 Summation Convention
## I.2 Vectors and Vector Indices
**(a)** Vectors in ordinary 2- or 3-dim **Euclidean** space will be denoted:$$\mathbf{x,y,z}$$
2-or-3 dim Euclidean vectors with unit length will be denoted:$$\mathbf{\hat{x},\hat{y},\hat{z}}$$
**(b)** Vectors in **non-Euclidean** space
**(c)** Vectors in **general linear** vector space are denoted in Dirac’s bra-ket notation:$$\ket{x},\ket{\phi},\bra{x},\bra{\phi}$$
**(d)** Multiplication by a **number** $\alpha$ is denoted by:$$\ket{\alpha x}=\alpha \cdot \ket{x}=\ket{x}\alpha$$(In general, the last form is the preferred one, i.e. **the multiplying number should always be put on the ket ($\rangle$) side**)
**(e)** Lower indices are used to **label** ket basis; then to satisfy summation convention, components of ket vectors are labeled by upper indices:$$\ket{x}=\ket{e_i}x^i$$
**(f)** In correspondence to (e): $$\bra{x}=x_i\bra{e^i}$$
## I.3 Matrix Indices
**(i)** Elements of a matrix will be denoted by **a row index followed by a column index**
The element in the $i$-th row and $j$-th column of matrix $A$ is then $A_{ij}$. Now, an operator $A$ applied to a ket vector is represented by the matrix $A$ multiplying a column vector from the left:
$$A\ket{x}=\begin{pmatrix}
A_{11} &A_{12}&…&A_{1n}\\
A_{21} &A_{22}&…&A_{2n}\\
…\\
A_{n1} &A_{n2}&…&A_{nn}
\end{pmatrix}
\begin{pmatrix}
x^1\\
x^2\\
…\\
x^n
\end{pmatrix}$$componentwise: $(A\ket{x})^i=\sum_j A_{ij}x^j$; to be consistent with the Einstein convention, ***the element in the $i$-th row, $j$-th column of the matrix $A$ that represents an operator applied to ket vectors is denoted:***
$${A^i}_j$$
**(ii)** Now consider applying the same (but Hermitian-conjugated) operator $A^{\dagger}$ to the same (but bra) vector $\bra{x}$:
$$\bra{x}A^{\dagger}=\begin{pmatrix}
x_1 &x_2 &…&x_n
\end{pmatrix}
\begin{pmatrix}
A^{\dagger}_{11} & A^{\dagger}_{12} &…&A^{\dagger}_{1n}\\
A^{\dagger}_{21} &…\\
…\\
A^{\dagger}_{n1} &…&…&A^{\dagger}_{nn}
\end{pmatrix}$$componentwise: $(\bra{x}A^{\dagger})_i=\sum_j x_jA^{\dagger}_{ji}$; **again, to be consistent with the Einstein convention, *the element in the $i$-th row and $j$-th column of the matrix $A^{\dagger}$ that represents an operator applied to bra vectors is denoted:***
$${A^{\dagger i}}_j$$(which has exactly the same form as in (i))
**(iii)** ***In addition, we deliberately denote the element in the $i$-th row and $j$-th column of the transposed matrix $A^T$ by:***
$${{A^T}_i}^j={A^j}_i$$

In conclusion, for all matrices, the first index labels rows while the second labels columns. For non-transposed matrices, the row indices are upper indices while the column indices are lower; for transposed matrices, the row indices are lower while the column indices are upper.
# Appendix II (DNF)
## II.1 Linear Vector Space
##### Definition II.1 : Linear Vector Space $V$
##### Definition II.2 : Linearly Independent Vectors
##### Definition II.3 : a Basis of $V$
##### Theorem II.1 : All bases of a finite-dim vector space $V$ have the same number of members
**Proof**: 
##### Definition II.4 : the Dimension of $V$

##### Definition II.5 : Isomorphic Vector Spaces
##### Theorem II.2 : Every $V_n$ of dim $n$ is isomorphic to $C_n$ (the space of $n$ ordered complex numbers); hence all $V_n$ are isomorphic to each other.
##### Theorem II.3 : Given $V_n$ and a subspace $V_m$, one can always choose a basis $\{\mathbf{e_i}\}$ such that the first $m$ basis vectors lie in $V_m$

##### Definition II.7 : Direct Sum
Let $V_1,V_2$ be subspaces of $V$; we say $V$ is the direct sum of $V_1,V_2$ and write $V=V_1\oplus V_2$ if: (i) $V_1\cap V_2=\{0\}$; (ii) $\forall \mathbf{x}\in V$ can be written as $\mathbf{x}=\mathbf{x_1}+\mathbf{x_2}$ where $\mathbf{x_1}\in V_1,\mathbf{x_2}\in V_2$.

## II.2 Linear Transformation on Vector Spaces (DNF)
##### Definition II.8 : Linear Transformation $A$ (operator $A$)
A mapping of elements of one vector space $V$ onto elements of another vector space $V’$ is said to be a linear transformation if:
(i) $\ket{x}\in V \xrightarrow{A}\ A\ket{x}\in V’$
(ii) **if** $\ket{y}=\ket{x_1}a_1+\ket{x_2}a_2$, **then** $A\ket{y}=A\ket{x_1} a_1 +A\ket{x_2} a_2$

(Notice the two vector spaces do not have to be of the same dimension)
A very special class of linear transformations is the **linear functional**:
##### Definition II.9 : Linear Functionals
The linear transformations from $V$ onto $V'=\mathbb{C}$ are called linear functionals.
##### Definition II.10 : Multiplications and Additions of Operators
##### The set of all linear transformations defined on a given vector space $V$, endowed with operator multiplication and addition, form *the algebra of linear transformations* on $V$.

##### Remarks on linear transformations (DNS)


## II.5 Inner Product and Inner Product Space (DNF)
##### Definition II.17 : Inner Product (on vector space $V$)

##### A vector space endowed with an inner product is called an inner product space
##### Definition II.18 : Length of a Vector; Cosine Angle
Categories
_NOTES_ Tools

Tutorial: Running Wolfram Mathematica $($RPi ARM64 version$)$ Locally on iPad Pro with Apple Silicon M1 Chip on UTM-based Debian12-on-ARM64 VM

Although the existence of Wolfram Cloud has allowed us to access the Wolfram engine from almost any Internet-connected device, there has always been a temptation to deploy and run Mathematica on mobile devices, due to performance, response time, Internet dependency, connection quality, and many other factors. This is particularly true now that the computing power of mobile devices has expanded significantly over the past few years.

The biggest obstacle in trying to do this is that most mobile devices are based on arm64-architecture processors, while almost all distributions of Mathematica on macOS, Windows, and Linux, and even the versions that rely on docker containers on Linux, require the platform architecture to be x86/x86-64/amd64. The sole commonly seen ARM64-based Mathematica distro is the one developed for the Apple-Silicon-based new Macs, but the difficulty of deploying a macOS-like environment on a mobile platform seems to be more than enough reason to abandon that route.


Fortunately, we do have a distro of Mathematica that runs on an ARM64/ARMhf platform: the one developed to be deployed on a Raspberry Pi, where the Wolfram engine normally runs on the Debian-based Raspberry Pi OS. So we tried to install this version of the Wolfram engine on a Ubuntu22.04 VM running on UTM, by installing its .deb file directly with dpkg instead of running a shell installation script, to avoid potential source-connection problems. However, this method turned out to be ineffective due to dependency problems. I'm not so familiar with the Linux system structure and failed to fix these problems.

However, to my surprise and delight, when I repeated the exact process above on a Debian12-on-ARM VM by chance, it turned out all the dependency issues were gone, and the only problem we encountered, which was easy to fix, was the lack of some packages in the freshly net-installed Debian system. We simply apt install these packages, and then the Wolfram engine and GUI Mathematica can be activated with a Linux-version activation key.

After some adjustment of the VM parameters, this system, built on an iPad Pro with the M1 chip, scored about 2.5 on the Mathematica benchmark.

A performance test can be viewed on my Bilibili page.

Tutorial

I. The first step is to install UTM, the environment where our VMs will be deployed

First, install the latest version of Sideloadly on your Mac;

Download the TrollInstallerX ipa file on your Mac and connect your iPad by cable. Sideload the ipa to your iPad with Sideloadly;

Trust the sideloaded TrollInstallerX on your iPad, then open and run it. If it fails, reboot and try again;

Now that TrollStore2 has been installed on your iPad, find the proper ipa file of UTM on its GitHub release page, and then you can install UTM on your device via TrollStore2.

II. Deploy Debian12 VM on UTM

Find the net install image of Debian12 on its official website, download it to your iPad;

Open UTM and create your VM by virtualization, which is much faster, if your iOS version is at most 16.3.1; or by emulation if virtualization is disabled on your system;

Follow the instructions to install Debian12 onto your iPad; because we are net-installing the system, keep your internet connection alive while installing. An optional step that can directly avoid the package issues you may encounter when installing wolfram-engine later is to install all the optional desktop environments during system installation;

Remove the CD image and reboot the VM; now you have a Debian12 VM on your iPad.

III. Install Wolfram engine on your Debian12 VM

First, download the .deb installer file from the RPi website;

Open a terminal in the directory where you downloaded the .deb file and run the following command, with the filename replaced appropriately;

sudo dpkg -i filename.deb

Moments later you will see the configuration interface of wolfram-engine; agree to the licenses and you will automatically proceed to the installation;

You should now have the Wolfram engine and Mathematica properly installed, but if package errors occur, you may run the following command to install the needed packages (running `sudo apt --fix-broken install` may also help resolve missing dependencies):

sudo apt install packagename
Categories
_NOTES_ General Relativity Gravitational Waves

A Quick Look at GWs

Lecture notes 14.11.20

This is my draft for a 20-minute introduction to gravitational waves for sophomore students, presented in Prof. Fan's Theoretical Mechanics course. A rearranged LaTeX version may be uploaded later.
Categories
_NOTES_ General Relativity by Wald Wald C2

[Notes] General Relativity [C2]

Acknowledgement

All notes under this category are based on Prof. Robert M. Wald's great work. If you feel your copyright is infringed, please contact the author through the footer; I will immediately delete the relevant content.
Categories
_NOTES_ General Relativity by Wald Wald C1

[Notes] General Relativity [C1]

Acknowledgement

All notes under this category are based on Prof. Robert M. Wald's great work. If you feel your copyright is infringed, please contact the author through the footer; I will immediately delete the relevant content.
Categories
_NOTES_ General Relativity

MTW chapter1: Geometry in Brief

Categories
_NOTES_ Uncategorized

Thanks 4 U support!

Sorry for being away for quite a long while. I was preparing for my finals, delayed due to the pandemic, which will finally end by March 5th. If everything goes as scheduled, I will upload some learning notes on MTW (the huge black book on gravitation) after the exams.

As a sophomore in my second semester, I will officially be taking electrodynamics and atomic physics this semester, so I will also upload materials on these topics if possible.

There is still a long way to go in learning physics, and fortunately I still enjoy it! It's my great luck to have all your support along this fantastic journey!

Categories
_NOTES_ MIT8.04 Quantum Physics 1 PRLabour Notes

PRLabour Notes on MIT8.04 Quantum Physics 1 [Category]

MIT8.04 [part1] Linearity and superposition, linear operators, Schrödinger equation, necessity of complex numbers, Mach-Zehnder interferometer, polarizer experiment and spin experiment…
MIT8.04 [part2] Mach-Zehnder interferometer and Elitzur-Vaidman bomb
MIT8.04 [part4] Galilean transformation of de Broglie wavelength. Wave-packets and group velocity.
MIT8.04 [part5] Matter wave for a particle. Momentum and position operators. Schrödinger equation.
MIT8.04 [part6] Interpretation of the wavefunction. Probability density, probability current. Current conservation. Hermitian operators.
MIT8.04 [part7] Expectation values of $\hat{x}$. Wave-packets and uncertainty. Time evolution of wave-packets. Shape changes. Fourier transforms and Parseval theorem.
MIT8.04 [part8] Momentum expectation values. General definition of expectation values of Hermitian operators. Time derivative of expectation values (Ehrenfest theorem).
MIT8.04 [part9] Hermitian operators as observables: real eigenvalues, orthogonal eigenfunctions. Measurement postulate. Uncertainty defined. Uncertainty relation stated.
MIT8.04 [part10] Stationary states. Boundary conditions for the wavefunction. Particle on a circle.
MIT8.04 [part11] Finite and infinite square well
MIT8.04 [part12] The Dirac well and scattering off the finite step
MIT8.04 [part13] Δ function potential and harmonic oscillator
MIT8.04 [part14] Harmonic oscillator
MIT8.04 [part15] Algebraic approach to the simple harmonic oscillator
MIT8.04 [part16] Scattering states and step potential
Categories
_NOTES_ MIT8.04 Quantum Physics 1 PRLabour Notes

PRLabour’s note on MIT8.04 [part 16]

MIT8.04 [part16]

Scattering states and step potential