https://www.cla.temple.edu/RePEc/documents/DETU_18_04.pdf
CvM_n = \sum_{g=2} \sum_{t=2} 1\{g > t\} \int |\hat{J}_n(u; g, t; \hat{p}_g)|^2 \, F_{n,X}(du)   (4.5)

This choice of test statistic is similar to the one used by Escanciano (2008) in a different context.
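As an illustration of the general shape of such a Cramér–von Mises statistic (integrating a squared empirical process against the empirical distribution F_{n,X}, i.e., averaging over the sample points), here is a minimal Python sketch. The process below is a generic residual-marked empirical process, a stand-in assumption, not the paper's \hat{J}(u; g, t; \hat{p}_g):

```python
import numpy as np

def cvm_statistic(x, resid):
    """Cramér–von Mises-type statistic: the empirical process
    J_n(u) = n^{-1/2} * sum_i resid_i * 1{x_i <= u} is squared and
    integrated against F_{n,X} (an average over the sample points)."""
    n = len(x)
    # indicator matrix: ind[i, j] = 1{x_i <= x_j}
    ind = (x[:, None] <= x[None, :]).astype(float)
    J = ind.T @ resid / np.sqrt(n)   # J_n evaluated at each sample point x_j
    return np.mean(J ** 2)           # ∫ J_n(u)^2 F_{n,X}(du)

rng = np.random.default_rng(0)
x = rng.normal(size=200)
resid = x ** 2 - 1.0                 # mean-zero residuals under this toy null
stat = cvm_statistic(x, resid)
```

Large values of the statistic indicate that the residuals co-vary systematically with the conditioning variable, which is what a specification test of this type is designed to detect.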
https://cis.temple.edu/~jiewu/research/publications/Publication_files/FedCPD.pdf
Theorem 2 (Non-convex FedCPD convergence). Assume 0 < \eta_e < \eta_0 for each epoch e \in \{1, 2, \ldots, E\}, where \gamma represents the decay factor for the learning rate. If the learning rate for each epoch satisfies the following condition, the loss function decreases monotonically, leading to convergence:
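A decayed learning-rate schedule of the kind the theorem assumes can be sketched in a few lines of Python. The concrete schedule \eta_e = \gamma^e \eta_0 and the numeric values are illustrative assumptions, not the paper's condition:

```python
def decayed_rates(eta0, gamma, E):
    """Exponentially decayed learning rates eta_e = gamma**e * eta0
    (an assumed schedule). With decay factor 0 < gamma < 1 this gives
    0 < eta_e < eta0 for every epoch e in {1, ..., E}."""
    return [eta0 * gamma ** e for e in range(1, E + 1)]

rates = decayed_rates(eta0=0.1, gamma=0.9, E=5)
```

Each epoch's rate is a fixed fraction of the previous one, so the sequence is strictly decreasing and bounded below by zero, matching the monotone-decrease setting of the theorem.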
https://cst.temple.edu/sites/cst/files/theses1/bao.pdf
Note that in the case p = 2, an explicit basis of M_{2k}(\Gamma_0(2)) is given by \{(E'_2)^a E_4^b : 2a + 4b = 2k\}, where E'_2(z) = E_2(z) - 2E_2(2z) and E_2(z), E_4(z) are the Eisenstein series of weight 2 and 4, respectively (see page 56 of [14]). One checks that S_{10}(\Gamma_0(2)) is one-dimensional and the Fourier expansion of
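The exponent pairs indexing this basis are easy to enumerate, and counting them gives the dimension of M_{2k}(\Gamma_0(2)). A small Python sketch of the exponent arithmetic only (no q-expansions):

```python
def basis_exponents(two_k):
    """Exponent pairs (a, b) with a, b >= 0 and 2a + 4b = 2k; each pair
    indexes a monomial (E'_2)^a * E_4^b in the stated basis of
    M_{2k}(Gamma_0(2)), so the count is the dimension of that space."""
    # two_k is even, so a = (two_k - 4b) / 2 is a non-negative integer
    # exactly when 0 <= b <= two_k // 4
    return [((two_k - 4 * b) // 2, b) for b in range(two_k // 4 + 1)]

exps = basis_exponents(10)   # weight 10: [(5, 0), (3, 1), (1, 2)]
dim = len(exps)              # 3 = dim M_10(Gamma_0(2))
```

Since the Eisenstein subspace of M_10(\Gamma_0(2)) accounts for two of these three dimensions (one for each cusp), this count is consistent with S_{10}(\Gamma_0(2)) being one-dimensional.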