https://cis.temple.edu/~jiewu/research/publications/Publication_files/FedCPD.pdf
Theorem 2 (Non-convex FedCPD convergence). Let $e \in \{1, 2, \ldots, E\}$ index the training epochs, and let the learning-rate decay factor lie in $(0, 1)$. If the learning rate for each epoch satisfies the stated condition (a bound involving $L_1$, $L_2$, $E$, $\eta_0$, and $G^2$), the loss function decreases monotonically, leading to convergence.
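As an illustration of the decayed schedule the theorem assumes, here is a minimal Python sketch of a geometrically decaying learning rate; `gamma` (decay factor) and `eta0` (initial rate) are illustrative names, and the theorem's actual bound on the rates is not reproduced here.

```python
# Sketch of a geometrically decayed learning-rate schedule, as a plausible
# reading of Theorem 2's setup; names are illustrative, not from the paper.
def decayed_learning_rates(eta0: float, gamma: float, num_epochs: int) -> list[float]:
    """Return eta_e = eta0 * gamma**e for e = 1, ..., num_epochs."""
    assert 0.0 < gamma < 1.0, "decay factor must lie in (0, 1)"
    return [eta0 * gamma**e for e in range(1, num_epochs + 1)]

rates = decayed_learning_rates(eta0=0.1, gamma=0.9, num_epochs=5)
print(rates)  # monotonically decreasing rates
```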
https://faculty.cst.temple.edu/~szyld/reports/randCholQR_rev2_report.pdf
Only the triangular factor $\hat{R}$ is needed, so some (exactly) orthogonal $Q_{\mathrm{tmp}}$ exists such that

$$Q_{\mathrm{tmp}} \hat{R} = \hat{W} + E_2 = S_2 S_1 V + E_1 + E_2. \qquad (17)$$

Analysis of $E_2$ is provided in Section 5.2.4. In step 3, solving the triangular system $Q \hat{R} = V$ also creates errors. These are analyzed in a row-wise fashion in Section 5.2.5, taking the form
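For context, a minimal NumPy sketch of the basic Cholesky-QR pattern the excerpt analyzes: form a Gram matrix, take its Cholesky factor as $R$, and recover $Q$ by a triangular solve. The randomized sketching steps ($S_1$, $S_2$) and the error terms $E_1$, $E_2$ from the paper are omitted; this is only the unsketched baseline.

```python
import numpy as np

def cholesky_qr(V: np.ndarray):
    """Baseline (unrandomized) CholeskyQR: V = Q R with R upper triangular.

    The paper's method sketches V before the Cholesky step; here we use the
    plain Gram matrix V^T V for illustration only.
    """
    G = V.T @ V                   # Gram matrix (can be ill-conditioned)
    R = np.linalg.cholesky(G).T   # upper-triangular factor, G = R^T R
    # Step 3 of the excerpt: recover Q from the triangular system Q R = V,
    # equivalently R^T Q^T = V^T.
    Q = np.linalg.solve(R.T, V.T).T
    return Q, R

rng = np.random.default_rng(0)
V = rng.standard_normal((100, 5))
Q, R = cholesky_qr(V)
print(np.allclose(Q @ R, V), np.allclose(Q.T @ Q, np.eye(5)))
```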
https://cis.temple.edu/~jiewu/research/publications/Publication_files/ICDE2024_Online_Federated_Learning_on_Distributed_Unknown_Data_Using_UAVs.pdf
For the energy consumption during the learning phase, we set e1 = 0.01 J and e2 = 80 J [18]. To better align with real-world data collection scenarios, we design fine-grained PoI data models from three perspectives: data distribution, data generation patterns, and data quality.
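A minimal sketch of how such an experimental configuration might be encoded; only the e1/e2 values and the three data-model perspectives come from the excerpt, while the class and the placeholder field values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class PoIExperimentConfig:
    """Hypothetical container for the excerpt's setup; e1/e2 values are
    from the excerpt (roles per [18]), other fields are illustrative."""
    e1_joules: float = 0.01   # learning-phase energy parameter (excerpt)
    e2_joules: float = 80.0   # learning-phase energy parameter (excerpt)
    # The three fine-grained PoI data-model perspectives named above:
    data_distribution: str = "iid"          # placeholder choice
    data_generation_pattern: str = "burst"  # placeholder choice
    data_quality: float = 1.0               # placeholder score

print(PoIExperimentConfig())
```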
https://cis.temple.edu/~latecki/Courses/AI-Fall12/Lectures/GreatMatrixIntro.pdf
$$\nabla_{\mu}[\varphi(x;\mu)] = \varphi(x;\mu) \cdot V_x^{-1}(x - \mu) \qquad (15.40b)$$

Example 13. Consider obtaining the least-squares solution for the general linear model, $y = X\beta + e$, where we wish to find the value of $\beta$ that minimizes the residual error given $y$ and $X$. In matrix form,

$$\sum_{i=1}^{n} e_i^2 = e^T e = (y - X\beta)^T (y - X\beta) = y^T y - \beta^T X^T y - y^T X\beta + \beta^T X^T X \beta = y^T y - 2\beta^T X^T y + \beta^T X^T X \beta,$$

where the two cross terms combine because each is a scalar and one is the transpose of the other.
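To make the example concrete, a small NumPy sketch minimizing $e^T e$ for the general linear model: setting the gradient of the quadratic above to zero gives the normal equations $X^T X \hat{\beta} = X^T y$. The data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 3
X = rng.standard_normal((n, p))
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + 0.1 * rng.standard_normal(n)

# Normal equations: minimizing (y - X b)^T (y - X b) gives X^T X b = X^T y.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Cross-check against NumPy's least-squares solver.
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(beta_hat, beta_lstsq))  # True
print(beta_hat)
```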
https://ronlevygroup.cst.temple.edu/courses/2016_fall/biost5312/lectures/biostat_lecture_10.pdf
3. For any two data points (x1, y1), (x2, y2), the error terms e1, e2 are independent of each other. These assumptions may be tested using several different kinds of plots, the simplest being the x-y scatter plot: plot the dependent variable y against the independent variable x and superimpose the regression line y = a + bx on the same plot.
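A minimal matplotlib sketch of that diagnostic plot, with synthetic data and the slope/intercept fitted by least squares; the data and variable names are illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, size=40)
y = 1.5 + 0.8 * x + rng.normal(scale=1.0, size=40)

# Fit y = a + b x by least squares (polyfit returns highest degree first).
b, a = np.polyfit(x, y, deg=1)

plt.scatter(x, y, label="data")
xs = np.linspace(x.min(), x.max(), 100)
plt.plot(xs, a + b * xs, color="red", label=f"y = {a:.2f} + {b:.2f}x")
plt.xlabel("x (independent)")
plt.ylabel("y (dependent)")
plt.legend()
plt.show()
```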
https://cis.temple.edu/~wu/research/publications/Publication_files/ICDE2024_Xu.pdf
Similar to Lemma 1, we can also prove that the mapping satisfies assumptions (H1)-(H5) in [37]. Given this contraction mapping, a unique fixed point exists by the fixed-point theorem; that is, the fixed point can be found by iterating the mapping from an arbitrary initial point.
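A minimal sketch of that fixed-point iteration for a generic contraction mapping `f`; the example mapping and tolerance are illustrative, not the mapping from [37].

```python
def fixed_point(f, x0: float, tol: float = 1e-10, max_iter: int = 1000) -> float:
    """Iterate x <- f(x) from an arbitrary start; for a contraction this
    converges to the unique fixed point (Banach fixed-point theorem)."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("did not converge within max_iter iterations")

# Example: f(x) = cos(x) is a contraction on [0, 1]; its unique fixed
# point is approximately 0.739085, reached from any starting point.
import math
print(fixed_point(math.cos, x0=0.0))
```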