Quiz Cover

Math for Machine Learning - Chapter 13: Orthogonality and Orthogonal Matrices

Created by Shiju P John · 11/4/2025

📚 Subject

math for machine learning

🎓 Exam

Any

🗣 Language

English

🎯 Mode

Practice

🚀 Taken

0 times

Verified:

No. of Questions

35

Availability

Free


📄 Description

This quiz assesses deep understanding of orthogonality and orthogonal matrices, concepts central to the mathematical foundations of machine learning and numerical analysis. Questions cover theoretical properties, computational implications, and applications in algorithms such as QR decomposition and attention mechanisms. Learners will encounter challenging problems that demand a strong grasp of linear algebra and numerical stability, and of their role in advanced machine learning contexts. Key formulas include the following (a brief code sketch illustrating several of them appears after the list):

  • Orthogonal Matrix Definition: Q^T Q = Q Q^T = I

  • Preservation of Length (Euclidean Norm): \|Qx\| = \|x\|

  • Preservation of Inner Product/Angle: (Qx) \cdot (Qy) = x \cdot y

  • Determinant of an Orthogonal Matrix: \det(Q) = \pm 1

  • Orthonormal Basis Property: u_i^T u_j = \delta_{ij} (Kronecker delta)

  • Projection onto a Subspace W with Orthonormal Basis U: P_W = U U^T

  • QR Decomposition: A = QR, where Q is orthogonal (or has orthonormal columns) and R is upper triangular.

  • Least Squares Solution via QR: For Ax \approx b, the solution \hat{x} minimizes \|Ax - b\|^2 and satisfies R\hat{x} = Q^T b.

  • Scaled Dot-Product Attention: \mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^T}{\sqrt{d_k}}\right)V
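
A minimal NumPy sketch, purely illustrative and not part of the quiz, showing how several of the properties above can be checked numerically: it constructs an orthogonal matrix via QR, verifies Q^T Q = I, |det(Q)| = 1, and preservation of lengths and inner products, then solves a small least-squares problem through R x̂ = Q^T b. All variable names are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Obtain an orthogonal 4x4 matrix Q from the QR factorization of a random matrix.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))

# Q^T Q = I (orthogonality) and det(Q) = +/- 1.
assert np.allclose(Q.T @ Q, np.eye(4))
assert np.isclose(abs(np.linalg.det(Q)), 1.0)

# Lengths and inner products are preserved: ||Qx|| = ||x|| and (Qx).(Qy) = x.y.
x, y = rng.standard_normal(4), rng.standard_normal(4)
assert np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x))
assert np.isclose((Q @ x) @ (Q @ y), x @ y)

# Least squares via QR: for Ax ~ b with tall A, solve R x_hat = Q^T b.
A = rng.standard_normal((6, 3))     # full column rank (almost surely)
b = rng.standard_normal(6)
Q_a, R_a = np.linalg.qr(A)          # reduced QR: Q_a is 6x3, R_a is 3x3 upper triangular
x_hat = np.linalg.solve(R_a, Q_a.T @ b)

# Same minimizer as NumPy's built-in least-squares solver.
assert np.allclose(x_hat, np.linalg.lstsq(A, b, rcond=None)[0])
```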

🏷 Tags

#orthogonality #orthogonal matrices #QR decomposition #orthonormal basis #machine learning #numerical stability #linear algebra #attention mechanisms #eigenvalues #matrix transformations

🔗 Resource

internal
