The Kaczmarz algorithm is an iterative method for solving linear systems of equations of the form Ax = y. Because it acts on only a single row of A at a time, the algorithm has a low memory footprint and has gained popularity for its practicality in large-scale applications. In this talk, we discuss selecting rows of A randomly (Randomized Kaczmarz), selecting rows in a greedy fashion (Motzkin's Method), and selecting rows in a partially greedy fashion (the Sampling Kaczmarz-Motzkin algorithm). Despite their differing per-iteration computational costs, these algorithms have been proven to share the same theoretical upper bound on the convergence rate. Here we present an improvement upon previously known convergence bounds for the Sampling Kaczmarz-Motzkin algorithm, capturing the benefit of partially greedy selection schemes. Time permitting, we will also discuss an extension of the Kaczmarz algorithm to the setting where data takes the form of a tensor, and make connections between the new Tensor Kaczmarz algorithm and previously established algorithms. This presentation contains joint work with Jamie Haddock and Denali Molitor.
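To make the selection schemes concrete, here is a minimal NumPy sketch of the partially greedy Sampling Kaczmarz-Motzkin update; the function name, the sample-size parameter `beta`, and all default values are illustrative choices, not the speakers' implementation. Setting `beta = 1` recovers (uniform) Randomized Kaczmarz, and `beta = m` recovers Motzkin's greedy rule.

```python
import numpy as np

def skm(A, y, beta=5, iters=2000, rng=None):
    """Sampling Kaczmarz-Motzkin sketch: uniformly sample beta rows of A,
    then project the iterate onto the sampled row with the largest
    absolute residual (a hedged illustration, not a reference code)."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        # Partially greedy selection: random sample, then greedy within it.
        sample = rng.choice(m, size=min(beta, m), replace=False)
        residuals = A[sample] @ x - y[sample]
        i = sample[np.argmax(np.abs(residuals))]
        # Orthogonal projection of x onto the hyperplane {z : a_i . z = y_i}.
        x += (y[i] - A[i] @ x) / (A[i] @ A[i]) * A[i]
    return x

# Quick sanity check on a consistent random system.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
x_true = rng.standard_normal(10)
y = A @ x_true
x_hat = skm(A, y, beta=5, iters=2000, rng=1)
```

Each iteration touches only `beta` rows of A, which is the source of the method's low memory footprint; the trade-off between sample size and per-iteration cost is exactly what the improved convergence bound quantifies.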