[3Xin2-61] Evaluating the Capability of Transformers in Computational Algebra
Keywords: Transformer, computational algebra, machine learning
Recent studies have revealed that Transformers, among the most successful deep neural networks in modern machine learning, can learn to solve algebraic computations from many pairs of problems and solutions. For example, symbolic integration can be learned through supervised learning on many functions and their corresponding primitive functions. Unlike mathematical studies, this learning approach requires no algorithm design. Most prior studies individually address challenging algebraic problems, including NP-hard ones, but a comprehensive investigation of the learning approach is still lacking. In this study, we analyze the ability of general-purpose Transformers to learn algebraic computations across several computational algebra problems. Our experiments show that Transformers successfully learn polynomial factorization, polynomial greatest common divisor computation, and polynomial factor counting for polynomials with integer coefficients. Interestingly, when finite field coefficients are used instead, we observe a significant performance degradation. We also observe that the input encoding scheme affects performance.
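A common way such supervised datasets are built (an assumption for illustration; the abstract does not specify the authors' pipeline) is forward generation: sample random factors, multiply them out, and use the expanded polynomial as the input with the factor list as the target. A minimal sketch for the integer-coefficient factorization task:

```python
import random

def random_linear_factor():
    """Sample a monic linear factor (x + a) with a small integer constant.

    Coefficients are stored low-degree first: [a, 1] means a + x.
    """
    return [random.randint(-5, 5), 1]

def poly_mul(p, q):
    """Multiply two polynomials given as low-degree-first coefficient lists."""
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

def make_pair(num_factors=2):
    """Return one (problem, solution) training pair.

    The problem is the expanded polynomial; the solution is its factor list.
    """
    factors = [random_linear_factor() for _ in range(num_factors)]
    expanded = factors[0]
    for f in factors[1:]:
        expanded = poly_mul(expanded, f)
    return expanded, factors
```

Each coefficient list would then be serialized into tokens (e.g., digits and sign symbols) before being fed to the Transformer; the choice of that encoding is exactly the kind of design decision the abstract reports as affecting performance.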