License: Creative Commons Attribution 4.0 International license (CC BY 4.0)
When quoting this document, please refer to the following
DOI: 10.4230/LIPIcs.CCC.2022.22
URN: urn:nbn:de:0030-drops-165846
URL: http://dagstuhl.sunsite.rwth-aachen.de/volltexte/2022/16584/
Aggarwal, Amol; Alman, Josh
Optimal-Degree Polynomial Approximations for Exponentials and Gaussian Kernel Density Estimation
Abstract
For any real numbers B ≥ 1 and δ ∈ (0,1) and function f: [0,B] → ℝ, let d_{B;δ}(f) ∈ ℤ_{>0} denote the minimum degree of a polynomial p(x) satisfying sup_{x ∈ [0,B]} |p(x) - f(x)| < δ. In this paper, we provide precise asymptotics for d_{B;δ}(e^{-x}) and d_{B;δ}(e^{x}) in terms of both B and δ, improving both the previously known upper bounds and lower bounds. In particular, we show d_{B;δ}(e^{-x}) = Θ(max{√(B log(δ^{-1})), log(δ^{-1}) / log(B^{-1} log(δ^{-1}))}) and d_{B;δ}(e^{x}) = Θ(max{B, log(δ^{-1}) / log(B^{-1} log(δ^{-1}))}), and we explicitly determine the leading coefficients in most parameter regimes.
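The quantity d_{B;δ}(e^{-x}) can be estimated numerically: interpolating at Chebyshev nodes is near-optimal for smooth functions, so the smallest degree whose Chebyshev interpolant meets the error bound gives an upper bound on d_{B;δ}(e^{-x}) that is close to tight. The sketch below (not from the paper; the helper name and grid size are illustrative choices) searches for that degree:

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

def min_cheb_degree(f, B, delta, max_deg=200):
    """Smallest degree d such that the Chebyshev interpolant of f
    on [0, B] has sup error below delta (estimated on a fine grid).
    Since Chebyshev interpolation is near-optimal in the sup norm,
    this upper-bounds d_{B;delta}(f) up to a small factor."""
    xs = np.linspace(0.0, B, 10_000)
    fx = f(xs)
    for d in range(max_deg + 1):
        p = Chebyshev.interpolate(f, d, domain=[0.0, B])
        if np.max(np.abs(p(xs) - fx)) < delta:
            return d
    return None  # not achieved within max_deg

B, delta = 16.0, 1e-6
d = min_cheb_degree(np.exp, B, delta)          # e^x on [0, B]
d_neg = min_cheb_degree(lambda x: np.exp(-x), B, delta)  # e^{-x}
```

For B = 16 and δ = 10^{-6}, the theorem's √(B log(δ^{-1})) term predicts a degree on the order of √(16 · ln 10^6) ≈ 15 for e^{-x}, which this kind of search reproduces up to constants.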
Polynomial approximations for e^{-x} and e^x have applications to the design of algorithms for many problems, including in scientific computing, graph algorithms, machine learning, and statistics. Our degree bounds show both the power and limitations of these algorithms.
We focus in particular on the Batch Gaussian Kernel Density Estimation problem for n sample points in Θ(log n) dimensions with error δ = n^{-Θ(1)}. We show that the running time one can achieve depends on the square of the diameter of the point set, B, with a transition at B = Θ(log n) mirroring the corresponding transition in d_{B; δ}(e^{-x}):
- When B = o(log n), we give the first algorithm running in time n^{1 + o(1)}.
- When B = κ log n for a small constant κ > 0, we give an algorithm running in time n^{1 + O(log log(κ^{-1}) / log(κ^{-1}))}. The log log(κ^{-1}) / log(κ^{-1}) term in the exponent comes from analyzing the behavior of the leading constant in our computation of d_{B;δ}(e^{-x}).
- When B = ω(log n), we show that time n^{2 - o(1)} is necessary assuming SETH.
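For concreteness, in the standard formulation of Batch Gaussian KDE each query point is scored against the whole sample via an unnormalized Gaussian kernel sum; the quadratic-time baseline that the SETH lower bound matches (and that the polynomial-method algorithms beat when B = o(log n)) can be sketched as follows. This is a generic illustration, not the paper's algorithm:

```python
import numpy as np

def batch_gaussian_kde(X):
    """Naive batch KDE: for each point y in X, compute
    sum_i exp(-||x_i - y||^2). Runs in O(n^2 d) time; the paper
    shows this is beatable when the squared diameter B of the
    point set is o(log n), and essentially optimal under SETH
    when B = omega(log n)."""
    sq = np.sum(X * X, axis=1)
    # ||x_i - y_j||^2 = ||x_i||^2 + ||y_j||^2 - 2 <x_i, y_j>
    D2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    np.maximum(D2, 0.0, out=D2)  # guard tiny negatives from roundoff
    return np.exp(-D2).sum(axis=0)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 7))  # n = 100 points in ~log n dimensions
vals = batch_gaussian_kde(X)
```

Replacing exp(-D2) with a degree-d polynomial in D2 is what lets the kernel matrix be applied implicitly via low-rank structure, which is why the minimum degree d_{B;δ}(e^{-x}) governs the achievable running time.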
BibTeX Entry
@InProceedings{aggarwal_et_al:LIPIcs.CCC.2022.22,
author = {Aggarwal, Amol and Alman, Josh},
title = {{Optimal-Degree Polynomial Approximations for Exponentials and Gaussian Kernel Density Estimation}},
booktitle = {37th Computational Complexity Conference (CCC 2022)},
pages = {22:1--22:23},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-241-9},
ISSN = {1868-8969},
year = {2022},
volume = {234},
editor = {Lovett, Shachar},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/opus/volltexte/2022/16584},
URN = {urn:nbn:de:0030-drops-165846},
doi = {10.4230/LIPIcs.CCC.2022.22},
annote = {Keywords: polynomial approximation, kernel density estimation, Chebyshev polynomials}
}
Keywords: polynomial approximation, kernel density estimation, Chebyshev polynomials
Collection: 37th Computational Complexity Conference (CCC 2022)
Issue Date: 2022
Date of publication: 11.07.2022