Denoising a stationary process X corrupted by additive white Gaussian noise Z, i.e., recovering X^n from Y^n = X^n + Z^n, is a classic and fundamental problem in information theory and statistical signal processing. Theoretically founded and computationally efficient denoising algorithms for general analog sources are yet to be found. In a Bayesian setup, given the distribution of X^n, a minimum mean square error (MMSE) denoiser computes E[X^n | Y^n]. However, for general sources, computing E[X^n | Y^n] is computationally very challenging, if not infeasible. In this paper, starting from a Bayesian setup, a novel denoising method, namely the quantized maximum a posteriori (Q-MAP) denoiser, is proposed and its asymptotic performance is analyzed. Both for memoryless sources and for structured first-order Markov sources, it is shown that, asymptotically, as the noise variance converges to zero, the mean squared error converges to the information dimension of the source. For the studied memoryless sources, this limit is known to be optimal. A key advantage of the Q-MAP denoiser, unlike an MMSE denoiser, is that it highlights the key properties of the source distribution that are used in its denoising. This property dramatically reduces the computational complexity of approximating the solution of the Q-MAP denoiser. Additionally, it naturally leads to a learning-based denoiser. Using the ImageNet database for training, initial simulation results exploring the performance of such a learning-based denoiser in image denoising are presented.
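As a toy illustration of the observation model Y^n = X^n + Z^n and of what an exact MMSE denoiser computes, the sketch below (not from the paper; the spike-and-slab source, the parameter values p and sigma, and all variable names are assumptions introduced for illustration) denoises a scalar mixed discrete-continuous source, for which E[X | Y] happens to have a closed form. For general high-dimensional sources no such closed form exists, which is the computational difficulty the abstract refers to.

```python
import numpy as np

# Hypothetical scalar example: "spike-and-slab" source under additive white
# Gaussian noise, Y = X + Z.  X = 0 with probability p (discrete spike) and
# X ~ N(0, 1) with probability 1 - p (continuous slab).
rng = np.random.default_rng(0)
p, sigma = 0.7, 0.1          # assumed spike probability and noise std
n = 10_000

spike = rng.random(n) < p
x = np.where(spike, 0.0, rng.standard_normal(n))
y = x + sigma * rng.standard_normal(n)

# For this two-component mixture, E[X | Y] is the posterior weight of the
# slab component times the slab's posterior mean y / (1 + sigma^2).
var_slab = 1.0 + sigma**2
lik_spike = p * np.exp(-y**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
lik_slab = (1 - p) * np.exp(-y**2 / (2 * var_slab)) / np.sqrt(2 * np.pi * var_slab)
w_slab = lik_slab / (lik_spike + lik_slab)
x_hat = w_slab * (y / var_slab)

mse = np.mean((x_hat - x) ** 2)
print(f"MMSE denoiser MSE: {mse:.5f}  (noise variance: {sigma**2:.5f})")
```

For this source the information dimension is 1 - p, so as the noise variance shrinks the MSE is expected to scale like the noise variance times 1 - p rather than like the full noise variance, consistent with the information-dimension limit described above.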