Gaussian processes are the dominant method for nonparametric regression on small to medium-sized datasets, but kernel selection and hyperparameter optimization remain a central challenge. We propose a new regression method that requires no specification of kernels, length scales, variances, or prior means; its only hyperparameter is the assumed regularity (differentiability) of the true function. We achieve this with a novel non-Gaussian stochastic process built from the minimal assumptions of translation and scale invariance. This process can be viewed as a hierarchical Gaussian process model whose hyperparameters are built into the process itself. We develop the mathematical tools needed to perform inference in this process. For interpolation, we show that the posterior is a t-process with the polyharmonic spline as its mean. For regression, we derive the exact posterior distribution, whose mean is again the polyharmonic spline and whose variance we approximate by sampling. Experiments show performance comparable to that of Gaussian processes with optimized hyperparameters. The most important insight is that working machine learning methods can be derived by assuming only regularity together with scale and translation invariance, without any other model assumptions.
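To make the posterior mean mentioned above concrete, the following is a minimal sketch of one-dimensional polyharmonic spline interpolation. The function name, the choice of exponent k=3, and the linear polynomial tail are illustrative assumptions on our part, not details taken from the paper:

```python
import numpy as np

def polyharmonic_interpolate(x, y, x_new, k=3):
    """Interpolate (x, y) with a 1-D polyharmonic spline, phi(r) = r**k.

    The spline is augmented with a constant + linear polynomial term,
    which is the standard way to make the linear system well-posed.
    """
    n = len(x)
    # Pairwise distances between training points, and the RBF matrix.
    r = np.abs(x[:, None] - x[None, :])
    A = r ** k
    # Polynomial block: constant and linear terms.
    P = np.stack([np.ones_like(x), x], axis=1)
    # Assemble the saddle-point system [[A, P], [P^T, 0]].
    M = np.zeros((n + 2, n + 2))
    M[:n, :n] = A
    M[:n, n:] = P
    M[n:, :n] = P.T
    rhs = np.concatenate([y, np.zeros(2)])
    coef = np.linalg.solve(M, rhs)
    w, c = coef[:n], coef[n:]
    # Evaluate the spline at the query points.
    r_new = np.abs(x_new[:, None] - x[None, :])
    return (r_new ** k) @ w + np.stack([np.ones_like(x_new), x_new], axis=1) @ c
```

By construction the first n rows of the linear system force the spline through the data, so evaluating at the training points reproduces y exactly (up to floating-point error).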