We present an artificial neural network that self-organizes, in an unsupervised manner, to form a sparse distributed representation of the underlying causes in data sets. This coding is achieved by adding several rectification constraints, based on our prior beliefs about the data, to a PCA network. Through experimentation we compare the relative performance of these rectifications when applied to the weights and/or outputs of the network. We find that an exponential function applied to the network outputs is the most reliable at discovering all the causes in a data set, even when the input data are strongly corrupted by random noise. Preprocessing the inputs so that each has unit variance is very effective in recovering all underlying causes when the power of each cause varies. The resulting network methodologies are straightforward yet extremely robust over many trials.
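To make the ingredients concrete, the following is a minimal sketch of the kind of network described: inputs rescaled to unit variance, an exponential rectification on the outputs, nonnegativity enforced on the weights, and a simple Oja-style Hebbian update. All specifics here (the synthetic data, learning rate, clipping, and the exact update rule) are illustrative assumptions, not the paper's actual learning rules.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: each sample is a sparse mixture of a few underlying "causes"
# (hypothetical ground truth, stand-in for the paper's data sets).
n_causes, n_inputs, n_samples = 4, 16, 2000
causes = rng.random((n_causes, n_inputs))
active = rng.random((n_samples, n_causes)) < 0.2     # each cause present sparsely
X = active @ causes + 0.05 * rng.standard_normal((n_samples, n_inputs))

# Preprocessing: rescale each input dimension to unit variance.
X = X / X.std(axis=0)

# Single-layer network with rectified (nonnegative) weights and an
# exponential nonlinearity on the outputs, trained with a Hebbian-style
# rule plus Oja-like decay (an illustrative stand-in for the paper's rules).
W = rng.random((n_causes, n_inputs)) * 0.1
lr = 0.01
for x in X:
    y = np.exp(np.minimum(W @ x, 3.0) - 1.0)          # exponential output rectification
    W += lr * (np.outer(y, x) - y[:, None] ** 2 * W)  # Hebbian update with decay
    W = np.maximum(W, 0.0)                            # rectify weights to stay nonnegative
```

After training, each output unit's weight vector would be compared against the generating causes; the abstract's claim is that the exponential output rectification recovers all of them most reliably across trials.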