Identifying anomalies in large data sets is an area of research with many practical applications. Auto-associative neural network architectures, such as autoencoders and replicator neural networks, identify anomalies by modeling normality and detecting deviations from the normal state. In contrast to autoencoders, the mechanisms that enable replicator neural networks to detect anomalies are not well understood. In this research, we provide an explanation of how replicator neural networks detect anomalies, analogous to the explanation that currently exists for autoencoders. By analyzing the reconstruction manifolds of both techniques, we formulate several advantages and disadvantages of replicator neural networks relative to autoencoders. These theoretical advantages and disadvantages are then evaluated in a simulation study, where we show that, while the autoencoder is superior in most scenarios, the replicator neural network performs especially well for certain types of anomalies in data that contain clear segments. Finally, the two methods are compared empirically on three publicly available datasets.
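To make the reconstruction-based detection principle mentioned above concrete, the following minimal sketch trains a small autoencoder on "normal" data and flags points with large reconstruction error as anomalies. This is purely illustrative and not the authors' implementation; the network sizes, synthetic data, and 99th-percentile threshold are assumptions chosen for the example.

```python
# Minimal sketch (not the paper's code): reconstruction-error anomaly
# detection with a small autoencoder on synthetic 2-D Gaussian data.
import torch
import torch.nn as nn

torch.manual_seed(0)

# "Normal" training data: 2-D points clustered near the origin.
normal = torch.randn(1000, 2) * 0.5

# Autoencoder with a 1-D bottleneck learns to model the normal state.
model = nn.Sequential(
    nn.Linear(2, 8), nn.Tanh(),
    nn.Linear(8, 1), nn.Tanh(),   # bottleneck
    nn.Linear(1, 8), nn.Tanh(),
    nn.Linear(8, 2),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for _ in range(300):
    opt.zero_grad()
    loss = loss_fn(model(normal), normal)
    loss.backward()
    opt.step()

# Points far from the learned reconstruction manifold reconstruct poorly,
# so a large reconstruction error signals an anomaly.
with torch.no_grad():
    errors = ((model(normal) - normal) ** 2).mean(dim=1)
    threshold = errors.quantile(0.99)          # illustrative cut-off
    test = torch.tensor([[0.1, -0.2], [4.0, 4.0]])
    scores = ((model(test) - test) ** 2).mean(dim=1)
    print((scores > threshold).tolist())       # e.g. [False, True]
```

A replicator neural network follows the same scheme but replaces the bottleneck activation with a staircase-like function, which is the design choice whose effect on the reconstruction manifold this paper analyzes.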