Abstract: Feature regression is a simple way to distill large neural network models into lighter ones. In this work we show that, with simple changes to the network architecture, regression can outperform more complex state-of-the-art approaches for knowledge distillation from self-supervised models. Surprisingly, adding a multi-layer perceptron head to the CNN backbone is beneficial even if it is used only during distillation and discarded for the downstream task. Deeper non-linear projections can thus be used to accurately mimic the teacher without changing the inference architecture or inference time. We also utilize independent projection heads to distill multiple teacher networks simultaneously. Additionally, we find that using the same weakly augmented image as input to both the teacher and student networks is crucial for distillation. Experiments on the large-scale ImageNet dataset demonstrate the efficacy of the proposed changes in various self-supervised distillation settings.
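To make the setup concrete, the sketch below is our own minimal illustration of the idea described in the abstract, not the authors' code: a student backbone carries a detachable MLP projection head that regresses frozen teacher features computed from the same weakly augmented batch, and the head is discarded after distillation. The specific backbones, head dimensions, and the MSE regression loss are assumptions for the sake of the example.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Student backbone (kept for the downstream task); the classifier is removed
# so the backbone outputs 512-d features. ResNet-18 is an illustrative choice.
student_backbone = models.resnet18()
student_backbone.fc = nn.Identity()

# MLP projection head used only during distillation, then discarded.
# Hidden/output sizes (2048) are assumptions matching the teacher's feature dim.
proj_head = nn.Sequential(
    nn.Linear(512, 2048),
    nn.BatchNorm1d(2048),
    nn.ReLU(inplace=True),
    nn.Linear(2048, 2048),
)

# Frozen teacher, assumed to be a self-supervised ResNet-50 producing 2048-d features.
teacher = models.resnet50()
teacher.fc = nn.Identity()
teacher.eval()
for p in teacher.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.SGD(
    list(student_backbone.parameters()) + list(proj_head.parameters()), lr=0.1
)

def distill_step(weakly_augmented_batch: torch.Tensor) -> float:
    """One regression step: teacher and student see the same weakly augmented images."""
    with torch.no_grad():
        target = teacher(weakly_augmented_batch)
    pred = proj_head(student_backbone(weakly_augmented_batch))
    loss = nn.functional.mse_loss(pred, target)  # feature regression (assumed MSE)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# After distillation, only `student_backbone` is kept for inference;
# `proj_head` is discarded, so inference architecture and time are unchanged.
```

For multiple teachers, the same pattern would attach one independent projection head per teacher to the shared student backbone and sum the per-teacher regression losses.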