Federated Learning allows multiple parties to collaboratively train a model while keeping their data local. Two main concerns when using Federated Learning are communication costs and privacy. Partial Weight Sharing (PWS) is a technique proposed to significantly reduce communication costs and increase privacy. However, PWS is insecure, since the original data can be reconstructed from the partial gradients through so-called inversion attacks. In this paper, we propose a novel method that combines PWS with Secure Multi-Party Computation, a privacy-preserving technique. Our method has clients share the same part of their gradient and add noise to those entries; the noise cancels out on aggregation. We show that this method preserves privacy without decreasing accuracy compared to existing methods.
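
To make the cancellation idea concrete, the following is a minimal sketch (with assumed names, indices, and setup, not the paper's exact protocol) of pairwise additive masking over a commonly agreed subset of gradient entries: each pair of clients derives an identical mask from a shared seed, one adds it and the other subtracts it, so every mask cancels when the server sums the partial updates.

```python
import itertools
import numpy as np

GRAD_DIM = 10                         # hypothetical model size
SHARED_IDX = np.array([0, 2, 5, 7])   # entries all clients agree to share

def masked_update(grad, client_id, all_ids, pair_seeds):
    """Return this client's shared gradient entries with pairwise noise applied."""
    update = grad[SHARED_IDX].copy()
    for other in all_ids:
        if other == client_id:
            continue
        # Both clients in a pair derive the same mask from a shared seed.
        rng = np.random.default_rng(pair_seeds[frozenset((client_id, other))])
        mask = rng.normal(size=SHARED_IDX.size)
        # The lower-id client adds the mask; the higher-id client subtracts
        # the identical mask, so the pair's masks cancel in the aggregate.
        update += mask if client_id < other else -mask
    return update

clients = [0, 1, 2]
pair_seeds = {frozenset(p): i
              for i, p in enumerate(itertools.combinations(clients, 2))}

rng = np.random.default_rng(42)
grads = {c: rng.normal(size=GRAD_DIM) for c in clients}

aggregate = sum(masked_update(grads[c], c, clients, pair_seeds) for c in clients)
true_sum = sum(grads[c][SHARED_IDX] for c in clients)
assert np.allclose(aggregate, true_sum)  # masks cancel; server sees only the sum
```

Under these assumptions, an individual masked update reveals nothing about a client's true gradient entries, while the server still recovers the exact sum needed for aggregation.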