Accounting for the impact of recommendations on item providers is one of the duties of multi-sided recommender systems. Item providers are key stakeholders in online platforms, and their earnings and plans are influenced by the exposure their items receive in recommended lists. Prior work showed that certain minority groups of providers, characterized by a common sensitive attribute (e.g., gender or race), are disproportionately affected by indirect and unintentional discrimination. Existing fairness-aware frameworks show their limits when all of the following conditions hold: (i) the same provider is associated with multiple items in a list suggested to a user, (ii) an item is created jointly by more than one provider, and (iii) the predicted user-item relevance scores are estimated with a systematic bias for items of certain provider groups. Under this scenario, we characterize provider (un)fairness with a novel metric that calls for equity of relevance scores among provider groups, proportionally to their contribution to the catalog. We assess this form of equity on synthetic data, simulating diverse representations of the minority group in the catalog and in the observations. Based on the lessons learned, we devise a treatment that combines observation upsampling and loss regularization while learning user-item relevance scores. Experiments on real-world data show that our treatment leads to higher equity of relevance scores. The resulting recommended items provide fairer visibility and exposure, wider coverage of minority-group items, and no or limited loss in recommendation utility.
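To make the two-part treatment concrete, the sketch below is a minimal illustration of the general idea, not the paper's implementation: all names, the toy interaction log, and the group split are hypothetical. It upsamples minority-group observations until their share of the training log matches the minority share of the catalog, and defines an equity penalty, addable to a training loss, that measures the gap between a group's share of predicted relevance and its share of the catalog.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: items 0-7 belong to majority-group providers,
# items 8-9 to minority-group providers (20% of the catalog).
item_group = np.array([0] * 8 + [1] * 2)  # 0 = majority, 1 = minority

# Toy interaction log (user, item): minority items are underrepresented
# (2 of 58 observations, ~3.4%, versus 20% of the catalog).
items = np.concatenate([np.tile(np.arange(8), 7), [8, 9]])
users = np.arange(items.size) % 20
interactions = np.column_stack([users, items])

def upsample(interactions, item_group, minority=1):
    """Replicate minority-group observations until their share of the
    training log matches the minority share of the catalog."""
    is_min = item_group[interactions[:, 1]] == minority
    target = (item_group == minority).mean()  # catalog share
    n_min, n_maj = int(is_min.sum()), int((~is_min).sum())
    if n_min == 0:
        return interactions  # nothing to replicate
    # n_min' such that n_min' / (n_min' + n_maj) == target
    need = int(np.ceil(target * n_maj / (1 - target))) - n_min
    if need <= 0:
        return interactions
    extra = interactions[is_min][rng.choice(n_min, size=need)]
    return np.vstack([interactions, extra])

def equity_penalty(scores, item_group, minority=1):
    """Squared gap between the minority group's share of total predicted
    relevance and its share of the catalog (the equity target); this term
    would be added to the model's loss as a regularizer."""
    catalog_share = (item_group == minority).mean()
    score_share = scores[item_group == minority].sum() / scores.sum()
    return (score_share - catalog_share) ** 2

balanced = upsample(interactions, item_group)
minority_share = (item_group[balanced[:, 1]] == 1).mean()
```

After upsampling, the minority share of the log sits near the 20% catalog share, and the penalty is zero exactly when predicted relevance is distributed across groups in proportion to catalog contribution.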