Score Distillation Sampling (SDS) has emerged as a highly effective technique for leveraging 2D diffusion priors for a diverse set of tasks, such as text-to-3D generation. While powerful, SDS still struggles to achieve fine-grained alignment with user intent. To overcome this limitation, we introduce RewardSDS, a novel approach that weights noise samples according to the alignment scores of a reward model, producing a weighted SDS loss. This loss prioritizes gradients from noise samples that yield aligned, high-reward outputs. Our approach is broadly applicable and can extend diverse SDS-based methods. In particular, we demonstrate its applicability to Variational Score Distillation (VSD) by introducing RewardVSD. We evaluate RewardSDS and RewardVSD on text-to-image, 2D editing, and text-to-3D generation tasks, demonstrating significant improvements over SDS and VSD on a diverse set of metrics measuring generation quality and alignment to desired reward models, and enabling state-of-the-art performance.
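To make the weighting concrete, here is a brief sketch in standard SDS notation; the weighted form and the symbols $\alpha_i$, $r_i$, and $k$ below are illustrative assumptions, not necessarily the paper's exact formulation. Standard SDS follows the gradient

\[
\nabla_\theta \mathcal{L}_{\mathrm{SDS}} = \mathbb{E}_{t,\epsilon}\!\left[ w(t)\left(\epsilon_\phi(x_t; y, t) - \epsilon\right)\frac{\partial x}{\partial \theta} \right],
\]

where $x$ is the rendered image, $\epsilon_\phi$ is the diffusion model's noise prediction conditioned on prompt $y$, and $w(t)$ is a timestep weighting. A reward-weighted variant instead draws $k$ noise samples $\epsilon^{(1)},\dots,\epsilon^{(k)}$, scores the denoised output of each with a reward model (yielding scores $r_i$), and combines the per-sample gradients with reward-derived weights $\alpha_i$ (e.g., a normalized function of the $r_i$):

\[
\nabla_\theta \mathcal{L}_{\mathrm{RewardSDS}} = \mathbb{E}_{t}\!\left[ w(t) \sum_{i=1}^{k} \alpha_i \left(\epsilon_\phi(x_t^{(i)}; y, t) - \epsilon^{(i)}\right)\frac{\partial x}{\partial \theta} \right].
\]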
Overview of our method: an image is first rendered from a given view, and random noise samples are applied at a given timestep. The noisy images are then scored by denoising them and applying a reward model to the outputs. These scores are mapped to corresponding weights, which weight the contribution of each noise sample in score distillation.
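A minimal PyTorch-style sketch of this scoring-and-weighting loop is given below. All names (reward_weighted_sds_grad, eps_model, reward_model, the temperature tau) are hypothetical, and the softmax mapping from reward scores to weights is an illustrative assumption rather than the paper's exact scheme.

import math
import torch
import torch.nn.functional as F

@torch.no_grad()
def reward_weighted_sds_grad(x, t, alpha_bar_t, eps_model, reward_model,
                             n_samples=4, tau=0.1):
    # x: rendered image at the current view; alpha_bar_t: cumulative
    # noise-schedule coefficient at timestep t. eps_model(x_t, t) predicts
    # the added noise; reward_model(img) returns a scalar alignment score.
    sqrt_ab = math.sqrt(alpha_bar_t)
    sqrt_1m_ab = math.sqrt(1.0 - alpha_bar_t)
    residuals, scores = [], []
    for _ in range(n_samples):
        eps = torch.randn_like(x)
        x_t = sqrt_ab * x + sqrt_1m_ab * eps      # forward-diffuse the rendering
        eps_hat = eps_model(x_t, t)
        # One-step denoised estimate, used only to score this noise sample
        x0_hat = (x_t - sqrt_1m_ab * eps_hat) / sqrt_ab
        scores.append(reward_model(x0_hat))
        residuals.append(eps_hat - eps)           # per-sample SDS residual
    # Map reward scores to weights; a softmax with temperature tau is one
    # illustrative choice of score-to-weight mapping
    weights = F.softmax(torch.stack(scores) / tau, dim=0)
    return sum(w * r for w, r in zip(weights, residuals))

In a distillation loop, the returned tensor would be injected as the gradient of the rendering (e.g., via x.backward(gradient=...)), just as the plain residual is applied in standard SDS implementations.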
@misc{chachy2025rewardsdsaligningscoredistillation,
  title={RewardSDS: Aligning Score Distillation via Reward-Weighted Sampling},
  author={Itay Chachy and Guy Yariv and Sagie Benaim},
  year={2025},
  eprint={2503.09601},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2503.09601},
}