The Sesame system-level simulation framework targets efficient design space exploration of embedded multimedia systems. Despite Sesame's high efficiency, it would still fail to explore large parts of the design space, simply because system-level simulation is too slow for exhaustive exploration. Therefore, Sesame uses analytical performance models to steer the system-level simulation, guiding it toward promising system architectures and thus pruning the design space. In this paper, we present a mechanism based on execution profiles, referred to as signatures, for calibrating these analytical models with the aim of delivering trustworthy estimates. Moreover, we present a number of experiments in which we evaluate the accuracy of our signature-based performance models, using a case study with a Motion-JPEG encoder and using the Mediabench benchmark suite to perform off-line calibration of the models.