A significant issue with head-mounted displays for augmented reality is the latency between the real world and the augmented images shown in the headset, which, if left uncompensated, produces registration error that can limit the effectiveness of the augmented information and cause user discomfort. Further temporal discrepancies arise when information from multiple cameras with different capture frequencies is fused to construct the augmented image: beyond the temporal misalignment between the real world and the augmented image, desynchronization among the sensors can also lead to registration mismatch. To address these temporal inconsistencies, we present AR Timewarping, a novel temporal synchronization framework tailored to video see-through (VST) head-mounted displays. It consists of two main algorithms, one for head motion and one for scene motion, that together temporally warp and merge information from multiple sensors to significantly improve registration in the augmented image. System tests of our algorithms show that we can reduce registration error between two unsynchronized video streams by 87.04% for error arising from head motion and by 81.64% for error arising from scene motion. In a user experiment, subjects' ability to track a moving object improved significantly with our algorithms, with a 32.14% average reduction in angular tracking error, and subjects rated the combined algorithms higher overall than the base case in terms of both image clarity and user comfort.
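To make the head-motion case concrete, a common rotation-only form of timewarping re-projects a frame captured under one head orientation so it registers under the orientation predicted for display time, expressed as a pixel-space homography. The sketch below is a minimal, hypothetical illustration of that idea under a rotation-only, planar-warp assumption; it is not the paper's actual algorithm, and the function and tuple layout are invented for this example.

```python
import numpy as np

def timewarp_homography(K, R_capture, R_display):
    """Pixel-space homography that re-projects a frame captured under head
    orientation R_capture so it registers under the newer orientation
    R_display (rotation-only planar timewarp approximation; assumed model,
    not the paper's algorithm).

    K: 3x3 camera intrinsics; R_*: 3x3 camera-to-world rotation matrices.
    """
    # A distant world direction d projects as p ~ K @ R.T @ d, so
    # p_display ~ (K @ R_display.T @ R_capture @ K^-1) @ p_capture.
    return K @ R_display.T @ R_capture @ np.linalg.inv(K)

def latest_frame_before(frames, t_display):
    """From an unsynchronized stream of (timestamp, image, R) tuples, pick
    the most recent frame captured at or before the common display time."""
    candidates = [f for f in frames if f[0] <= t_display]
    return max(candidates, key=lambda f: f[0]) if candidates else None
```

In this sketch, each unsynchronized stream contributes its most recent frame before the shared display timestamp, and each frame is then warped by the homography built from its own capture-time pose and the display-time pose before the streams are merged.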