Ascertaining a learner's level of reading comprehension is often challenging. Although written tests and self-evaluations can provide feedback on whether an individual understands a particular topic, they are not real-time, do not necessarily provide a full picture of the reader's comprehension, and can be subjective. In this paper, we present initial results of a study to determine better ways to evaluate a user's comprehension and understanding of educational comic books using pupillometry. Our system recreates the reading experience of an immunology comic book in virtual reality (VR), allows users to rate their comprehension of a particular section, and records eye data during the learning task. Through experiments, we explore the potential of this interface to facilitate learning and examine pupil metrics that might be used to automatically classify comprehension and understanding at the category (topic) level. We also discuss numerous design considerations that should be taken into account when designing future interfaces for evaluating learning or comprehension.