Human eye gaze has recently been used as an effective input modality for wearable displays. In this paper, we propose a gaze-based interaction framework for optical see-through displays. The proposed system can automatically judge whether a user is engaged with the virtual content in the display or focused on the real environment, and can determine his or her cognitive state. With these analytic capabilities, we implement several proactive system functions, including adaptive brightness, scrolling, messaging, notification, and highlighting, which would otherwise require manual interaction. The goal is to manage the relationship between the virtual and the real, creating a more cohesive and seamless experience for the user. We conduct user experiments on attention engagement and cognitive state analysis, such as reading detection and gaze position estimation in a wearable display, towards the design of augmented reality text display applications. The experimental results demonstrate the robustness of the attention engagement and cognitive state analysis methods. A majority of the participants (8 of 12) stated that the proactive system functions are beneficial. Copyright © 2015 ACM.