In recent years, see-through display technology has matured to the point where virtual information can be continuously displayed in a variety of real-world situations. However, augmented reality (AR) interfaces are currently limited in their ability to interact with the wearer and the environment to provide specific, safe, and useful information when needed. Moreover, many questions remain about how to make content more relevant, especially in dynamic applications such as rescue or manufacturing. By overcoming these issues, visual perception and cognition could potentially be enhanced beyond innate human ability. This paper describes the notion of Parallel Consciousness, the idea that technology can function as an extension of human memory and cognition, and outlines a framework for implementing such an interface using AR. This involves understanding both the environment and the user's mental and visual states to augment vision more effectively, and managing the retrieval of content to improve these enhancements and assist both cognitive function and memory. To achieve these goals, we are exploring unique combinations of eye tracking and artificial intelligence (AI) to monitor user attention and cognitive state. We hypothesize that by using the resulting states in conjunction with environmental analysis, we can better automate the retrieval and merging of virtual content into the user's view.
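To make the attention-monitoring idea concrete, the following is a minimal sketch, not the paper's actual system, of one common gaze-analysis heuristic: using dwell time on regions of the environment as a proxy for user attention, and triggering content retrieval for regions that hold attention long enough. All names (`GazeSample`, `dwell_time`, `select_content`), the 5% gaze radius, and the 0.3 s threshold are illustrative assumptions, not values from this work.

```python
import math
from dataclasses import dataclass

@dataclass
class GazeSample:
    x: float  # normalized gaze position in [0, 1]
    y: float
    t: float  # timestamp in seconds

def dwell_time(samples, center, radius=0.05):
    """Total time the gaze stays within `radius` of a region center.

    Sustained dwell on a region is a simple (assumed) proxy for
    visual attention; real systems typically use richer fixation
    and cognitive-state models.
    """
    total = 0.0
    for prev, cur in zip(samples, samples[1:]):
        if math.dist((cur.x, cur.y), center) <= radius:
            total += cur.t - prev.t
    return total

def select_content(samples, regions, threshold=0.3):
    """Return labels of attended regions, i.e. candidates for which
    virtual content should be retrieved and merged into the view."""
    return [label for label, center in regions.items()
            if dwell_time(samples, center) >= threshold]

# Simulated gaze stream fixating on a hypothetical "valve" region:
samples = [GazeSample(0.5, 0.5, 0.1 * i) for i in range(11)]
regions = {"valve": (0.5, 0.5), "door": (0.9, 0.1)}
print(select_content(samples, regions))  # only "valve" exceeds the threshold
```

In a full implementation, the region labels would come from environmental analysis (e.g. object recognition) rather than a fixed dictionary, and the dwell threshold would be modulated by the inferred cognitive state.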