Viewing and interacting with text-based content safely and easily while mobile has long been a challenge for see-through displays. For example, to use optical see-through Head-Mounted Displays (HMDs) effectively in constantly changing, dynamic environments, factors such as lighting conditions, human or vehicular obstructions in the user's path, and scene variation must be handled robustly. My PhD research focuses on answering the following questions: 1) What are appropriate methods for intelligently moving digital content, such as e-mail, SMS messages, and news articles, throughout the real world? 2) Once a user stops moving, how should the dynamics of the current workspace change when it is migrated to a new, static environment? 3) Lastly, how can users manipulate mobile content with the fewest possible interactions? My strategy for addressing these problems centers on automatic or semi-automatic movement of digital content through the real world using camera-based tracking. I have already developed an intelligent text management system that actively manages the movement of text in a user's field of view while mobile. I am now optimizing and extending this type of management system, developing appropriate interaction methodology, and conducting experiments to verify its effectiveness, usability, and safety when used with an HMD in various environments.