When viewing content in a see-through head-mounted display (HMD), displaying readable information remains difficult when text is overlaid onto a changing background or a brightly lit surface. Moving text or content to a more appropriate place on the screen through automated or intelligent view management is one viable solution to this problem. However, many existing algorithms fail to place text in real time the way a human would. To improve such text and view management algorithms, we report the results and analysis of an experiment designed to evaluate user tendencies when placing virtual text over the real world through an HMD. In the experiment, 20 users manually overlaid text in real time onto 4 different videos recorded from the first-person perspective of a pedestrian. We find that users tend to place overlaid text near the center of the viewing field, gravitating toward a point just below the horizon. Common locations for text overlay, such as walls, shaded areas, and pavement, are classified and discussed.