Automated generation of commercial tweets has become an important tool in social media marketing and advertising, and paraphrase generation has emerged as a key problem in this setting. Unlike general paraphrasing, it requires that certain elements, such as the product name or promotion details, be preserved in the output. To address this need, we propose a Constraint-Embedded Language Modeling (CELM) framework, in which hard constraints are embedded in the text content and learned through a language model. This embedding allows the model to learn not only paraphrase generation but also the content constraints specific to commercial tweets. In addition, we transfer knowledge learned from a general domain to the commercial tweet generation task. Our model outperforms general paraphrase generation models as well as the state-of-the-art CopyNet model in terms of paraphrase similarity, diversity, and adherence to hard constraints.