Semantic mapping is computationally expensive, requiring either large GPUs on the robot or uploading significant numbers of images to the cloud. Neither option suits home robots, where hardware must be inexpensive and privacy is a real concern. Rather than falling back entirely on hand-labeled maps to address privacy, where label noise can be severe depending on the quality of the input interface, we propose an interactive solution that integrates hand-drawn boxes with robot exploration data. Specifically, nonlinear optimization refines each user-submitted proposal using the bounding boxes and detection information collected by the robot, quickly generating higher-quality estimates for human review as part of an interaction. In this manner, images are processed once on the robot with cost-effective algorithms and then discarded, minimizing the risk of exposing sensitive information. This privacy-aware approach improves map and object quality over using hand-labeled maps directly, even with user proposals containing up to 50% label noise.
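To make the refinement step concrete, the following is a minimal sketch of the idea, not the paper's actual formulation: a user-submitted object position is nudged toward the robot's detections by minimizing a robust (Huber) nonlinear least-squares objective with plain gradient descent. The function names, the 2D point parameterization, and all data are illustrative assumptions; a real system would optimize full box parameters and weight detections by confidence.

```python
# Illustrative sketch only: refine a noisy user-drawn proposal (here a 2D
# point) against robot detection centers via robust nonlinear optimization.

def huber_grad(r, delta=1.0):
    """Gradient of the Huber loss for a scalar residual r."""
    return r if abs(r) <= delta else delta * (1 if r > 0 else -1)

def refine_proposal(proposal, detections, lr=0.1, iters=200, delta=1.0):
    """Pull a user-submitted (x, y) proposal toward robot detections.

    The Huber loss limits the influence of large residuals, so a badly
    mislabeled proposal is still corrected by consistent detections.
    """
    x, y = proposal
    n = len(detections)
    for _ in range(iters):
        gx = sum(huber_grad(x - dx, delta) for dx, _ in detections)
        gy = sum(huber_grad(y - dy, delta) for _, dy in detections)
        x -= lr * gx / n
        y -= lr * gy / n
    return x, y

# A noisy proposal is drawn toward three detections clustered near (2, 3).
refined = refine_proposal((4.0, 1.0), [(2.1, 3.0), (1.9, 2.9), (2.0, 3.1)])
```

Because the detections agree with each other, the optimized estimate lands near their consensus regardless of the initial label error, which is the behavior that lets the approach tolerate substantial label noise in the user input.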