Video situation monitoring is essential for applications ranging from surveillance to societal services such as senior care and Assisted Living (AL) monitoring. Quality of life in AL environments can be significantly improved if various situations (e.g., community engagement, prolonged inactivity) can be analyzed automatically. Currently, these situations are monitored either through manual analysis or through custom solutions. Manual analysis keeps a human in the loop to watch videos exhaustively, which is neither practical nor scalable for long videos. Custom solutions, on the other hand, require each situation to be predefined and typically rely on sensors, machine learning algorithms, Internet of Things devices, and other components; a new algorithm or device must be developed for each situation type. This paper proposes an alternative approach to automated situation monitoring that poses situations as queries. The proposed Querying Video Contents (QVC) framework avoids or minimizes the human in the loop and can eventually support real-time analysis. The QVC framework extracts video contents once and allows ad-hoc queries to be specified and processed as needed. Two alternative representation models (extended relational and graph) are supported, and primitive operators and algorithms are developed for both models to analyze various situations. This paper discusses only the graph model of the QVC framework. We propose alternative graph models for representing extracted video contents, their properties for video situation analysis, and primitive algorithms for analyzing the extracted contents. Additionally, we identify several situations in the AL domain and show how they can be analyzed using the proposed algorithms. Experimental results demonstrate the accuracy, scalability, and efficiency of the proposed approach for analyzing these situations.