Large Language Models (LLMs) have garnered significant attention within the academic community due to their advanced capabilities in natural language understanding and generation. While empirical studies have shed light on LLMs' proficiency in complex reasoning tasks, an open question remains in Financial Sentiment Analysis (FSA): to what extent can LLMs effectively reason about the financial attributes relevant to this task? This study employs a prompting framework to investigate this question, assessing multiple financial attribute reasoning capabilities of LLMs in the context of FSA. From a review of the relevant literature, we first identify six key financial attributes: semantic, numerical, temporal, comparative, causal, and risk factors. Our experimental results uncover deficiencies in the financial attribute reasoning capabilities of LLMs for FSA. For example, the examined LLMs, such as PaLM-2 and GPT-3.5, display weaknesses in reasoning about numerical and comparative attributes within financial texts. In contrast, explicit prompts targeting the other financial attributes show varied utility, improving LLMs' ability to discern financial sentiment.
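To make the prompting framework concrete, the sketch below shows one plausible way to attach an explicit attribute hint to a sentiment-classification prompt. The six attribute names come from the abstract; the hint wording, function name, and prompt template are illustrative assumptions, not the paper's actual prompts.

```python
# Hypothetical sketch of attribute-guided prompting for FSA.
# The attribute list follows the abstract; all prompt text is illustrative.

ATTRIBUTE_HINTS = {
    "semantic": "Focus on sentiment-bearing words and domain terminology.",
    "numerical": "Attend to figures, percentages, and their direction of change.",
    "temporal": "Note whether statements describe past results or future guidance.",
    "comparative": "Weigh comparisons against prior periods, peers, or expectations.",
    "causal": "Identify cause-effect links between events and outcomes.",
    "risk": "Account for stated risks, uncertainty, and hedging language.",
}

def build_fsa_prompt(text: str, attribute: str) -> str:
    """Compose a sentiment-classification prompt with an explicit attribute hint."""
    hint = ATTRIBUTE_HINTS[attribute]
    return (
        "Classify the sentiment of the following financial text as "
        "positive, negative, or neutral.\n"
        f"Reasoning hint: {hint}\n"
        f"Text: {text}\n"
        "Sentiment:"
    )

prompt = build_fsa_prompt("Revenue rose 12% year over year.", "numerical")
```

In a study like this, the resulting prompt string would be sent to each model under evaluation (e.g., PaLM-2 or GPT-3.5), and accuracy with and without the attribute hint would be compared per attribute.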