Fuzzy recurrent stochastic configuration networks (F-RSCNs) have shown great potential for modeling nonlinear dynamic systems owing to their high learning efficiency, minimal human intervention, and universal approximation capability. However, this strong performance comes with little theoretical guidance on parameter selection in the fuzzy inference system, making it difficult to obtain optimal fuzzy rules. In this article, we propose an improved version of F-RSCNs, termed IF-RSCNs, to achieve better model performance. Unlike traditional neuro-fuzzy models, IF-RSCNs do not rely on a fixed number of fuzzy rules. Instead, each fuzzy rule is associated with a sub-reservoir that is constructed incrementally under a sub-reservoir mechanism, ensuring the adaptability and universal approximation property of the resulting model. In this hybrid framework, fuzzy reasoning enhances the interpretability of the network, and the parameters of both the fuzzy system and the neural network are determined by the recurrent stochastic configuration (RSC) algorithm, which inherits the fast learning speed and strong approximation ability of RSCNs. In addition, the readout weights are updated online by the projection algorithm to handle complex dynamics, and a convergence analysis of the learned parameters is provided. Comprehensive experiments demonstrate that the proposed IF-RSCNs outperform classical neuro-fuzzy and non-fuzzy models in both learning and generalization performance, highlighting their effectiveness in modeling nonlinear systems.
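The online readout-weight update mentioned above can be sketched with the classical projection (normalized-gradient) algorithm; this is a generic illustration, not the paper's exact formulation, and the function name `projection_update` and the constants `a` (step size) and `c` (regularizer) are assumptions for the sketch.

```python
import numpy as np

def projection_update(w, phi, y, a=1.0, c=1e-4):
    """One projection-algorithm step for a linear readout.

    w   : current readout weight vector (n,)
    phi : regressor at this step (n,), e.g. the reservoir state
    y   : scalar target output at this step
    In the classical analysis, 0 < a < 2 and c > 0 keep the
    parameter-error sequence non-increasing.
    """
    err = y - phi @ w                        # prediction error
    return w + a * phi * err / (c + phi @ phi)

# Toy check: recover fixed weights from noiseless streaming data.
rng = np.random.default_rng(0)
w_true = np.array([0.5, -1.2, 2.0])
w = np.zeros(3)
for _ in range(2000):
    phi = rng.standard_normal(3)
    w = projection_update(w, phi, phi @ w_true)

print(np.allclose(w, w_true, atol=1e-3))  # → True
```

Because each step projects the parameter error (approximately) onto the hyperplane orthogonal to the current regressor, the error norm never grows, which is the property the abstract's convergence analysis exploits.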