Conventional practice in modelling requires checking that a model is correct with respect to its conceptualisation (verification) and that it corresponds to the real-world phenomenon being modelled (validation). Verification and validation assure the external and operational validity of a model, that is, its quality. In settings where data for estimation are not readily available, the behaviour of the computational model and its results remain questionable. An alternative approach that has recently been gaining attention is docking, or replication: a process in which one model is tested against another to see whether they produce the same results. This paper reports on the docking experience and the validation stages performed when replicating the findings of a fuzzy logic (FL) model with an agent-based model (ABM) in the context of innovation in business networks. Using two modelling paradigms and software programs, we modelled, over an 18-month interval, a network of three agent categories that collaborate on adopting and advancing new ideas and technologies. The network links describe the relations between agents that drive innovation processes. The autonomous agents are organisations of different sizes, characteristics, and roles; they interact, share resources, and collaborate to adopt and diffuse innovations that fit their organisational goals. Depending on their resources, agents may or may not have scope for innovation. In addition, the environment can foster or hinder the innovation processes.
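The network dynamics described above can be illustrated with a minimal sketch. This is not the authors' implementation: the agent category labels, the resource threshold, the adoption rule, and the single `environment` parameter are all illustrative assumptions.

```python
import random

# Hypothetical agent categories; the paper names three but does not list them here.
CATEGORIES = ["small_firm", "large_firm", "support_org"]

class Agent:
    def __init__(self, category, resources, rng):
        self.category = category
        self.resources = resources  # scope for innovation depends on this
        self.adopted = False
        self.rng = rng

    def interact(self, partner, environment):
        # A network link can drive innovation: an agent adopts if a linked
        # partner has adopted and the agent has sufficient resources,
        # moderated by how strongly the environment fosters innovation.
        if partner.adopted and self.resources > 0.5:
            if self.rng.random() < environment:
                self.adopted = True

def run(steps=18, n_agents=30, environment=0.6, seed=42):
    """Simulate `steps` monthly rounds (the paper covers an 18-month interval)."""
    rng = random.Random(seed)
    agents = [Agent(rng.choice(CATEGORIES), rng.random(), rng)
              for _ in range(n_agents)]
    agents[0].adopted = True  # seed the network with one early adopter
    for _ in range(steps):
        for agent in agents:
            partner = rng.choice(agents)  # random encounter among agents
            if partner is not agent:
                agent.interact(partner, environment)
    return sum(a.adopted for a in agents)

if __name__ == "__main__":
    print(run())
```

With a seeded random generator the run is reproducible, which matters for the verification stages discussed below: the same parameters should yield the same outcome distribution across runs.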
The verification and validation of these two models involved several stages: 1) expert judgement - the structure of the conceptual model is supported by the literature and by discussions with colleagues in various forums; 2) checking the correspondence between what emerges from the model and what is expected in the real world (passing the believability test) - it is desirable for the model components to adequately represent an equivalent real behavioural effect, but as real data were not available when the models were designed, the alignment of model results with expectations acts as an external validation of the model; 3) internal validity - assessing consistency by changing input data distributions and analysing extreme conditions; 4) docking (also known as alignment, or replication with contrasting alternative theories) - comparing the results of the two modelling approaches. The models achieved distributional equivalence, but they were not identical. As both models used the same parameters, we believe that the differences in results arose only from relaxing the restrictive assumptions in the FL or ABM models. The ABM results matched the FL conditions tested. The stochastic ABM generated a distribution of outcomes caused by random encounters among agents, while the FL model generated an ensemble of crisp values as a result of multiple interaction rules applying simultaneously. The replication experience has been a positive one. Although this alone does not justify accepting the models, the docking results encourage us to pursue data collection to validate both models empirically in the near future. We conclude with some thoughts from Kleindorfer et al.
(1998) in relation to various positions in the philosophy of science with respect to validation: in the simulation literature there is a continuum of opinions ranging from the extreme objectivist (model validation can be separated from the model builder and context) to the relativist ("model and model builder are inseparable" and "validity is a matter of opinion" - Kleindorfer et al., 1998: 1097). Their debate leads to the perspective that simulation modelling should not follow a prescriptive set of approaches to validation; rather, modellers should "responsibly and professionally argue for the warrant of the model".
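The docking stage compares the output distributions of the two models. The abstract does not specify the comparison method, so the sketch below is one plausible check under that assumption: a two-sample Kolmogorov-Smirnov statistic (the maximum gap between empirical CDFs), applied here to synthetic stand-in samples rather than the paper's actual ABM and FL outputs.

```python
import bisect
import random

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between the
    empirical CDFs. Small values are consistent with distributional equivalence."""
    a, b = sorted(sample_a), sorted(sample_b)

    def ecdf(sorted_sample, x):
        # fraction of observations <= x
        return bisect.bisect_right(sorted_sample, x) / len(sorted_sample)

    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in set(a) | set(b))

# Stand-in ensembles: neither sample comes from the paper's models.
rng = random.Random(0)
abm_outcomes = [rng.gauss(10, 2) for _ in range(500)]  # stochastic ABM outcomes
fl_outcomes = [rng.gauss(10, 2) for _ in range(500)]   # FL crisp-value ensemble

print(round(ks_statistic(abm_outcomes, fl_outcomes), 3))
```

A formal test would compare the statistic against a critical value for the sample sizes; the point here is only that docking reduces to comparing two outcome distributions rather than two point estimates.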