In writer-independent verification systems, a single model is trained for all users of the system using dissimilarity vectors obtained through a dichotomy transformation, which converts a multi-class problem into a 2-class problem with two types of vectors: (i) intra-class dissimilarity vectors, computed from samples of the same user, and (ii) inter-class dissimilarity vectors, computed from samples of different users. When mapping handwritten signature representations, it is desirable to obtain a dense, well-separated cluster of representations for each user, so that the resulting intra-class dissimilarity vectors are separated from the inter-class dissimilarity vectors. Moreover, since skilled forgeries resemble reference signatures, it is also desirable that skilled forgery dissimilarity vectors lie farther from the region occupied by the intra-class dissimilarity vectors. In this work, it is hypothesized that an improved dissimilarity space can be achieved through a multi-task framework for learning handwritten signature feature representations based on deep contrastive learning. The proposed framework comprises two objective-specific tasks and does not use skilled forgeries for training. The first task aims to map the signature examples of a given user into a dense cluster while linearly separating the signature representations of different users. The second task aims to adjust forgery representations by adopting a contrastive loss capable of hard negative mining. Hard negatives are similar examples from different classes and can be viewed as artificially generated skilled forgeries for training. In a writer-independent verification approach, the model obtained with the proposed framework is evaluated in terms of the equal error rate on the GPDS-300, CEDAR, and MCYT-75 datasets. Experiments demonstrate a statistically significant improvement in signature verification compared to the state-of-the-art SigNet feature extraction method.
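To make the dichotomy transformation concrete, the sketch below builds the intra-class and inter-class dissimilarity vectors from per-user feature vectors. It is a minimal illustration, assuming the element-wise absolute difference as the dissimilarity function and a generic, already-trained feature extractor; the function names and pairing strategy are illustrative and not necessarily the paper's exact implementation.

```python
import numpy as np
from itertools import combinations


def dichotomy_transform(u, v):
    """Dissimilarity vector between two feature vectors.

    Assumes the element-wise absolute difference, a common choice for the
    dichotomy transformation; the exact function used in the paper may differ.
    """
    return np.abs(u - v)


def build_dissimilarity_sets(features_by_user):
    """Build intra-class (same user) and inter-class (different users)
    dissimilarity vectors from per-user signature feature vectors.

    features_by_user: dict mapping user id -> array of shape (n_i, d),
    where each row is the feature representation of one genuine signature.
    """
    intra, inter = [], []

    # Intra-class: pairs of signatures from the same user (positive class).
    for feats in features_by_user.values():
        for i, j in combinations(range(len(feats)), 2):
            intra.append(dichotomy_transform(feats[i], feats[j]))

    # Inter-class: pairs of signatures from different users (negative class).
    users = list(features_by_user)
    for a, b in combinations(users, 2):
        for u in features_by_user[a]:
            for v in features_by_user[b]:
                inter.append(dichotomy_transform(u, v))

    return np.array(intra), np.array(inter)


# Illustrative usage with random features standing in for learned representations.
rng = np.random.default_rng(0)
features_by_user = {user: rng.normal(size=(5, 2048)) for user in range(3)}
intra, inter = build_dissimilarity_sets(features_by_user)
```

A single writer-independent binary classifier can then be trained on the intra-class versus inter-class dissimilarity vectors; the representation-learning framework described above operates on the feature vectors before this transformation is applied.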