Despite the dominance of in-house development, contracting, and outsourcing, crowdsourcing has gained acceptance as an alternative software development approach in recent years. It is a form of collective intelligence applied to software engineering tasks, from requirements elicitation to testing. In the most common crowdsourcing model, a client (a firm or another type of institution) first broadcasts the task/project to be developed via a crowdsourcing platform. Each task/project has a prize and a deadline. Members of the platform community (the crowd) then select the tasks for which they want to submit solutions, either individually or collaboratively. The submitted solutions are evaluated by the crowdsourcing platform, and the most successful one is remunerated by the initiating organization. The prize of each task may be determined by the expected duration of the task, its complexity, its quality, the effort required to complete it, and the online reputation of the client. The TopCoder awarding mechanism and performance-based, game-theory-based, deadline-driven, and revenue-sharing models are among the approaches for determining task prizes in crowdsourced software development (CSD). In this paper, the Putnam model, a well-known software effort estimation model, is proposed as a prize determination approach for crowdsourcing. Demonstrative examples are drawn from TopCoder projects, and Function Point Analysis is used to calculate the expected effort of the projects.
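For reference, the sketch below illustrates the standard Putnam (SLIM) effort equation that underlies this kind of estimation, assuming the usual formulation Size = C * K^(1/3) * t_d^(4/3), where K is total effort in person-years, t_d is the development time in years, and C is a productivity (technology) parameter. The parameter values and the size figure are illustrative placeholders, not values taken from the paper or from TopCoder data.

```python
def putnam_effort(size_sloc: float, productivity_c: float, t_d_years: float) -> float:
    """Estimate total effort K (person-years) from the standard Putnam equation.

    Rearranged from Size = C * K^(1/3) * t_d^(4/3):
        K = (Size / (C * t_d^(4/3)))^3
    """
    return (size_sloc / (productivity_c * t_d_years ** (4 / 3))) ** 3


if __name__ == "__main__":
    # Hypothetical example: 30,000 SLOC (e.g., backfired from a Function Point
    # count), productivity parameter C = 10,000, and a one-year schedule.
    effort = putnam_effort(30_000, 10_000, 1.0)
    print(f"Estimated effort: {effort:.1f} person-years")  # ~27 person-years
```

The estimated effort from such a calculation could then be combined with a labor rate to derive a candidate task prize, which is the general direction the paper pursues with Function Point Analysis and TopCoder project data.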