RESEARCH

Publications

[1] “The Crowd Beyond Funders – An Integrative Review of and Research Agenda for Crowdfunding”

with Vivianna He (St. Gallen) and Alex Murray (U of Oregon) – Academy of Management Annals, 2024, Vol. 18, No. 1, 348–394. https://doi.org/10.5465/annals.2022.0064.

Abstract: Crowdfunding, or soliciting small contributions from large and dispersed crowds through online platforms, is an increasingly indispensable strategy for established firms, young ventures, and aspiring entrepreneurs alike. Synthesizing research in the fields of management, entrepreneurship, innovation, operations, information systems, and marketing, we conduct an integrative review of the crowdfunding research accumulated over the past decade. We aim to break down disciplinary silos to develop a framework that integrates insights across research communities. We identify three underlying dimensions that differentiate extant research: the goal of the campaigner, the role of the crowd, and the boundary of the crowdfunding event. Scholars have brought two perspectives to bear on these questions: an elemental perspective and a processual perspective. We outline an integrative model that takes account of crowdfunding as a process involving heterogeneous participants with idiosyncratic monetary and non-monetary goals at different stages. Our multidisciplinary review of this expanding body of literature not only integrates dispersed insights but also, more importantly, stimulates a future research agenda that goes beyond the traditional boundaries of crowdfunding research.

Working Papers

[2] [Title Withheld] Crowd Feedback

with Sen Chai (McGill) – Under review

[Abstract Withheld]

[3] “Collective Problem-Solving Without Authority: Enhancing Deliberation with Shared Attribute Spaces”

with Vivianna He (St. Gallen) and Phanish Puranam (INSEAD) – Full manuscript

Nominated: SMS 2024 Best Conference Paper Competition

Nominated: SMS 2024 Best PhD Paper Competition

Abstract: Technology platforms have made online deliberation and collective problem-solving possible among diverse individuals distributed across locations. To avoid deadlock and help diverse perspectives converge into a useful solution, it is common to use some form of centralization—such as mediators, community managers, or a panel that selects among proposed solutions or screens participants. However, centralization can cause premature convergence to suboptimal solutions and suppress the motivation to participate. We develop a minimally invasive intervention that allows participants in online deliberation to converge on solutions in a decentralized manner while improving their impact. Specifically, we provide participants with a Shared Attribute Space (SAS)—a set of attribute dimensions, extracted from prior deliberations, along which they can compare solution proposals. We do not offer any weights on these attributes. Nonetheless, in two online experiments using a custom-built decentralized deliberation platform, we find that providing an SAS enhances the quality of both deliberation and solutions without materially impeding solution convergence. Our study thus provides initial empirical evidence that quality can be improved without losing convergence in collective problem-solving through decentralized online deliberation.

[4] “How Experience Moderates the Impact of GenAI Ideas on the Research Process”

with Sen Chai (McGill) and Anil Doshi (UCL) – First manuscript draft

Abstract: At the heart of scientific discovery are expert scientists who identify research ideas worthy of inquiry. While generative artificial intelligence (AI) tools—large language models, in particular—have been found to outperform humans in tasks involving pattern recognition and reasoning, their impact on one of the most fundamental tasks in science, that of generating research ideas, remains underexplored. We investigate how the use of generative AI affects the generation of scientific ideas in the creation of research proposals, and scientists’ attitudes toward integrating AI-generated ideas into their research process. Through a randomized online experiment with 310 scientists on a custom-built research proposal generation platform, we study how generative AI ideas affect scientists’ self-evaluation of their proposals, their research agendas, and other attitudes. We do not find any average effect on their assessment of the proposals’ novelty or usefulness. However, research experience is an important moderator: experience negatively moderates the effect of generative AI on novelty and other views of the proposal, as well as views of their research agendas more broadly. Further analyses suggest that experience and expertise are associated with increased aversion to generative AI, driven mainly by mistrust of the technology but also by professional identity threat and simple resistance to new technology. Our findings contribute to the innovation literature by offering initial insights into generative AI’s role in research idea generation, and to the growing literature on generative AI’s role in complementing human tasks. We conclude by discussing implications for researchers, policymakers, and administrators.

Research In Progress

[5] “Balancing Participation and Consensus in Deliberation”

with Vivianna He (St. Gallen) and Phanish Puranam (INSEAD) – Data collection

[6] “Crowd Feedback and Aspiration Levels”

with Jiongni Mao (Bocconi) – Initial stage