Dynamic Networks Improve Remote Decision-Making
Crisis events demand decision-making networks that are dynamic, not static, and respond to frequent feedback.
The idea of collective intelligence is not new. Research has long shown that in a wide range of settings, groups of people working together outperform individuals toiling alone. But how do drastic shifts in circumstances, such as people working mostly at a distance during the COVID-19 pandemic, affect the quality of collective decision-making? After all, public health decisions can be a matter of life and death, and business decisions in crisis periods can have lasting effects on the economy.
During a crisis, it’s crucial to manage the flow of ideas deliberately and strategically so that communication pathways and decision-making are optimized. Our recently published research shows that optimal communication networks can emerge from within an organization when decision makers interact dynamically and receive frequent performance feedback. The results have practical implications for effective decision-making in times of dramatic change.
How Feedback and Network Flexibility Affect Collective Intelligence
In two web-based experiments, each involving more than 700 people recruited online, we examined how organizational structures influence collective decision systems.
In both experiments, participants were asked to estimate, in a series of 20 rounds, the strength of statistical correlations between two variables (such as height and weight) that were graphed on a scatterplot. Without the participants’ knowledge, we introduced distracting statistical noise into the graphed data and systematically varied the degree of distraction across individuals. Then, in the middle of the 20-round series, we abruptly shuffled the noise levels (thereby inducing a drastic shift). Monetary prizes were awarded for accurate performance.
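To make the task concrete, here is a toy sketch of how such stimuli could be generated. This is illustrative only, not the study's actual materials: the function name, sample size, and noise model are assumptions. Data are drawn with a target correlation, then "distracting statistical noise" is added to one variable at a per-participant level.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_scatter(true_r, noise_sd, n=50, rng=rng):
    """Generate (x, y) pairs with target correlation true_r,
    then blur y with extra noise to vary task difficulty."""
    x = rng.normal(size=n)
    # Construct y so that corr(x, y) = true_r before noise is added.
    y = true_r * x + np.sqrt(1 - true_r**2) * rng.normal(size=n)
    y = y + noise_sd * rng.normal(size=n)  # distracting statistical noise
    return x, y

# A participant sees the scatterplot of (x, y) and estimates the correlation;
# higher noise_sd makes the apparent correlation weaker and harder to judge.
x, y = make_scatter(true_r=0.7, noise_sd=0.5)
observed_r = np.corrcoef(x, y)[0, 1]
```

Shuffling each participant's `noise_sd` midway through the 20 rounds corresponds to the drastic shift the experiment induces.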
Our aims were to test whether dynamically structured networks of participants would make more accurate estimates than either static networks or individual participants, and whether frequent, high-quality performance feedback would improve participants' accuracy.
In the first experiment, groups of 12 participants were randomly assigned to one of three conditions: (1) solo, whereby each group member did the 20 estimation rounds in isolation; (2) static network, in which each member submitted his or her estimates after collaborating with three other preassigned participants; or (3) dynamic network, in which members chose three collaborators to interact with before submitting their individual answers. In all conditions, participants received performance feedback after each round, but those assigned to the dynamic condition were also able to choose three new collaborators after getting the feedback.
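The three conditions above can be sketched as a minimal simulation. This is a hypothetical model, not the authors' analysis code: the averaging rule for collaboration and the rewire-toward-accurate-peers rule for the dynamic condition are illustrative assumptions, as are all names and parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
N, ROUNDS, K = 12, 20, 3            # group size, rounds, collaborators each
TRUE_R = 0.6                        # true correlation to be estimated
noise = rng.uniform(0.05, 0.5, N)   # per-person distraction level
shuffled = rng.permutation(noise)   # noise levels reshuffled at the midpoint

def private_guess(sd):
    """One participant's noisy estimate of the true correlation."""
    return float(np.clip(TRUE_R + sd * rng.normal(), -1, 1))

def run(condition):
    # Each participant starts with K preassigned collaborators.
    links = {i: list(rng.choice([j for j in range(N) if j != i], K,
                                replace=False)) for i in range(N)}
    round_errors = []
    for t in range(ROUNDS):
        sd = noise if t < ROUNDS // 2 else shuffled  # drastic mid-series shift
        guesses = np.array([private_guess(sd[i]) for i in range(N)])
        if condition == "solo":
            final = guesses
        else:  # collaborate: average own guess with collaborators' guesses
            final = np.array([np.mean(np.r_[guesses[i], guesses[links[i]]])
                              for i in range(N)])
        err = np.abs(final - TRUE_R)
        round_errors.append(err.mean())
        if condition == "dynamic":
            # After feedback, rewire toward this round's most accurate peers.
            ranked = np.argsort(err)
            for i in range(N):
                links[i] = [j for j in ranked if j != i][:K]
    return float(np.mean(round_errors))

results = {c: run(c) for c in ("solo", "static", "dynamic")}
```

Even in this crude model, averaging over collaborators tends to cut individual error, and the dynamic condition's feedback-driven rewiring lets the network track the midpoint shift in who is best informed.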