
Co-creation builds better online research communities

Better Research Communities

Paul Kavanagh, Founder and Director of Beehive Research Ltd, noted that research has traditionally been a linear process: organizations hire an agency to query consumers, consumers provide answers to the agency, and the agency returns final results to the client organization. Co-creation, however, adds several layers of communication to that model, with organizations engaging in two-way dialogue with consumers, consumers asking questions of and getting answers from agencies, and consumers having dialogues with each other that are captured and included in the research process.


“Co-creation abounds in the online world,” he said, “and manufacturers are capitalizing on it. Procter & Gamble, for example, has been working with consumers through its Tremor Panel to generate word-of-mouth marketing. Several organizations and agencies are conducting research in online virtual worlds like Second Life. Organizations are also including consumers in product design and innovation,” he said, citing websites NIKEID.com and legofactory.com, which both allow consumers to design their own products. “Similarly, companies like McDonald’s and L’Oréal have partnered with consumers on advertising creation.”

Kavanagh suggested that online research communities are the forerunner of “Research 2.0.” “The key to successful ‘research communities’ is finding the middle ground between the kind of engagement found at social networking sites like Facebook and YouTube and virtual worlds like Second Life, and the more traditional online panels and research studies,” he said. The motivations for being a panelist on a commercial access panel are also very different from those seen on an organization’s customer panel. Typical motivations driving participation in the latter kind of research community, he said, include brand loyalty, the allure of community information sharing and varying degrees of engagement and communication.

While traditional research “has always been about representative samples, word-of-mouth (WOM) advertising often utilizes elite groups, like ‘Mavens,’ people who have a disproportionate influence on other members of their network,” Kavanagh explained. “In Research 2.0, can these influencers cause results to be altered and, as such, should we be concerned about biases in social networks? Will our community include all the people we should engage with, and how do we balance it to engage just with the people we want to speak to?”

Kavanagh presented a case study on Screwfix.com, the UK’s largest direct and online supplier of trade tools, accessories and hardware products. “They had an existing talk forum and panel, but faced challenges with managing diversity, engagement, segmentation and communication,” he described. Asked for suggestions, Screwfix.com panelists said they wanted to be able to leave feedback on purchased items, score other panel members’ suggestions, see feedback or a survey summary, and see the actions Screwfix.com took as a result of panel input.

To assist in driving strategy forward, Beehive polled Screwfix panelists about their interaction with and affinity for “Web 2.0” venues like YouTube and Facebook, and assessed what types of co-creative engagement would be attractive, with responses including quick polls, blogs, product reviews, forums, chat and others. “Ratings, for example, have become a major feature of communities like TripAdvisor and Amazon,” Kavanagh said. “They can provide users with feelings of engagement or empowerment, and an opportunity to influence, share, praise or even disrupt.”

Before implementing a rating system, Kavanagh advised researchers to consider the following:

• Will previous knowledge of others’ opinions cause a “sheep effect?”

• What are the motivations for people to leave rating feedback?

• What type of people leave ratings?

• What is the profile of these responders?

• Can we trust the results or are ratings just good for engagement?

To investigate this further, Beehive ran three test questions with 1,127 respondents split into three groups to see if displaying the average rating before a vote is cast, a common practice, significantly affected the outcome. “In each test, all three groups were shown the same suggestion for improving the Screwfix community and asked to rate it,” Kavanagh related. Group A and Group B each saw a different average rating on the screen when they read the question and cast their vote, while Group C saw no rating (the control). “Initial findings suggest showing average scores may have an influential effect on subsequent responders,” Kavanagh revealed. “In addition, younger responders gave higher scores in all three tests; they generally appear more in tune with social networks and interacting with other community members, and this has significant importance to the type of community an organization should build.”
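Kavanagh did not describe how the three groups were compared, but the design amounts to a simple three-arm split test. The sketch below is illustrative only, assuming a 1-10 rating scale, placeholder data and a one-way ANOVA; it shows one way such a test could be checked for an anchoring effect, not Beehive’s actual analysis.

```python
from scipy.stats import f_oneway

# Placeholder (hypothetical) ratings on an assumed 1-10 scale; the real
# Screwfix data and rating scale were not published in the article.
group_a = [7, 8, 6, 9, 7, 8]  # shown a high average rating before voting
group_b = [5, 4, 6, 5, 3, 5]  # shown a low average rating before voting
group_c = [6, 5, 7, 6, 6, 7]  # control: shown no rating

# Compare group means: a visible "anchor" score would show up as a
# systematic shift between groups A/B and the control group C.
for name, ratings in (("A", group_a), ("B", group_b), ("C", group_c)):
    print(f"Group {name}: mean rating = {sum(ratings) / len(ratings):.2f}")

# One-way ANOVA across the three groups; a small p-value would be
# consistent with the displayed average influencing later responders.
f_stat, p_value = f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```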

Future Beehive communities will be built through co-creation between Beehive, the client and community members, and based on the business requirements, audience type, and new opportunities and techniques. “We feel this will maximize research insight, engagement, diversity and thus representativeness.”

Kavanagh concluded by drawing parallels between co-created online research communities and those in nature. “Man-made communities are evolving as members work together to design, shape and improve the environment through communication and feedback,” he said. “Members exhibit altruistic behavior and also adopt distinct roles, whether Mavens, early adopters, advocates, shapers, sheep, etc.

“Bees, on the other hand, have been doing this in nature for years, working and communicating together to design and build their community, exhibiting altruistic behavior (collecting nectar for the common good), improving the environment through pollination and having distinct roles.”
