The Excel-based approach to sharing participation data may be quick, easy and effective, but it has its limitations. It has to be compiled by a person, and by its nature is susceptible to human error. It takes around 20 minutes to produce and share, and needs some first-time explanation for recipients.
Put simply, it relies on people power, lacks visual impact and struggles to tell a convincing story ‘at a glance’.
Increasingly, project teams want to engage in post-event communications activities with their stakeholders, and to use data visualisation tools for that purpose.
We’ve therefore experimented with open tools and techniques to help us, and the project teams we work with, better convey aspects of online discussions. In this blog we share some examples, discuss issues and challenges we’ve discovered along the way, and offer reflections on this area of work.
We’ve experimented with a number of approaches, including using third party open data tools. Here are some examples:
Discussion group dashboard: in this example we used a combination of open data tools to explore how we could provide data services for Eldis Communities group administrators. The aim was to bring together a variety of data metrics that otherwise were unavailable, difficult to obtain or poorly presented, so that administrators could ‘tell the story’ about their group(s).
Data for the dashboard is exposed from the Webcrossing Neighbours platform to a Solr index using three functions (for group, member and post data). The data is then presented via web pages in a variety of textual and graphical forms (using Google Charts). It can be filtered to specific date periods and to include or exclude group administrators. The screenshots below give examples of the kinds of data presented for a particular group.
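As an illustrative sketch of this data path, the dashboard’s back end might build a date-filtered Solr query and reshape the returned facet counts into the row format Google Charts consumes. The field names here (`group_id`, `author`, `post_date`, `role`) are our assumptions for the example, not the platform’s real schema:

```python
# Hypothetical sketch of the dashboard data path; field names are invented.
from urllib.parse import urlencode

def build_solr_query(group_id, date_from, date_to, include_admins=True):
    """Build a Solr select URL for one group's posts, faceted by author."""
    fq = [f"group_id:{group_id}",
          f"post_date:[{date_from}T00:00:00Z TO {date_to}T23:59:59Z]"]
    if not include_admins:
        fq.append("-role:admin")  # exclude group administrators
    params = [("q", "*:*"), ("rows", "0"),
              ("facet", "true"), ("facet.field", "author"),
              ("wt", "json")] + [("fq", f) for f in fq]
    return "/solr/posts/select?" + urlencode(params)

def facet_to_chart_rows(facet_counts):
    """Turn Solr's flat [name, count, name, count, ...] facet list into
    the header-plus-rows table that Google Charts expects."""
    pairs = zip(facet_counts[::2], facet_counts[1::2])
    return [["Member", "Posts"]] + [[name, count] for name, count in pairs]
```

The date-range filter and the admin include/exclude toggle correspond to the filtering options described above.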
Word clouds: Though by no means novel, word clouds remain useful and evocative ways to present a body of text, and are particularly apt for reflecting the multitude of keywords that get used by groups of discussion participants. We use Wordle because of the flexibility of its in-built tools to define what terms get included / excluded, to choose how the cloud is presented and because it enables the user to re-generate the output until satisfied. The example below was generated for a report the project team prepared and shared with their stakeholders.
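Behind any word-cloud tool sits the same preparatory step: tokenise the text, drop excluded terms, and rank what remains by frequency. A minimal sketch of that step, with an illustrative stop-word list (not Wordle’s actual list):

```python
# Core of what a word-cloud tool does before any drawing happens:
# count terms, drop excluded words, keep the top N by frequency.
import re
from collections import Counter

STOPWORDS = {"the", "and", "of", "to", "a", "in", "is", "that", "for"}

def top_terms(text, n=50, exclude=STOPWORDS):
    """Return the n most frequent terms in text, minus excluded words."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in exclude and len(w) > 2)
    return counts.most_common(n)
```

Tools like Wordle then map each term’s frequency onto font size and layout; the include/exclude flexibility mentioned above amounts to editing the `exclude` set and re-generating.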
Infographics: we have created just a single e-discussions infographic and, although it has been usefully referenced many times, the demand to produce more has not materialised. The key value of this particular output has been that it enabled us to present an aggregate view across a number of discussions.
Though normally we are tentative about comparing events, our strong portfolio experience in gender-focused discussions prompted the decision to invest the time and effort required to produce this. It was completed using Piktochart – a web-based authoring tool providing ready-to-use templates and design assets for bespoke application within predefined limits.
Contributions coding: this approach aims to visualise trends and patterns in engagement, and can be used to differentiate how different groups of people participate and how ideas are discussed and developed over time.
We applied this approach to describe data from a specific discussion that fed into the Lancet 2013 Nutrition Series. We extracted data from the discussion platform via a Solr interface, and then transferred it into Tableau. The screenshots below show two examples:
The first, a ‘theme 2 constraints table’, presents the quantity of issues cited by participants (disaggregated by country), coded into 5 key groups of constraints / challenges. The data clearly shows that the Kenyan and Nigerian participants were the most vocal, raising issues across the spectrum in quantity. In contrast, the Indonesian and Bangladeshi participants raised constraining issues much less frequently.
The second presentation shows the pattern of contributions raising a specific issue over the course of a single day. It highlights how, after being raised by a Canadian and cited by a few other contributors, eventually the Ethiopian and Nepalese contributors engage back and forth on this specific question.
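The extract-and-code step behind these examples can be sketched as follows. Only the outer Solr JSON response shape is standard; the document fields (`author`, `country`, `posted`, `constraint_group`) and the coding scheme itself are invented for illustration:

```python
# Hedged sketch: flatten Solr results into CSV for Tableau, then tally
# coded contributions into a country-by-constraint cross-tab.
import csv
import io
from collections import Counter

FIELDS = ["author", "country", "posted", "constraint_group"]

def solr_docs_to_csv(response):
    """Flatten a parsed Solr JSON response into CSV text Tableau can open."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS, extrasaction="ignore")
    writer.writeheader()
    for doc in response["response"]["docs"]:
        writer.writerow(doc)
    return buf.getvalue()

def constraints_crosstab(docs):
    """Count contributions per (country, constraint_group) pair."""
    return Counter((d["country"], d["constraint_group"]) for d in docs)
```

In practice the cross-tab itself was built inside Tableau; the function above just shows the shape of the aggregation that produces a table like the ‘theme 2 constraints’ one.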
The analysis was presented at the 2015 ResUp Meetup, held in Nairobi. It stimulated a vibrant discussion on approaches for stimulus-response tracking, i.e. following and engaging in virtual debates.
Challenges of data visualisation activities
Through our limited forays in this area, we have learned a number of lessons to share.
- Good data visualisation work requires effective collaboration between experts in three fields: data management, graphic design, and development communications.
- Interactive data visualisation is more complex still, as it also requires expertise in user interface design.
- A visual representation of a data story is often single-purpose: when the specification changes (as is common from one project team to another), existing assets don’t necessarily transfer easily. This is in stark contrast to simpler, less visually appealing approaches to presenting data.
- All of the approaches we have tried involve non-trivial levels of time and resources. This is a major inhibiting factor in the spread of these techniques in the global South, as few projects can afford the time required or have the resources to pay for the hardware / software needed.
- Expensive data visualisations are not necessarily better at ‘telling the story’ than cheaper ones. The complexity of the process increases the risk of sending a very glossy but ultimately misleading or confusing message. Sometimes a simple pie chart is the best way to share a story about a pie!
- The availability of data, of time and of tools increases the temptation to explore and visualise ‘in order to find an interesting story to tell’. Obviously, we should avoid solutions looking for problems.
- Finally, it is also true that exploring data does not necessarily lead to greater clarity. Often we are left frustrated trying to answer unanswerable questions – the data may be absent, unsuitable or inconclusive.