Thinking about RSS, Aggregation and Credibility

I think it is fair, and perhaps rather obvious, to say that RSS fundamentally changed our experience of the web. Lately, I’ve been thinking quite a bit about the idea of aggregation, both in terms of the personal aggregation I can achieve by subscribing to feeds in my RSS reader, and aggregation as a service provided by an ever-increasing number of news and information portals. As aggregation of media sources becomes an increasingly common and accepted practice, I’ve become more interested in how sources for feeds get vetted and served up at some of the more popular information portals. I know this is not a new idea and that others have certainly entertained the question before, but I’m not sure we ask the question enough of ourselves, or in the work we do with students.

A couple of class sessions ago I explored the idea of RSS and aggregation with the students in my Learning with Digital Media class. I wanted my students to do three things:

  • Set up and use web-based RSS readers to customize information and resources they wanted to have regular access to; essentially setting up a personal newspaper.
  • Explore a process for determining the credibility of content that appears in sites that aggregate news and information.
  • Critique the process of aggregation and raise questions about its impact on how we access news and information.
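For readers curious about what the first goal looks like mechanically, here is a minimal sketch of feed aggregation: merging items from several RSS 2.0 documents into one reverse-chronological stream, which is essentially what an RSS reader does to build a "personal newspaper." The two feed strings are invented stand-ins for real feeds; a real reader would fetch these over HTTP.

```python
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

# Two tiny hypothetical RSS 2.0 documents standing in for real subscribed feeds.
FEED_A = """<rss version="2.0"><channel><title>Feed A</title>
<item><title>Older story</title><link>http://a.example/1</link>
<pubDate>Mon, 01 Sep 2008 09:00:00 GMT</pubDate></item>
</channel></rss>"""

FEED_B = """<rss version="2.0"><channel><title>Feed B</title>
<item><title>Newer story</title><link>http://b.example/1</link>
<pubDate>Tue, 02 Sep 2008 09:00:00 GMT</pubDate></item>
</channel></rss>"""

def parse_feed(xml_text):
    """Yield one dict per <item> in an RSS 2.0 document."""
    channel = ET.fromstring(xml_text).find("channel")
    source = channel.findtext("title")
    for item in channel.findall("item"):
        yield {
            "source": source,
            "title": item.findtext("title"),
            "link": item.findtext("link"),
            # RSS pubDate uses the RFC 2822 date format email tools understand.
            "published": parsedate_to_datetime(item.findtext("pubDate")),
        }

def aggregate(feeds):
    """Merge items from all feeds, newest first -- the 'personal newspaper'."""
    items = [item for feed in feeds for item in parse_feed(feed)]
    return sorted(items, key=lambda i: i["published"], reverse=True)

for item in aggregate([FEED_A, FEED_B]):
    print(item["published"].date(), item["source"], "-", item["title"])
```

Note that nothing in this loop vets the sources; the aggregator trusts whatever feeds it is handed, which is exactly the question the rest of the activity takes up.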

Students worked in small groups to evaluate four different sites that aggregate news and information. We looked at Google News, NewsCred, Alltop and digg. I selected these sites because I felt they represented different approaches to selecting content for their aggregated feeds. I encouraged students to consider some questions as they examined these sites: Is the process for content selection transparent? In other words, does the site describe how it aggregates the information it presents? To what extent do you feel the content is credible (subjective 1-10 rating scale with 10 being most credible)? What can you offer as a rationale that supports your credibility rating?

The ensuing discussion was interesting. There actually seemed to be a general consensus among the students about the credibility of content on each of these sites. I’ll talk about them briefly here from least to most credible as viewed by the students and try to tease out some overall themes.

Alltop was viewed as the least credible among the sites, with students rating it middle of the road at best. There is a preponderance of blog content on Alltop, and the students maintained a general skepticism of information on blogs, even while acknowledging that blogs can be interesting to read and quite credible. In addition, there was uncertainty about the selection process for the blogs aggregated on Alltop. It seemed as though blogs were aggregated through a combination of self-nomination and then peer review through readership, which determined the shelf-life of a blog. This gave Alltop a clubbish feel, and that diminished the perceived credibility of its content.

digg held slightly greater appeal for students, perhaps due to familiarity with the site. Students seemed to find value in the user rating process that digg employs – the more “diggs” a story or site receives, the better or more credible it is presumed to be. However, students also acknowledged that digg tends to cater to a particular demographic, and that content there may not always interest the general population. So while they favored the “voting” process for determining credibility, they also acknowledged that the most popular content on digg was not always the most credible.
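The students' observation that popularity and credibility diverge can be made concrete with a small sketch. The stories and numbers below are invented for illustration: each story carries both a vote count (a stand-in for "diggs") and a separate 1–10 credibility rating like the one the students used, and the two rankings come out in different orders.

```python
# Hypothetical stories: user votes ("diggs") plus a separate 1-10
# credibility rating of the kind the students assigned in class.
stories = [
    {"title": "Celebrity gossip",    "diggs": 950, "credibility": 3},
    {"title": "Wire-service report", "diggs": 120, "credibility": 9},
    {"title": "Tech rumor",          "diggs": 430, "credibility": 5},
]

# digg-style ranking: most votes first.
by_popularity = sorted(stories, key=lambda s: s["diggs"], reverse=True)

# Ranking by the credibility rating instead.
by_credibility = sorted(stories, key=lambda s: s["credibility"], reverse=True)

# The two orderings diverge: the most-dugg story is the least credible here.
print([s["title"] for s in by_popularity])
print([s["title"] for s in by_credibility])
```

A vote count measures attention, not accuracy, so any aggregator that sorts purely by votes inherits exactly the gap the students noticed.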

NewsCred was a site that not many students had heard of prior to the activity. It represents a hybrid model in which custom feeds can be created from established, reputable sources, and individual stories and sources receive a credibility rating from users. Students seemed to value what NewsCred was trying to accomplish by making the vetting process for its feeds more transparent, but, lacking a complete understanding of that process, some remained uncertain about the ultimate credibility of the feeds. Overall, however, NewsCred was viewed quite favorably by students in terms of the perceived credibility of its aggregated content.

Google News was generally viewed as the most popular and credible source for aggregated information feeds. One student commented that Google is a “household brand” and is simply trusted. A few students, however, noted that Google News runs on the same kinds of algorithms that drive search results, and that these can be “gamed” or manipulated. So the top stories in Google News are not necessarily the most credible or even the most relevant. Students also brought up the recent mistake in which a 2002 story about United Airlines’ bankruptcy resurfaced as a top story in Google News and sent the company’s stock tumbling. Mistakes like this were seen as an occasional consequence of an automated system with no human intervention or oversight.

As I reflected on this experience with my students a few points emerged that I continue to think about:

1) Students are aware of the range of reliability and credibility of specific sources (e.g., NY Times, BBC, CNN, etc.), and they occasionally seek these out for top stories and breaking news.

2) The credibility of aggregated feeds seems to be shaped largely by brand perception (Google is a “household brand”) rather than by an understanding of any vetting process.

3) Students place value on a vetting process where users can vote on the popularity or credibility of a story or source. Human intervention is seen as valuable, and students rely on their social networks for identifying interesting and relevant content on the web. Credibility is shaped by views of the social network.

As the demand for quick and mobile access to news and information continues to grow, aggregation is positioned to play a defining role in the ways we obtain information. And while providers of aggregation services need to strive to be more transparent about their vetting processes, it seems we also need to be thinking about ways to engage students in thinking critically about the process of aggregation. I’d be very interested in your views on this and whether you even see it as an issue. If so, what questions and suggestions do you have for helping us all better understand how aggregation is changing the game?

{Image credit: Picture Perfect Pose}