2.0 or 2.Faux?: Wrapping our heads around digital assessment – An interactive article
By Kristin Fontichiaro   

What’s the problem?

This article began in an email conversation with Susan La Marca, Synergy’s editor. “Have you seen what I’ve seen over on your side of the world?”, I wrote. “Are you seeing well-intentioned teachers who accept an administrative mandate to incorporate more technology but struggle to make it rigorous? Have you seen teachers carve out precious instructional time for technology products that fall short of the rigour you’d see in paper-and-pencil work? Have you seen a general sense that we’re not quite sure how to have a conversation about rigorous learning with technology?”
I clicked SEND and hoped she’d reply that the rudder of the assessment and evaluation ship had long been set right in Australia. Moments later came the response. “It is of real concern – people are blinded by the bells and whistles rather than seeing any new possibility as just another tool.” Rats. Turns out we’re all in the same boat – that web 2.0 has brought an abundance of new tools and unprecedented aesthetic appeal, but that we’re having a hard time separating the cake from the frosting when it comes to rigour and curricular relevance.

. . . we need to develop a common vocabulary and a common lens for evaluating digital work and our own instructional practice.

The time is long overdue for a deep, thoughtful conversation about how we use technology to promote more rigorous work. To do so, we need to develop a common vocabulary and a common lens for evaluating digital work and our own instructional practice. In this article, I’ll propose a lens to look at student work and instructional design that has helped me gain some vocabulary and traction, and I hope you’ll respond with your own thoughts so that we can continue to tweak the model.
For this to happen, we have to take this article beyond the pages of Synergy and into the open web. So pull up a chair, fire up your laptop, and surf over to <http://digitalrigor.wikispaces.com>, the site we’ll use as our home base throughout the article. You can log in using slavreader as the username and synergy as the password. Let’s look at some digital work together and begin a conversation about what really matters when students create digital work.
First, let’s begin by looking at two common educational technology flaws: little input and little output.

Scenario 1: Big output, little input

Many school librarians recognise this scenario. Your building principal comes to you, as the building’s ‘tech expert’, with great excitement about an animated tech product a student or class has created and is thrilled to share it with you. She sees the animation – a series of figures whose most recent Tweets pop up over their heads – as an excellent example of the 21st-century term ‘crowd-sourcing’, and the quality of the animations points to excellent use of technology skills. Plus, imagine how intrinsically motivating it must have been for the student to create! She hopes that you agree, because this seems like the kind of project that should be replicated and displayed at the upcoming Parent Night.
You watch the animation later that day (see Scenario 1 on the Digital Rigor site). What are your initial instincts upon viewing it? If this were student work, what might you identify as the skills and dispositions necessary to create this work? What keywords would you use to describe it? Use the wiki’s Discussion tab to share your thoughts.
Continuing the scenario, you decide to investigate how the animation was created. You discover that it was made with Twitter Parade (http://isparade.jp), and that creating a Twitter Parade is as simple as typing in your own Twitter username or a keyword. Everything else – the animation, the music, the word balloons that pop up, the selection and retrieval of Tweets – has been generated by the software developers. Though the output is impressive, the student’s input was just a few keystrokes.
Web 2.0 tools are often created for entertainment purposes, not educational use. Entertainment tools like Twitter Parade can be wonderful toys and extraordinarily fun to create and share. But in this case, a quick peek behind the scenes reveals that the real work of the video was done by the software developers and engineers, not the student. The kudos belong to the designers, not the student. It’s a case of high output, low input – and nonexistent student learning.

Scenario 2: Big input, little output

You are a primary school librarian. The lab has been busy the past few weeks as middle-grade students work on biography presentations. The class worked on research for about an hour on the first day and has spent the next two weeks creating a PowerPoint to demonstrate what they learned. View Scenario 2 in the wiki and use the linked rubric to evaluate the work. What score does the rubric yield? Does that score reflect your gut instinct about the quality of the work? Please share your ideas in the Discussion tab.
Many of us have made rubrics like the linked sample to make certain that students showed understanding of the technology tool, perhaps to meet a district technology curriculum. However, should the majority of the grade on a two-week project go to technical manipulation and almost none to content? (You’ll notice that the sample work completely overlooks the rubric’s requirement to write in various forms of poetry, yet because of how the rubric is constructed, it is possible to award it a high score nonetheless.) The rubric focuses mainly on procedural work and mastery of technology skills, with content quality and evidence of learning restricted to very minor areas of evaluation.
What do we say about projects that take far more time to create than they did to research and process, these high input, low output examples? Is this the most effective use of our students’ time?
We can’t measure rigour by how hard the students work. We need a more focused lens in order to facilitate a conversation about rigour.

A model-in-progress for assessing work

Having worked with classroom teachers and fellow school librarians as both a district staff development facilitator and a school librarian, I have seen clearly that years of in-services and administrative mandates have led to technology adoption without much thought about technology rigour. The focus has been on doing technology, not reflecting on how it affects learning. In thousands of classrooms and libraries, teachers lack the vocabulary to discuss what should be assessed, why, and to what degree, when technology is part of the instructional design.

The focus has been on doing technology, not reflecting on how it affects learning.

I needed a way to quickly have conversations that moved our focus from tool adoption to examination of rigour. I reflected on various 21st-century learning documents (AASL, 2007; ISTE, 2007; Partnership for 21st Century Skills, 2009) and began to brainstorm options, consulting with colleagues along the way. About ten drafts later, this graphic emerged (it is also available on the home page of the Digital Rigor wiki). At the centre of the graphic is our goal: rigorous technology-based learning. Pointing toward the goal are four arrows that represent four continua.

Decontextualized vs. authentic: work in context

The first arrow, in the top-left corner, shows a range of possibilities for how the work is situated within the student’s life. At one extreme is decontextualised work: work that seems completely separated from the student, her life, her colleagues, her interests, and her prior knowledge. A classic example of decontextualised work is the poster-board country report on a country the child has never heard of. The audience for such work does not extend beyond the teacher; it is of little interest to colleagues, the real world, or the student herself. The facts gathered are essentially outliers; they are disconnected, stand-alone information.
On the opposite end is authenticity: work that resonates within the world of the student. Authentic topics are those that have relevance to the student’s world. In the case of the country report, perhaps the student is researching her own country of origin, or the host country for a sporting event that interests her. By rooting such topics in the student’s locus of existing knowledge, the student does not begin from scratch; instead, she builds on a stronger foundation of prior knowledge (Darling-Hammond and George Lucas Educational Foundation, 2008). Authentic topics lead to authentic questions, which help move the project from rote retelling to deeper information-seeking behaviour and synthesis (more on this below).

Authentic work also relates to authentic problems that, if solved by the student, can have resonance beyond the classroom and into the real world.

Authentic work also relates to authentic problems that, if solved by the student, can have resonance beyond the classroom and into the real world. The awareness of such potential impact can bestow upon the student what Heathcote and Bolton (1995) call “mantle of the expert”, imbuing students with an authority that can grant them more courage for investigations and a sense that their work matters and so should be done well. While authenticity does not, on its own, guarantee rigorous work, it lays a foundation and provides a setting in which rigour can flourish.

Teacher-directed vs. student-centred: Who pulls the cognitive weight?

This summer, we asked a class of more than 50 University of Michigan (USA) pre-service teachers about the classrooms and teachers who had most influenced them. Among the teachers they remembered with greatest fondness were those who allowed them – the students – to be at the centre of their own learning. The second continuum addresses this idea of student-centred learning. Teacher-directed work is at one end of the continuum, and student-centred work is at the other. Too often, in our hurry to achieve curriculum objectives, we over-package curriculum content instead of sending our students forth to tackle content unmediated. Instead of letting learning get messy, we glance at the clock and over-rely on teacher-created graphic organisers and instructions for how work should be completed.

The rigorous cognitive work needs to be done by the learner, not the teacher.

In doing so, we shift the responsibility, and teachers begin to work harder than the students. This can accidentally create a cycle of distrust and learned helplessness. When we broadcast that students need every step explained to them, they receive a subliminal message that they cannot be trusted to work and achieve successfully on their own. The rigorous cognitive work needs to be done by the learner, not the teacher.
Gardner (in Huber, 1966) says: “all too often we are giving our young people cut flowers when we should be teaching them to grow their own plants”. When teachers predetermine the content of technology products too specifically (for example, by providing students with an outline that specifies the content that must appear on each PowerPoint slide), students spend more time following directions than developing their own viewpoints and responses. Students spend less time thinking and more time being procedural and ticking off boxes on checklists.
It should be said that student-centred learning does not mean a loosey-goosey, ‘do whatever you want’ free-for-all. Teachers play a valuable role as guide and scaffolder. What should be avoided is a teacher-dominant mentality (‘sage on the stage’) that accidentally oversimplifies the work task, infantilises our learners, and reduces the growth rate of their cognitive ‘muscles’ over time.

Automating vs. informating: Adding value

At the University of Michigan School of Information (USA), our motto is, ‘Connecting People, Information, and Technology in more valuable ways’ (emphasis added; University of Michigan, 2010). This idea – that technology should contribute to more valuable interactions – informs the third continuum: automating versus informating. Zuboff (1988) used the word informating as a synonym for more valuable to define the ways in which mechanisation and technology could produce additional data and information beyond what human power could generate. She points out that replacing a grocery store cashier (who hand-keyed each item into the cash register) with scannable UPC codes did more than help complete a purchase transaction (automating). The mechanisation also permitted grocery stores to conduct passive inventory and record customer purchasing habits (informating; generating additional data that served additional purposes).
Consider a technology tool like Wordle.net, which generates word clouds, with more frequently used words appearing larger on the resulting graphic. If we ask students merely to input words that describe themselves to auto-generate a colourful graphic, we essentially ask them to do the same thing they could do with paper and pencil. And, while such an activity might contribute to the social climate of a classroom, it does not contribute to rigorous intellectual learning. In this case, technology adds no real cognitive value to the activity – it merely automates what was done with paper and pencil. In fact, given the additional cost of technology over paper and pencil, one could argue that this kind of technology use takes financial resources away from other learning needs.
If, however, we ask students to cut-and-paste the text of a pivotal government document into Wordle.net, they can now analyse why certain words might appear more frequently and draw conclusions about the point of view or perspective of the document’s author. This approach uses technology to fuel further discussion. The first example does not.
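For readers curious about what sits under the hood of this kind of analysis, the frequency counting that drives a word cloud can be sketched in a few lines of Python. This is an illustrative sketch only, not Wordle’s actual implementation; the stop-word list and sample text are my own assumptions:

```python
from collections import Counter
import re

def word_frequencies(text, stop_words=None, top_n=10):
    """Count word occurrences -- the core of a word-cloud tool:
    more frequent words would be rendered larger."""
    # A tiny illustrative stop-word list; a real tool would use a fuller one.
    stop_words = stop_words or {"the", "of", "and", "to", "a", "in", "that"}
    # Lowercase and pull out runs of letters (keeping apostrophes).
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in stop_words)
    return counts.most_common(top_n)

# A student might paste in a pivotal document and ask why certain
# words dominate -- the 'informating' use described above.
sample = ("We the people, in order to form a more perfect union, "
          "establish justice, insure domestic tranquility")
print(word_frequencies(sample, top_n=3))
```

Seeing that a handful of words dominate a document is exactly the raw material for the classroom discussion described above: why did the author lean on those words, and what does that reveal about point of view?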
Similarly, a KidPix slide show where a student draws and labels a cell? Automation. A KidPix slide show in which a student creates an animation of cell mitosis, which helps him gain knowledge of how the cell changes over time? Informating.
Note that in the automating/informating discussion, we want to be careful not to glorify or demonise any particular tool. It is the way in which a tool is used that determines its automating or informating capabilities.

Retelling vs. synthesis: Creating new knowledge

Get a group of librarians in a room, give them five minutes, and they’ll barrage you with stories from the field about low-level student work. What they are identifying is retelling, or merely restating what someone else has said. The most frequent complaint is PowerPoints with robotic bullet points copied right from the book or the web. The meteoric popularity of Glogster.com, which facilitates the collection of audio, images, video, and text amid an eye-poppingly amazing animated background, can sometimes result in the mere collection of facts without processing or understanding them.
McKenzie (1996) called this process ‘word moving’, and the frequency with which ‘research’ becomes ‘word moving’ is alarming. Technology facilitates cut-and-paste unless the teacher engineers an environment in which that low level of recall is not acceptable. Sadly, the continuing proliferation of ‘country reports’, ‘animal reports’, and ‘biography reports’ that merely ask students to spit back discrete facts does our students a disservice and lowers the bar of rigour. If instructional design is not effective, the cut-and-paste environment creates an opening in which students can merely repeat – or retell – information from a resource without comprehending, comparing, drawing inferences, or synthesising work. Human behaviour tells us that students will strive to do work that is ‘good enough’, fulfilling only the minimum expectations, unless they are expected to do more.
Is this sense of ‘good enough’, or what Simon (1983) called ‘satisficing’, enough for our students to thrive in our current fast-paced global economy? Can we realistically believe that merely retelling someone else’s facts creates a competitive member of the work force? Of course not. Our students need synthesis. Great student work should push our students to process information and make sense of it. Professional researchers go beyond reporting – they knead their findings, compare them to existing knowledge, draw conclusions, and share their new learning with each other.

Content and curriculum: The pond we swim in

A student project can meet all four criteria and still fail as an educational learning product. It could be authentic, student-centred, informated, and synthesised. However, there is a missing piece. As Samet (2010) points out, “curriculum is the pond we swim in”. To be a successful educational technology experience, projects must connect to the curriculum. As obvious as this sounds, consider the number of times technology promotes a socialisation agenda, not one of learning: Mother’s Day cards and valentines, introduce-yourself posters, student-created videos that wish the principal Happy Birthday, or emailing friends.

The rapidity with which tools change and new tools come on the market no longer supports ‘tech for tech’s sake’.

In an age where most technology tools used in K-12 require minimal training, we can no longer justify the past-generation excuse that ‘they need to learn the tool to prepare for the workforce’. The rapidity with which tools change and new tools come on the market no longer supports ‘tech for tech’s sake’. Spending time on these ‘social’ projects at the cost of time spent on rich content exploration may ascend to the level of academic malpractice.

Creativity: The silent partner

There is a lot of talk in 21st-century learning circles about the importance of creativity, and yet it does not appear on this working model of rigour. That is not an accident. In beginning conversations about rigour, ‘creativity’ is one of the words with the widest range of definitions. Some teachers believe that any tool that creates a visual result is, de facto, creative. Upload 20 photos into Animoto.com and let Animoto spin and flip them? Creative! Twitter Parade? Creative! When working with teachers who hold this mindset, it can be very difficult to redirect a long-held belief system and encourage an alternate approach. Instead of leaping directly into a discussion of rigour, the conversation stalls and loses traction. Others, myself included, take a different approach, defining creativity as ‘creative thinking’ (ISTE, 2007) and seeking opportunities where, in essence, creativity is a partner in synthesis. However, until a school culture has embraced building-block definitions for discussing rigour, ‘creativity’ can muddy, rather than clarify, the assessment waters.
Now let’s take the model-in-progress for a spin and see how it can help us have a conversation about student work.

Scenario 3: Videotaping a report

Let’s head back to http://digitalrigor.wikispaces.com and look at Scenario 3. In this video, a primary school student is sharing her paper animal report. This kind of technology integration – filming children sharing traditional projects – is relatively common in American primary schools. But is it rigorous? Take a look and evaluate it against the four continua. Where would you place this video in each of the four categories? How did the model enhance your ability to discuss the value of the work itself? Was the exclusion of ‘creativity’ a help or a hindrance? Use the Scenario 3 Discussion tab to share your thoughts. My take is that this video is:
  • More decontextualised than authentic. The blank affect of the child indicates that she is not enthusiastic about her work or about sharing it with others, though one could argue that the video being on YouTube lends an air of authentic audience. 
  • More teacher-directed than student-centred. The student’s project has prompts pre-printed on the paper, so her information search was limited to finding and copying the answers. Questioning and curiosity are not explicitly included in the instructional design. The student’s work must proceed within the confines established by the teacher. 
  • More automating than informating. Although one could argue that videotaping allows for easy digital distribution, which would be less possible with paper-and-pencil tasks, I am doubtful that the student’s learning increased in more valuable ways as a result of the introduction of the video camera. 
  • More retelling than synthesis. The student merely reports answers without bringing those ideas together into new understanding or new knowledge. Had the student used the pre-printed form to gather information, and then applied the information (for example, creating an advertisement for why someone should visit the animal at the zoo), the project would move down the continuum toward synthesis. Young students may be emerging readers, but they have the capacity to think about what information means. 

Scenario 4: Hamlet’s revenge

The wiki’s Scenario 4 provides a much more complex example for evaluation. Full of technical effects and humour, the ‘Hamlet’s Revenge’ video gives us further practice with the lens of rigour. View the video on the wiki and add your thoughts to the Discussion tab; consider how effectively the student justifies the comparisons he makes. This video is extremely fun to watch, but the humour sometimes obscures incomplete thinking. Examining it through the four continua, my assessment was that this project was: 
  • More authentic than decontextualised. This is a classic example of something Ralph Fletcher asserts in Boy Writers (2006): that many girls write for their teachers, but many boys write for one another. This video probably brought the house down when it premiered in class! 
  • Both teacher-directed and student-centred. Despite the student’s clever mashup of music, camera angles, and humour, the teacher’s assignment is evident: choose a theme from Hamlet. Provide three examples from the text to support that theme, and make three connections to that theme in modern life. The student works extremely hard to add his own personality to the project, but ultimately, it is held back by the formula of the assignment. 
  • Mostly automating; some informating. Despite the genuinely entertaining frame of this video, the actual information is mostly quotes and restated content. (Google ‘Hamlet revenge’ and you’ll see that his examples match those of the top result.) The inclusion of a photo from a stage production does not add value and is not explained. 
  • Midway between retelling and synthesis. This project is quite close to synthesis, because the student does provide modern-day examples of revenge in popular culture. However, he does not explain those connections. The September 11 example he gives packs an emotional wallop, but the viewer isn’t clear who is avenging whom. Does he mean that the terrorists were seeking revenge? Or that the American people sought revenge after the attacks? To be more successful in this arena, the instructors could coach the student to make explicit his implicit connections.


Having looked at several examples of student work, and taking the model-in-progress for a spin, now it’s your turn. I hope you’ll visit the Reflection page of the Digital Rigor wiki and share your thoughts about the effectiveness and utility of the model and its design for your work. Share your suggestions and additions so we can continue to refine the model. Assessment of the effectiveness of digital tool integration is not something we learned in our pre-service work as educators or librarians. Let’s work together to build a common vocabulary so we can learn to look at our assessment practices more meaningfully.


References

American Association of School Librarians (2007) Standards for the 21st-Century Learner, Chicago, IL: American Association of School Librarians.
Darling-Hammond, Linda, and the George Lucas Educational Foundation (2008) Powerful Learning: What We Know About Teaching for Understanding, San Francisco, CA: Jossey-Bass.
Fletcher, Ralph J. (2006) Boy Writers: Reclaiming Their Voices, Portsmouth, NH: Heinemann.
Gardner, John, quoted in Huber, Curtis (1966) ‘Man, Morals, and Mass Education’ in The Phi Delta Kappan, 47(7), pp. 382-386. Accessed August 5, 2010 at <http://www.jstor.org/stable/20371599>.
Heathcote, Dorothy, and Gavin M. Bolton (1995) Drama for Learning: Dorothy Heathcote's Mantle of the Expert Approach to Education, The Dimensions of Drama series, Portsmouth, NH: Heinemann.
International Society for Technology in Education (2007) National Educational Technology Standards for Students. Accessed August 5, 2010 at: <http://www.iste.org/nets>.
McKenzie, Jamie (1996) ‘Making WEB Meaning’ in Educational Leadership, 54(3), (Nov.), pp. 30-32.
Partnership for 21st Century Skills (2009) ‘P21 Framework Definitions’. Accessed August 5, 2010, at: <http://www.p21.org/documents/P21_Framework_Definitions.pdf>.
Samet, Raya (2010) ‘Personal Communication’ (June 6).
Simon, Herbert A. (1983) Reason in Human Affairs, Stanford, CA: Stanford University Press.
University of Michigan School of Information (2010) ‘Home page’. Accessed August 5, 2010, at: <http://si.umich.edu>.
Wiggins, Grant, and Jay McTighe (2008) ‘Put Understanding First’ in Educational Leadership 65(8), pp. 36-41.
Zuboff, Shoshana (1988) In the Age of the Smart Machine, New York, NY: Basic Books.
Kristin Fontichiaro is a clinical professor and coordinator of the school library program at the University of Michigan, Ann Arbor (USA). Her most recent publication is 21st-Century Learning in School Libraries (Libraries Unlimited, 2009). She blogs about school libraries at: <http://blog.schoollibrarymedia.com>.