With digital learning devices now common in classrooms, educators and policy makers have made ‘interactivity’ and ‘attention span’ the benchmarks of progress. But with psychologists and academics constantly warning us about the permanent brain and behavior changes that our ‘connected lives’ are leading to, are we forcing our children to learn and grow up in an environment riddled with clichés, confusion and contradiction, asks Nilofar Ansher
The term ‘digital native’ in the headline of this blog would ensure that almost half the people who cursorily glance at a page on technology or digital culture (having happened upon it randomly through a tweet or a Facebook share) will not care to read beyond the introduction. What is it about the phrase that invites skepticism, or worse, dismissal from scholars, media and cultural practitioners, and a section of the public alike?
For digital natives like me who study the frameworks and mechanics of how we began self-subscribing to this moniker, that is a cause for concern. Dismissing an entire ecosystem of people from a range of backgrounds, qualifications, talents, and purposes denies them the opportunity to reach the very people they aim to engage and collaborate with – you!
What has stuck like industrial adhesive is a decade-old behavioral summation of youngsters and their gadgets: “[They] have spent their entire lives surrounded by and using computers, videogames, digital music players, video cams, cell phones, and all the other toys and tools of the digital age”, writes Marc Prensky, the American educationist and writer who coined the term ‘digital natives’ in 2001 to refer specifically to American students in educational establishments (read the full text here).
The operative criterion for being a digital native is the span of time youngsters have spent interacting with digital technologies. “[Today’s] average college grads have spent less than 5,000 hours of their lives reading, but over 10,000 hours playing video games (not to mention 20,000 hours watching TV)”, Prensky writes. His text doesn’t take into account the vast differentials in usage that prevail even among those from similar or identical socio-economic backgrounds, and it leaves out questions of access, ownership, and the nature of usage for scores of teenagers and young adults.
“The distinction between ‘native’, ‘settler’ and ‘immigrant’ does not only separate chronological generations; it also re-awakens the debate between the offline and online realities that preceded the emergence of the term. From a spatial point of view, it also distinguishes between the places of birth of different generations…In the digital context, however, the chronological order is reversed. For digital natives were not born into a digital ‘terra nullius’; digital spaces were conceived, shaped and already inhabited by those referred to as ‘settlers’ and ‘immigrants’. Ironically, it is the settlers who set the grounds for natives, and whose practices precede those of the natives”, writes Anat Ben-David in ‘Digital Natives and the Return of the Local Cause’ (Book 1, To Be, ‘Digital AlterNatives with a Cause?’ published by CIS and HIVOS).
That digital natives have an inevitable claim to being the native users of a technology, when the definition is narrowed down to the number of hours they spend in habit-forming reflexes, reflects a reductionist rhetoric. If you extrapolate Prensky’s findings, youngsters with fewer than 10,000 hours of TV watching or gaming would invariably be misfits in the digital age: not having had enough ‘practice’ with devices, they would, by extension, be judged non-performing in classrooms (and non-conformist outside them). Not to mention the millions of kids from less privileged socio-economic and even cultural backgrounds (several American and immigrant communities and religions do not favor modern amenities and consumer durables) who would now be at an even greater disadvantage because of a presumed lack of facility with modern tools and modes of instruction.
We are seeing a movement in which ‘attention’ and ‘interactivity’ are treated as the currency of a student’s progress. What happens when educational institutions set learning goals that require mastery of specific devices before deeming a student capable of handling advanced courses? Ease of use, speed, interactivity and facilitation of instruction are cornerstones of learning in the digital century – and there is certainly no harm in learning about the solar system or thermodynamics through a new medium (such as an iPad or a projector). My contention is not with the device itself, but with how the values attached to adopting specific devices get translated into a culture where learning through tablets is seen as an activity of higher value than going on an astronomy field trip. This view treats the product as more instrumental in shaping how youngsters learn (and respond to stimuli) than the instructor and the environment, which deserve equal responsibility – something a pedagogy-based (practice and problem-solving) ecology would nurture.
In this scenario, the stereotype associated with digital natives – of being plugged into their devices 24×7, with no empathy for social causes or one-on-one interaction – only gets perpetuated. Academic research into the brain, behavior, psychology, and sociology of play and learning in the lives of digital natives certainly doesn’t help. It presents a doomsday scenario in which our attention spans are shrinking and our interpersonal skills dwindling. It doesn’t bode well when the dominant outlook of society towards young, thriving, intelligent teenagers is one of distrust and worry, a ‘problem’ to be solved. These students will be the engineers, teachers, policy makers, judges and scientists of tomorrow, and yet we are incentivizing them to learn using the very methods (devices) that scientists claim impair their capacity to judge, solve problems, think laterally or lead.
This is the contradiction we need to highlight in public narratives. While alarmism about any new technology is common in public discourse, the criss-crossing conclusions and inferences drawn by academics, psychologists, educators and industry figures are teetering dangerously toward chaos. Academic polemics discourage the use of social media and digital devices, citing a rise in behavioral problems, while policy makers incentivize the adoption of those very devices (such as iPads) in schools in a bid to engage the dwindling attention of students. The digital natives are right on track to fulfil the doomsday prophecy of growing up into the confused adults of the future, and we, the digital immigrants, are to blame.
For more information on the Digital Natives with a Cause Project, visit the website.