The distinction between reality and unreality has intrigued society for thousands of years. When I say society, though, I suppose I'm discussing only those individuals who actually want to discern the difference between the two, since it is readily apparent that a great many people rather enjoy the blur between the two worlds. In the past, "unreality" could be anything from a dream sequence to joining a secret cult to playing for (or cheering on) your favorite sports team. Now, with technology at the helm, we have another life available: a virtual one.
This is not an entirely new development. For ages, writers and readers have immersed themselves in fantasy worlds, sometimes as an escape from current events, but often as a way of emulating pressing world matters. The line between fiction and non-fiction can be a bit hazy, whether we are talking about books, radio programs, or TV shows, especially "reality TV." Computer relationships, networlds, and the like supplant our former outlets because they do, indeed, take on a life of their own.
In "Life on the Screen: Identity in the Age of the Internet," author Sherry Turkle compares corporate role-playing situations to the role-playing contained within a virtual self. She mentions how individuals start to resemble "little" businesses, with one person collectively playing the roles of several different employees (Turkle, p. 256). Turkle also compares characteristics of people who enjoy online role-playing games, or MUDs, to some of the symptoms found in individuals with multiple personality disorder (p. 260). I can understand the relationship, but she doesn't seem to discuss the crucial difference thoroughly: people playing online games have control over what they do, while people with such disorders cannot always exert the same kind of authority over their own minds.
Then again, there have been stories about people who have died due to sleep and food deprivation from playing video games too long. Yeah, sort of scary, isn't it?
I'm curious as to whether author Linda M. Harasim thinks the same today as she did in her 1993 article "Networld: Networks as Social Space." Her quote needs no explanation: "Computer networking does not replace other forms of human communication; it increases our range of human connectedness and the number of ways in which we are able to make contact with others" (Harasim, p. 16). I agree and disagree with her at the same time, if that is possible. "Talking" online is definitely different from talking in person. But there is plenty of face-to-face communication that can be, and has been, eliminated by email, IM, text messages, and the like. It's pretty nice to be able to send one email or text message to five friends and find out what's going on Saturday night, as opposed to contacting each person individually.
This carries over into the workplace, too, because people can take part in a group project by reviewing information in a central location, somewhat like meeting notes. A group email is fairly similar to an interdepartmental memorandum, right? I would assume that counts as "human communication," but Harasim also implies that broadcast media will always be a one-way street. Even though websites now have online polls and virtual communities that can be seen as enhancements, these outlets have always had ways to gather feedback, whether through phone calls, focus groups, classifieds, etc. The difference, as the author states, is how commonplace asynchronous communication has become, and the fact that it can lead to anxiety when people do not respond immediately (p. 24). I doubt anyone expects to read an article in the print version of the newspaper and immediately be able to talk to the writer, but newspapers are getting closer to making that possible.
Finally, we turn our attention to the purchasing of diplomas, which seems rampant these days. Is technology the reason for this atrocity? There is definitely a vicious circle of getting ahead in life, making more money, and needing extra lines on a resume to look more attractive to prospective employers. Author David Noble rightly comments on the move, 30 or so years ago, for industries and schools to form partnerships, which has basically amounted to an attempt to monetize education. It is rather scary to see the hundreds of for-profit companies that make money by pushing their respective agendas on universities and school districts.
To make matters worse, some of the learning tools are intentionally built to give the companies a direct view into the lives of the students, helping the companies get richer and maybe helping the students learn more. As Noble points out, the "courses are studying them." Teachers do the same thing by fine-tuning their lesson plans over years of using them. But what can a software package really learn about the ways humans interact with it? Is it possible for a program to take an honest, heuristic approach to determining the best way a group of children can learn and solve particular problems?
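To put the question in concrete terms, here is a minimal sketch, in Python, of the sort of crude heuristic such a package might use (every name here is hypothetical; this is not how Blackboard or any real courseware actually works): count each student's right and wrong answers per topic, and keep serving whichever topic looks weakest.

```python
from collections import defaultdict

class AdaptiveDrill:
    """Toy drill-and-practice heuristic: serve the topic where the
    student's observed success rate is lowest. A hypothetical sketch,
    not a real courseware algorithm."""

    def __init__(self, topics):
        self.topics = list(topics)
        self.attempts = defaultdict(int)  # topic -> answers seen
        self.correct = defaultdict(int)   # topic -> answers right

    def record(self, topic, was_correct):
        """Log one answer from the student."""
        self.attempts[topic] += 1
        if was_correct:
            self.correct[topic] += 1

    def success_rate(self, topic):
        tries = self.attempts[topic]
        # Unseen topics count as 0.0, so they get tried early on.
        return self.correct[topic] / tries if tries else 0.0

    def next_topic(self):
        """Pick the weakest topic; ties go to whichever comes first."""
        return min(self.topics, key=self.success_rate)

drill = AdaptiveDrill(["fractions", "decimals", "percents"])
drill.record("fractions", True)
drill.record("decimals", False)
print(drill.next_topic())  # "decimals" -- the weakest observed topic
```

Notice what those counters can't capture: why a student got something wrong. That's the part a teacher's year-over-year fine-tuning actually responds to, and it's exactly the part I doubt a package like this can learn.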
I was relieved to find out that Blackboard, a.k.a. WebCT, has been around since the early days of online educational packages. That at least explains why the software is terrible. This happens frequently in the programming world: people are too lazy to start over and build something new when they can just "hack" the old system to make improvements. It's like watching an asteroid pierce your row house and responding by changing the light bulbs in the dining room chandelier. What a great idea!
Harasim, L. (1993). Global networks: Computers and international communication. Cambridge, MA: MIT Press.
Turkle, S. (1997). Life on the screen: Identity in the age of the Internet. New York: Simon & Schuster.