Woes of Technology…

The continued wanderings of a newly minted librarian….

On our community college library website we have a really nifty set of Bibliographic Instruction (information literacy) tests… a pre-test and a post-test. These are designed for students to take before (pre) and after (post) a bibliographic instruction session with a librarian. They would be a wonderful source of feedback on what the students are getting out of the session, and for someone who has never used the library, the pre-test is quite informative in itself.

These tests even look cool… They are hosted by Zoomerang, which does online surveys, so these are kind of modified surveys. The pre-test is 19 questions, including the name of the professor and class. They are all multiple choice with nice radio buttons for selecting your answer. In all they are easy to use and should be an excellent tool for our librarians.

Notice the word SHOULD. These wonderful resources are, alas, not used. WHY? You ask… well, it is because of how they are hosted. The librarians don’t have access to any of the surveys/test answers. They must talk to another department, which collects all the information from the Zoomerang folks and then does who knows what with that information (disperses it to the appropriate departments, one would hope). Well, those folks don’t seem to check for submissions of library questions, and repeated requests have apparently gone unanswered. Of course, I am not exactly in the loop, so there could be other factors involved, such as the format in which Zoomerang sends results.

I guess my point is that this wonderful tool is not so wonderful because librarians don’t have direct control over getting the results when they need them. There are so many other issues we should be focusing our time on, such as thinking of ways to get students to take the pre- and post-tests (a small class assignment worth one to five points, or perhaps extra credit [both of which depend on the students’ professor in order to be effective, but which would not require the professor to do much other than telling the students to take the tests]). Instead these tests sit idle on the library website and our librarians have to think up other ways to get feedback.

Week 14

 Holding on to reality pt. 3 & Kellner’s Review of Borgmann:

 Holding on to reality pt. 3: “Technological Information-Information as reality”

 Ch. 11: “Elementary measures”

Borgmann (1999) discusses the ancient Greeks’ theory of atoms and goes on to describe the binary system and bits. These bits can be used to structure information about, for, and as reality. Borgmann (1999) tries to explain the various patterns in which these bits of information can be combined in order for us “…to understand how the characteristic bits of present-day information have insinuated themselves into our lives…” (p. 140).
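Borgmann’s point that bits structure information is easy to see in miniature: every character of this very text is stored as a pattern of bits. A small Python sketch (the 8-bit ASCII encoding here is my own illustration, not an example from Borgmann):

```python
def to_bits(text):
    """Return the 8-bit binary pattern for each character of `text`."""
    return [format(ord(ch), "08b") for ch in text]

# The word "bit" as the bit patterns a computer actually stores:
print(to_bits("bit"))  # ['01100010', '01101001', '01110100']
```

Each letter dissolves into a fixed pattern of 0s and 1s, which is exactly the kind of “elementary measure” the chapter describes.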

 Ch. 12: “Basic structures”

Borgmann (1999) starts this chapter by explaining that division is the most basic structure. When a mind divides, it makes a distinction between two things; in reality, a division is a difference. A division can also be an act of generation. Amoebas reproduce by the act of division (they multiply by dividing!). Borgmann (1999) continues to explain the historical evolution of the computer, from Charles Babbage’s designs to ENIAC. Computers “…cushion and comfort the human condition. In some way they disburden us from having to cope with the contingency of reality” (p. 144). Of course, so do other machines, not just computers. The engine and then the car disburden humans from having to walk or rely on horses or other animals for transportation.


Borgmann (1999) also goes on to explain Boolean logic, which has been so useful in the retrieval of information from databases. Borgmann doesn’t mention that use, but instead goes on to explain transistors, inverters, and resistors and how they essentially implement the NOT, OR, and AND gates of Boolean logic. Combining these gates, we get the computer’s CPU (Central Processing Unit) and the outline of an Intel chip.
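The idea that whole CPUs are just combinations of these three gates can be sketched in a few lines of code. Here is a half-adder (a circuit that adds two single bits), built only from NOT, OR, and AND; the half-adder is my own illustrative example, not one Borgmann gives:

```python
# The three basic Boolean gates, operating on single bits (0 or 1):
def NOT(a): return 1 - a
def AND(a, b): return a & b
def OR(a, b): return a | b

def XOR(a, b):
    # "Exactly one of a, b": composed purely from the basic gates.
    return AND(OR(a, b), NOT(AND(a, b)))

def half_adder(a, b):
    """Add two one-bit numbers; return (sum_bit, carry_bit)."""
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} + {b} -> sum={half_adder(a, b)[0]}, carry={half_adder(a, b)[1]}")
```

Chain enough of these adders together and you have the arithmetic core of a CPU, which is the scaling-up Borgmann describes.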


Early computers and their users shared an intimate relationship. The computer’s operations were transparent, and Borgmann (1999) compares the users’ sense of wholeness with an urban dweller being refreshed after spending time in the wilderness, living in a comprehensible world where one must carry water from a stream and cut wood for a fire. However, he goes on to say that this early vision of coherence and “surge of competence” was undone by the advancement of the computer. Borgmann (1999) compares the rapid advancement in computer technology with the difference between living in a log cabin, hauling water and cutting wood, and living in a high-rise building -“a move from a limited and laborious environment to a world of tremendous capacity, convenience, and speed” (pp. 164-165).


Ch. 13: “Transparency and control”


This chapter begins with Borgmann (1999) looking at the history of information technology (IT). IT is a result of the “convergence of two technologies: the transmission of information and the automation of computation” (p. 166). In other words, Borgmann (1999) defines it structurally as “information that is measured in bits, ordered by Boolean algebra, and conveyed by electrons” (p. 166).


Borgmann (1999) uses the examples of mapping to explain the problems of storing information. A small scale map only provides a brief overview of an area. A large scale map shows a smaller area and cannot provide an overview. To store the necessary maps to be useful (a large amount of information) requires piles of maps of all scales. When compared to digital storage the piles of paper seem almost ridiculous. As Borgmann (1999) points out, this is the genius of information technology-making large amounts of information available by conveniently storing massive amounts and letting us use it with processing and display devices.


The problem is that as computers do more calculations and more work for us, we understand the results less. A math equation that we puzzle out by hand is completely understood and absorbed by us. When that same equation is solved by a computer for us we do not have the same intimate understanding. Another example of Borgmann’s is in mapping. A map that we put together on a computer with a few mouse clicks is not understood as well as a map that is hand crafted on a table. Information technology provides information to a greater swath of people but the depth of understanding is much shallower.


Borgmann (1999) tells of a physics professor at MIT who says that his students no longer build computer simulations because they are too complicated. The simulations are bought and thus the inner workings are not known. “If the assumptions behind some simulation were flawed, my students wouldn’t even know where or how to look for the problem” (p. 176).


Ch. 14: “Virtuality and ambiguity”


Borgmann (1999) tries to explain the concept of resolution by giving a couple of examples. The first is about TV screens and how they are being improved and the new ones are “better”. By “better” he says they are larger and have higher resolution. His other example is about Bach’s music. A piece recorded on shellac disks and played with steel needles, amplified through vacuum tubes, has lower resolution than one recorded on a CD, retrieved with laser beams, and amplified with transistors.


Another example is how we transfer information. If someone says “I heard a great piece of music-it was Bach’s Cantata no. 10,” that message has information, but the amount that is received and understood depends on the recipient. A reasonably educated person would understand “Bach” and “Cantata,” but “Cantata no. 10” would most likely be meaningless. A famous conductor or music scholar would realize the full informational message because they would know the musical piece called Cantata no. 10 by Bach. In this case the information is of low resolution. However, if the same person provided the score along with the message, then more people would fully understand the informational message. The resolution would be even higher if, instead of a score, a CD of the music were provided. “As resolution rises, the demands on intelligence decline” (Borgmann, 1999, p. 181).


When the resolution is high enough the sign and thing become one. “The sign equals the thing, information has become reality” (p. 181). Borgmann (1999) acknowledges that information technology can require high human skill in some cases, however he continues: “But overall, and emphatically so in the realm of leisure and consumption, technology …in the engineering sense and technology…in the cultural sense have converged to obviate powerful skills and habits of realizing information” (p. 183). He continues his thought: “The computer, when it harbors virtual reality, is no longer a machine that helps us to cope with the world by making a beneficial difference in reality; it makes all the difference and liberates us from actual reality” (p. 183).


In describing virtual reality and virtual lives Borgmann (1999) mentions the position of some thinkers that cyberspace supersedes the actual world. He describes people in MUDs (Multi-User Domains), and we could now include Second Life in this category along with other online games. Even though the resolution in these is low (the scenery can’t really be mistaken for actual scenery), the participants can alter themselves in ways they can’t in the real world, and this makes them overlook the poor resolution. Some people feel this altered self is more “true” or “real” than their physical reality.


There is ambiguity to cyberspace. We still can’t escape reality. A shy person who is an eloquent speaker and intelligent friend through email, in reality stays a recluse who can’t carry on a conversation with anyone.


In cyberspace the lines between truth and fiction are blurred. Recently a teenager set up a webcam and recorded himself committing suicide, with a suicide note available for everyone as well as the live feed of him dying. Many people supposedly watched, but none reported it until many hours later, when his body was still not moving. Many thought it was a joke until they saw police officers through the webcam feed.


Ch. 15:  “Fragility and noise”


Technological information appears extremely strong and long reaching, but Borgmann (1999) says this strength is threatened by fragility and noise. Digital structures are strongest because they can be copied without error unlike material structures. A written work on paper that is deteriorating can be copied onto fresh paper. Something like a statue or architecture cannot be replaced exactly-the duplication might look similar but the material will differ slightly from the original.


Borgmann (1999) reminds us that we may have overlooked the fragility of technological information. Written material information can be read by people and thus doesn’t require a lot to endure-“paper, pencil, a teacher and a student” (p. 194). Anyone who is taught the language and writing system can keep recording and copying the information. There may be some errors over time, but these can often be resolved through textual study. Technological information is completely dependent upon supporting devices. These devices can be fragile and are subject to obsolescence.


Not too long ago we stored lots of digital information on large and small floppy discs. Now many computers don’t have floppy disc drives. Effectively, the material stored on those discs is lost. Many people use flash drives to store information, but if the flash drive is damaged then the information is gone (all of the information, not just a piece here and there as with a damaged piece of papyrus). The information we store on a CD-ROM is given a lifespan of 30 years (assuming that it does not become obsolete before that). The Odes of Horace have survived much longer because they weren’t digitized. We don’t have the originals from 20 BCE, but we do have a copied manuscript from the 9th century! That writing has survived over 1000 years-no technological device thus far shows any hope of matching that long-term survival rate. “Technological information is socially fragile because of our heedless rush toward more powerful technologies that condemn older ones to obsolescence and illegibility” (Borgmann, 1999, pp. 195-196).


Borgmann (1999) points out the structural fragility of our technological information. The massive complexity of our system means that despite all our redundant actions and analysis and attempts for standardization, there will always be “bugs” in the systems. These bugs have the potential for making these systems fail. The only thing that keeps these large computerized systems running is continuous monitoring by programmers and engineers who can step in and attempt to fix problems as they develop.


In spite of the limits of information technology Borgmann (1999) points out the amazing accomplishments. The task for us is to combine “the fluidity of information technology with the stability of the things and practices that have served us well and we continue to depend on for our material and spiritual well-being-the grandeur of nature, the splendor of cities, competence of work…” (p. 201).


Borgmann (1999) claims that cultural noise is sometimes getting in the way of nature and art:

“It is rather the profound fragility of the voices of reality that has allowed so much loudness and shrillness to invade the public conversation…cultural noise is merely annoying at certain times and truly injurious at others” (pp. 201-202).


Borgmann (1999) also claims that statements that the new information age “‘will change forever the way an entire nation works, plays, travels, and even thinks'” (p. 203) are harmless but distracting propaganda. He is much more concerned with what is going on in higher education. Borgmann (1999) acknowledges the benefits scholars have reaped from technology, being able to reach people and data rapidly for research. Students get training in the needed computer skills, but Borgmann (1999) is not convinced it is enough: “these changes, however, leave a large part of instruction more or less in its traditional shape and fail …to follow through on the possibilities of technological information” (p. 203).


In the world of natural information, teaching was showing and demonstrating and learning was imitating and practicing-this led to the education model of apprenticeship (Borgmann, 1999). With cultural information there were texts, plans, scores, etc. that were quite intricate. With this, literacy had to be taught, and students had to learn the skill of realizing this cultural information. As Borgmann (1999) points out-“since most learning came to be set down in writing, reading and comprehension became focal points of education” (p. 204). We get our lectures from antiquity and the Middle Ages, when texts were rare. They would be read out loud to students and then commented on or explained. This same structure of universities and lectures has passed into our modern higher education. Apprenticeship also continues, both in vocational training and in labs, seminars, and workshops. Borgmann (1999) feels that all of these are being endangered: “All these traditional forms of teaching, however, are under attack by the proponents of technological reform” (p. 204).


Borgmann (1999) states that the apprenticeship method of teaching yields an artisan, and teaching by lecturing results in a scholar. Since distance learning students have easy access to lots of information, rather than committing this information to memory they need to learn how to learn, how to find information, and general problem solving. “The goal, then, of education in cyberspace is to produce the learner, the person who has learned how to learn but otherwise knows nothing” (p. 206).


Borgmann (1999) admits that in the 19th century there was a lot of excessive memorizing-“but as has happened in modern culture generally, the line between genuine liberation and indulgent disburdenment was thoughtlessly crossed” (p. 206). Lots of people clamor for the primacy of information technology in the classroom but as Borgmann (1999) points out:

 “Billions of dollars are dedicated to educational hardware and software, but next to nothing is spent to get reliable information on the costs and benefits of the expenditures. But what little is available … suggests that our enthusiasm for computerized learning and research will serve students and scholarship poorly” (p. 207).

Few people will claim that distance learning will require no effort, but “they fail to see that the discipline needed to sustain effort in turn needs the support of timing, spacing, and socializing that have been part of human nature ever since it has evolved in a world of natural information” (Borgmann, 1999, p. 207).


In the study of Ancient Greek texts, a student had to memorize vocabulary and grammar before beginning the laborious task of translating. Information technology has given us two databases, Perseus and Thesaurus Linguae Graecae, which contain the old texts and will provide a complete morphological analysis for every word and an English translation for every sentence. Working through the texts in the traditional way may seem ludicrous. However, as a student of Ancient Greek and Latin, I can assure you that there is no substitute for the labour. I have read the Odyssey several times in English, but the parts that I painfully went over in Greek are what are imprinted in my memory so vividly. The labour, dedication, and concentration of puzzling through the work myself resulted in my understanding. The modern tools available are great, but they should not replace the traditional method-otherwise nothing is truly learned.


In the business world information technology promised a huge growth of productivity, economic prosperity, and a new era of leisure and affluence. “But the information we have shows clearly enough that during the very period when investments in information technology were climbing steeply, productivity gains have flattened out” (Borgmann, 1999, p. 211).


Borgmann (1999) sums up this chapter with a statement that should make librarians smile:

“The inherent fluidity and facility of information technology may move us to consider a radically different way of presenting information…so that it will invite quiet attention, and a manner of making it spare and austere enough to engage memory and imagination. We may find a new regard for an old vessel of information -the book. And when we have recovered the book we may want to restore the place that used to be dedicated to the quietude and concentration the book inspires-the library” (p. 212).


Conclusion: “Information and reality”


In his conclusion, Borgmann (1999) says that information technology has many benefits and has helped to curtail overt misery, but he claims it aggravates a hidden misery that follows from “the slow obliteration of human substance. It is the misery of persons who lose their well-being not to violence or oblivion, but to the dilation and attenuation they suffer when the moral gravity and material density of things is overlaid by the lightness of information. People are losing their character and definition in the levity of cyberspace” (p. 232).


First, I disagree that IT is causing overt misery to wane. Globally, as a species, there is still plenty of overt misery. It is probably true that it is harder to define one’s self and have a strong character in cyberspace. But it is also harder to define oneself in a large pool of people (a large city, a big university) than it is in a smaller group (a small town, a small college). In cyberspace it is easy to be absorbed into the mass of “characters” online. However, when we separate into smaller groups (online or otherwise), we have a better chance of self-definition. Borgmann (1999) goes on to explain that Christians owe their fidelity to persons to the history of salvation and look forward to being forever remembered and having their souls rocked in the bosom of Abraham. I am not sure why he insists on this reliance on Christian religion to support some of his ideas. I think they would be stronger without it, and I am not entirely sure what he means by this ending. It is quite disappointing, because I think he makes some excellent points about the dangers of information technology. We do need to rethink some of our methods of use to reap the greatest benefit and minimize the negatives. Some of this gets overwhelmed by Borgmann’s insistence on Christian examples, especially at the end of his conclusion.



Kellner on Borgmann:


Kellner (1999) states that to succeed in his points, Borgmann needs to “overcome the postmodern dictum that reality…is a social construct… [and] demonstrate that a more fundamental and compelling ‘reality’ is being overcome and displaced by the new … realities of cyberspace, informational technology, and new multimedia, and must persuade his readers to take more seriously and ground their lives in this more primal ‘reality'” (p. 3). Kellner also points out the theological underpinnings of Borgmann’s arguments. I agree that Borgmann does lean heavily on theology, but he also makes some good arguments drawn from classical philosophy. I think Borgmann makes a good attempt to overcome the postmodern idea of reality as a social construct, but I am not sure that he needs to. I think there is a duality of “reality”, or a “relative reality”, for each of us. If an event occurs, many people may recall it differently (some drastically so). For each of them, what they remember is reality for them. Borgmann makes a good stab at “general reality” and how our natural information, the reality of our ancestors, is taking a back seat to technology, technological information, and virtual reality. He does advocate for a balance. However, I don’t think information technology and cyberspace alone are to blame. Technology in general has caused us to live unlike our ancestors did. We are much more mobile now than even our grandparents were. This results in us not having strong ties to a local “real” community. We often rely on technology to help us maintain connections with distant communities, or reach out to cyber communities, because we don’t stay in one place long enough to make strong physical connections. I am not sure it is Internet technology that is to blame, but multiple different technologies over decades or even a century that have changed how we live.
We do need to be reminded of the importance of natural information and Borgmann’s reality, and we need to heed the call to moderate and balance our lives, but cyberspace is only one of several technologies vying for dominance in our lives.

Websites reviewed:

None this week.

Week 14 references:

Borgmann, A. (1999). Holding on to reality: The nature of information at the turn of the millennium. Chicago: University of Chicago Press.


Kellner, D. (1999, September). Review of Albert Borgmann’s Holding on to reality. In RCCS: Resource Center for Cyberculture Studies. Retrieved November 24, 2008, from http://rccs.usfca.edu/bookinfo.asp?ReviewID=56&BookID=57

Week 13

Holding on to reality pt. 2 & Ess 2002:

Holding on to reality pt. 2: “Cultural information -information for reality”

Ch. 6: “Producing information-writing and structure”

Borgmann (1999) states that while natural information is about reality, cultural information is specifically for shaping or forming reality, although it can also be about reality. Borgmann points out that the sketchbook of a medieval mason, Villard de Honnecourt, contains sketches of existing cathedrals (information about reality) as well as designs for building a new church (information for reality).

Regardless of whether the cultural information is for or about reality, Borgmann (1999) insists that it comes about in a different manner than natural information. Natural information appears, conveys its meaning, and then disappears. Cultural information is pulled from reality and has a specific shape and content that lasts. It cannot just appear; it must be created by humans, and this cannot happen before languages and alphabets emerge.

Borgmann (1999) mentions that Plato was skeptical of writing because he felt it would threaten the importance of community. While Plato was thinking of a particular community (his students), there is something in his fear. I believe that, in a way, writing and later communication technologies have partially destroyed the importance of communities. When few people were literate, and when there was not the abundance of written news there is today, people gathered in communities to find out what was going on. This could be a central marketplace or a religious institution. Regardless, people came together in a physical place to exchange news. Later, when everyone (or nearly everyone) could read newspapers, and with the advent of radio and TV, those community gatherings lost importance. One no longer has to go to the market or to church to find out what is going on in the world.

Borgmann (1999) explains how Plato’s analysis of the structure of writing-made up of finite elements (letters)-led to his analysis of reality as also having finite elements (atomic structure). It is this search for structure (the structure of language and the structure of reality) which has inspired many throughout time. Just as building a physical structure is “structured,” so too is the field of information architecture. In seeking structure, geometry was discovered, which revealed the structure of the world’s form, and physics, which revealed the structure of what filled the world, its content (Borgmann, 1999).

Ch. 7: “Producing information-measures and grids”

The search for physical structure tied to the structure of language proved to be disappointing. Microscopic structures did not carry on to the larger picture. Scientific information about a building (its height, and the molecular structure of its components) is quite limiting. It does not tell us everything about the building. “There is, then, an information gap between the structural information that is uncovered by scientific analysis and measurement and the contingent information about the expressive faces and eloquent voices of people and things” (Borgmann, 1999, p. 74). Just because you know my DNA and fingerprints does not mean you know me! Our basic structure leaves out a lot of information.

We started creating our own structure-standardized measurements for example. We started using grids to extract information from reality. Elaborate descriptions of locations using natural landmarks such as trees and stumps are not very helpful over time. When this information is extracted and placed on a grid (mapping) it becomes more meaningful over longer periods of time.

Borgmann (1999) gives an early example of “…how the progress of information technology yields information more instantaneously and easily while at the same time it disengages us from reality and diminishes our expertise, the latter being assumed by the machinery of a device” (p. 79). He is referring to the invention of the marine chronometer, which could be taken to sea so sailors could calculate their longitude in relation to the Greenwich Meridian. The only other option was something called the lunar distance method, which required the sailors to accurately determine the position of the moon and then perform lots of mathematical calculations in order to get their longitude. The chronometer basically did this work for them: one only had to determine the difference between noon in Greenwich and noon at the ship’s location.
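The arithmetic the chronometer made trivial is worth spelling out: the Earth turns 360 degrees in 24 hours, so every hour of difference between Greenwich noon and local noon corresponds to 15 degrees of longitude. A simplified sketch of the calculation (my own illustration, ignoring refinements like the equation of time):

```python
def longitude_from_chronometer(greenwich_hours_at_local_noon):
    """Degrees of longitude west of Greenwich, given the chronometer's
    Greenwich time (in decimal hours) at the moment the sun is highest
    at the ship. The Earth turns 360 deg / 24 h = 15 deg per hour."""
    return (greenwich_hours_at_local_noon - 12.0) * 15.0

# If local noon arrives when the chronometer reads 17:00 Greenwich time,
# the ship is 5 hours behind Greenwich:
print(longitude_from_chronometer(17.0))  # 75.0 degrees west
```

The lunar distance method demanded skilled observation and long calculation; with the chronometer, the sailor’s expertise reduces to one subtraction and one multiplication, which is exactly the disburdening Borgmann describes.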

The invention of printing and engraving formed a new medium of information, allowing many people in many places to access the information-this has now been transferred to cyberspace.

Ch. 8: “Realizing information-reading”

Writing and drawing allow one to take time to record ideas with considerable thought. This cultural information increases our ability to think before we act. An oral society needs to risk trying something or decide not to try it, but with writing we can explore and record ideas, and refine them and other variations, before action is taken.

Borgmann (1999) provides the term “realization of information” to mean the act of following cultural information (recipe, plan, etc.) to produce what is described (cake, building, etc.). He takes this further and says that we are realizing information when we read. Borgmann (1999) believes that “reading” actually means to make sense of signs or to “puzzle out” something (i.e., read people’s expressions to determine mood, read the flow of water to determine the best route for a boat, etc.). Borgmann (1999) compares 18th century women’s reading to escape their lives with virtual reality:

“Under such conditions, the reality conjured up by the reader resembles the virtual reality produced by information technology. The hallmark of both realms is escape and seclusion from the actual world although even then there is a difference. The reader’s world is diffuse and suggestive while a virtual reality is definite and detailed” (p. 91).

In reading we are given a blueprint or outline by the author but we must construct the complete structure with our minds. With virtual reality a machine is constructing this for us.

Later Borgmann (1999) goes on to say that the best reading is like a vision quest and realizes a world view: “Intelligent reading of fiction and poetry, far from being an escape, is a tacit conversation with actual reality”(p.92).

Personally, I think reading is both escape and “a conversation with actual reality”. One can often escape one’s immediate situation while being drawn to thinking about world reality. One can contemplate the reality of physics or similarities in world mythology while escaping from one’s inability to pay one’s bills or the immediate reality of an abusive relationship. It also, of course, depends on what is being read. Cheap thrillers or romance novels are designed for escapism and entertainment.

Ch. 9: “Realizing information-playing”

Borgmann (1999) discusses music in this chapter. He puts forth the idea that although there is most often a definite structure (score) to music, there is also an infinite variety. Each performance of a particular piece can be quite different but it is still the same piece of music with the same basic structure.

Borgmann (1999) also questions the Pythagorean idea that the purer or clearer the structure, the more pristine and “better” the work. If that is true, why do we produce fuzzier works? Why is an impressionistic painting so worthy when a photograph would be more precise and structurally pure? Mathematical structures are quite “pure”, but as Borgmann (1999) points out: “Mathematical structures can be applied to music or cosmology, but they do not of themselves encapsulate the essence of a cantata or the universe” (p. 98). Borgmann (1999) also points out that recordings of music (records, CDs, etc.) disguise the full structure, since recordings “…occlude the place, the time, the ardor, and the grandeur that provide the setting for the musical realization of structure” (p. 103).

Ch. 10: “Realizing information-building”

Construction of a building is the most tangible and public way in which information is realized. The plan and blueprint when followed give rise to something that could potentially become a landmark. Borgmann (1999) discusses how the context of communal information can greatly change over time. The Freiburg Minster was a grand gothic cathedral built starting in 1200 and completed in about 1513. For centuries as it was being built and afterwards it was an important religious site. Now, however, it often seems like a tourist attraction or a concert hall. In some ways its original context has been lost; it now has a new modern context.

Ess “Borgmann and the Borg”:

Ess (2002) points out that Borgmann believes humans are rooted in natural information in our bodies. Thus when we do venture beyond reality eventually our bodies pull us back. This is probably true. Even though people may become obsessed with virtual spaces (social networks like Second Life, or games, etc.) eventually the literal call of nature returns them to reality. They find they must eat or drink or use the bathroom. These are all natural signs (natural information) pulling our bodies back to reality. Of course one could take drastic measures to prevent this (hooked up to IVs and a catheter) but most people would not go to this extreme and would find themselves pulled back to reality.

Disposable commodities:

Ess (2002) cites Borgmann as noting that the online world has the potential to reduce people “to disposable commodities” (p. 31). He is discussing love and eros versus online sexuality. In one sense I agree that the online world can reduce humans to commodities. When one looks at an avatar one sees a character. Much like a game character it can be manipulated (bought, sold, abused, loved, killed, etc.) all virtually. It is a thing not a human. On the other hand, we humans have a long history of treating other humans like commodities (street prostitutes, slaves, etc.). I think to some extent online worlds do make it easier for us to treat other humans as “disposable commodities” but it is by no means the only source to blame as we’ve been doing it before the technology was around.

Borgman and consumerism:

Borgmann (1999) believes that consumption aided by technology is “…unencumbered enjoyment of whatever one pleases. The pleasures of consumption require no effort and hence no discipline” (p. 207). Ess (2002) follows up on this thought by explaining that the person who consumes technological information via computers and the internet is quite removed from the effort, discipline, and skills that are used offline with natural information and cultural information. According to Ess (2002), Borgmann critiques consumerism because it helps foster “…a kind of careless acceptance of technological information and a correlative loss of the skills and engagements that defined the good life in pre-modern worlds…” (p. 35). I partially agree with this. I think that online consumerism and heavy reliance on the internet for gathering information are helping to erode reading skills. Many studies have concluded that people read online very differently than they read printed text. Web designers and Information Architects know to design web pages with important information at the top and do not put vital information in the lower right-hand corner of a web page, as that area is seldom read. Web users tend to skim and don’t do deep reading; some say that younger users who have grown up with the Web lack deep reading skills (Bauerlein, 2008).

Websites reviewed:

None this week

Week 13 references:

Bauerlein, M. (2008). The dumbest generation: How the digital age stupefies young Americans and jeopardizes our future (or, don’t trust anyone under 30). New York: Jeremy P. Tarcher/Penguin.

Borgmann, A. (1999). Holding on to reality: The nature of information at the turn of the millennium. Chicago: University of Chicago Press.

Ess, C. (2002). Borgmann and the borg: Consumerism vs. holding on to reality. Techne, 6(1), 28-45. Retrieved November 17, 2008, from http://scholar.lib.vt.edu/ejournals/SPT/v6n1/pdf/Ess.pdf

Week 12

This week features a lot of reading. We move from Burnett and Marshall’s 2003 Web Theory to Borgmann’s 1999 Holding on to reality.

Borgmann, A. (1999). Holding on to reality: The nature of information at the turn of the millennium. Chicago: University of Chicago Press.

Burnett, R., & Marshall, P. D. (2003). Web theory: An introduction. New York: Routledge.

Web Theory Ch. 8, 9, & Conclusion AND part 1 Holding on to reality:

Web Theory Chapter 8: “Web of informational news”-Burnett and Marshall (2003) explore how the Web has recontextualized news in the search for information. They claim this shift has an important effect on how our society is organized.

Digital news is different from traditional news in 3 ways:

  • Short lifespan
  • Immediacy
  • Capacity to link to other sources

Interactive news allows users to personalize their news sources and pick the time, content, and form that their news will take. Several online options for news delivery are:

  • News databases-this is basic document retrieval; users have a task-oriented goal-they are searching for particular information.
  • Web browsers/WWW-following the newspaper metaphor there are news sites which divide information into sections and allow browsing of several news stories. News stories along with photos and embedded video clips are shown across the screen instead of just one document at a time.
  • News Groups-these have a very narrow subject focus with most items posted by the subscribers of the newsgroup.

There are some important differences between traditional news delivery and digital delivery:

  • Content is an integrated collection of multimedia and multi-source items
  • Provider has less control of packaging and delivery
  • Reader has more control of packaging and delivery
  • Selection, classification, and prioritization functions of editing may be supplemented by 3rd parties for subscriber fees
  • Layout is dependent on display facilities (software & hardware) that the reader provides
  • Delivery is through a 3rd party (telephone, cable company, etc.)

Constraints of informational news: Burnett and Marshall (2003) point out some problems associated with informational news.

Although there is an implication of plenty of news, information, and sources, there are limitations and problems. The biggest problem is that informational news can produce the opposite of its promise of plenty: it can actually reduce, rather than increase, the amount of news content that is viewed and read. Much of this is due to how people read on the Web. Sites that allow for the personalization of news mean that people filter out news and are exposed only to news that fits their profile. Also, when newspaper sources are converted to the Web, they are often shorter in textual content. People are perceived to read less text from the screen than they do in print, so news sites have more pictures and less text than most newspaper copy. Users also tend to surf through and skim online news instead of reading news stories deeply.

This is changing the newspaper business. In the effort to collect and disseminate the news ever faster, 24 hours a day, there is a huge loss of in-depth coverage and analytical perspective. More news stories are oversimplified and lack local perspective.

On the one hand, since most news sources online are supported by advertising (rather than paid subscription), the news content has an abundance of advertisements and a focus on entertainment information and links around news stories. On the other hand, the Web allows the traditional news forms of TV, radio, newspapers, and magazines to expand beyond their technical confines. Websites connected with these traditional sources can be filled with colours, pictures, sound, and hypertext.

Question of quality: “…online publishing brings…the forces of commercial television to all content publishers: the direct drive to attract audiences, the short attention span of readers and the need to produce captivating material. Quality newspapers, with their established tradition of fair and objective reporting, are at the moment forming a necessary counterweight to the more superficial news reporting often brought to us by radio and television. This same quality could be brought to the Internet …For this option to become effective, newspapers need to treat their online versions not merely as an experiment but as a serious part of their publication’s business” (Burnett & Marshall, 2003, pp. 170-171).

Burnett and Marshall (2003) also ask how we can recognize the levels of bias and accuracy in news when it is delivered in this new format. They point out that individual newspapers and TV and radio stations gained reputations for bias and reliability. It is predicted that a small number of large conglomerates will dominate the mass information market. Already Rupert Murdoch owns much of what is currently disseminated.

Again Burnett and Marshall (2003) point out the shift in quality that has occurred: “On one level, the quality of news as it is now reorganized as informational news has questionable values in terms of validity and legitimacy…On a second level, the quality of news has shifted in terms of modality” (p. 172). It has become a “do it yourself” project as users become their own editors and selectors of news stories.

Web Theory Chapter 9: “Web of Entertainment”-in this chapter Burnett and Marshall (2003) focus on the influences that the internet and digital technology are having on the Entertainment Industry-with a special focus on the Music Industry. This industry was hit with the realization that the centralized control it wants to impose is impossible, and file sharing is a daily activity on the Web.

One big impact of digitalization is that the physical boundary of production which was linked to intellectual property has been broken. Cassettes, CDs, etc. are no longer necessary to disseminate music-this makes intellectual property laws tricky and distribution tough to control.

Digitalization has drastically changed the Music Industry. Musicians switched from analog recording to digital. Musicians no longer needed to be in the studio together to work on a recording. A distant musician can lay down a track in a MIDI file and send it via the Internet to the producer. Some innovative musicians are composing in cyberspace.

The Web allows better connections between the musicians and their audience. A Web page can promote an artist, offer audio clips of music, promote concert tours, and even stream video clips of a concert. The Internet can give the musician much more control but could potentially end music labels as we know them-they are fighting back.

As Burnett and Marshall (2003) point out, it is easy to give away music on the internet; it is harder to arrange a system where artists get paid for what they create. Services such as iTunes have developed such a system.

Industry Response: The Recording Industry Association of America (RIAA) claims that 95% of downloaded MP3s are illegal. The music industry is concerned about the amount of piracy of music available on the Web. It is this battle over music on the internet, and the legal fights of the RIAA and Napster, which generate much of the news.

3 Mistakes of the music industry (Burnett & Marshall, 2003):

  • Music industry underestimated the massive effect of the internet and the rapid speed at which it would develop-instead of getting excited about the internet hype in the 1990s, corporate leaders attacked the Internet and strengthened copyright laws. They missed a huge opportunity to bring music to the digital age.
  • Music industry saw the arrival of the MP3 format and Napster as a threat rather than an opportunity-companies feared they would lose their investment in CD pressing plants and that musicians would bypass them and sell directly to their fans.
  • Music industry chose to view file-sharing music fans as criminals rather than early adopters and innovators-the music industry did not create an early virtual marketplace to sell music, so individuals started trading their collections with one another. When the music industry clamped down, closed fan sites, and prosecuted individuals, it alienated a whole generation of music fans.

Gift economy and the loose Web: Some artists such as the Grateful Dead have proven that artists can do well by virtually giving music away. The trade of bootleg recordings often causes fans to purchase more music and to pay for concerts etc.

Lessons for the Web of Entertainment:

  • Content industries need to pay attention to what happened with music-as other products become digitized similar situations might happen with software, film, games, books, etc.
  • The battles over intellectual property rights reach into a variety of different areas. If entertainment companies want to stay competitive with pirate sources, they must accept lower profit margins for digital sales.
  • The fight over standards is the most intense struggle and has the most far reaching effects on many segments including artists, producers, publishers, and consumers.

Digital technology and the web have changed the relationships among producers, distributors, and consumers. In this environment it is important to have proactive policies rather than reactive policies.

The arrival of P2P (peer-to-peer) file sharing has threatened the traditional economy of the music industry. File sharing is an integral part of the internet and will not go away. Five mega companies control the global flow of music, and conglomeration and corporate concentration have spread to other entertainment industries. However, if we do not allow for innovators and people willing to experiment and push the boundaries (hackers, pirates, etc.) then the industry might become irrelevant and disappear.

Web Theory Conclusion:

Unlike past forms of media, the Web has so many exceptions and inconsistencies that determining an “essential nature” is close to impossible. Burnett and Marshall (2003) have worked on the premise of 2 core concepts: the loose web and the cultural production thesis. They encourage people to think about how we routinely use the Web and how it has been normalized and integrated into our society.

The Web offers a variety of connection and communication functions: email, chat, video phone, etc. It is also a source of information both authoritative and anonymous. The loose web allows the user to be involved in private chats and simultaneously be an anonymous searcher of information. The user is also part of the web economy by being a potential consumer of goods. While the user browses and consumes “free” information and communicates with others, they are often bombarded with advertisements for goods and services which can be purchased. The Web serves as a large agora for global peoples. While users might feel anonymous, a great deal of information about them and their needs is being collected from every site they go to and every query they type. The virtual nature of the Web makes it difficult to maintain intellectual property rights. Digital files (music, image, text, etc.) can be easily shared with multiple users instantaneously and are very hard to control. Thus the user is much more active and actually produces the Web itself. It is this ability of the Web to allow users to produce content that has made it such a cultural phenomenon. This is part of the cultural production thesis.

In the field of news there have been huge changes. News is no longer just produced by journalists and consumed by the masses-on the Web it is a blend of professionally produced and user produced. The same is going on in the entertainment field. Digital files are passed from one person to another and modified for their personal use into something different.

The Web has been integrated into every part of our society. We speak the lingo of the Web. “I Googled it” is a common expression now and has become second nature to a large portion of society.

Holding on to reality Intro: Borgmann (1999) introduces 3 types of information:

  • Natural information (Information about reality)-an example is a report; it informs us about what is out there.
  • Cultural information (Information for reality)-a recipe, score, plans-these are instructions that tell us how to create reality (how to make a pie, construct a building, play a piece of music or in the case of a constitution, build a nation).
  • Technological information (Information as reality)-our new technologies rival reality as we have known it. A piece of music on a CD is not telling us about the piece, nor is it telling us how it should be played (as the score does); it IS the music itself and is a rival of reality (a live performance of the piece).

The types of information are layered, but Borgmann (1999) claims that technological information is more than a layer-“a deluge that threatens to erode, suspend, and dissolve its predecessors” (p. 2).

Borgmann (1999) questions whether the flood of information we have been experiencing is good for us. People question whether the new technology will deepen the economic gap between the haves and have-nots, but now it seems as if everyone is in danger of drowning in our information overloaded society. Borgmann (1999) sees a need for both a theory of information and an ethics of information and acknowledges that “a good life requires an adjustment among the three kinds of information and a balance of signs and things” (p. 6).

Holding on to reality pt. 1: “Natural Information-information about reality”

In chapter 1 (The decline of meaning and the rise of information) Borgmann (1999) sums up the history of the term information, from its old form in Latin to 1948, when it became a prominent word. In the middle ages it was reality that mattered and had power over people. In the modern time period “Eloquence and meaning began to drain from reality” (Borgmann, 1999, p. 10).

With the influx of science many spiritual and moral edifices of our society collapsed. But the scientific reality of molecular structures seems to be lacking meaning and structure. Borgmann (1999) cites Donald MacKay as pointing out that information is the ingredient to add order and structure to our world of matter and energy-“information… is that which determines form…”(p. 11).

Borgmann (1999) also points out the difference between having information and knowing. There is a difference between direct knowledge and indirect knowledge:

  • Direct knowledge-have a direct experience (to know someone or some place because one has met them or been there).
  • Indirect knowledge-this is the same as having information about something (to know ABOUT someone, or know ABOUT some place because you have read about it).

As distinguished by Bertrand Russell, there is knowledge by acquaintance and knowledge by description. Information is knowledge by description, or indirect knowledge. Borgmann (1999) believes that this distinction is eroding because there is a decline of meaning. Cultural landmarks dissolve as everyone becomes indifferently related to everyone and everything, a process that culminates in information technology (Borgmann, 1999).

Ch. 2-Nature of information:

Instructive information is about a distant thing. It does not bring the reality of the thing to someone but it does provide someone with the sense of a thing. There are signs or objects about things which provide us with this sense. Borgmann (1999) points out that information is the relation of 5 things: “Intelligence provided, a person is informed by a sign about some thing within a certain context” (p. 22).

Ch. 3-Ancestral information:

Borgmann (1999) claims that: in our modern world we have been confused and distracted by “the intrusion of signs into the presence of things. By now we are so inured to the blight of untrammeled information that it takes a deliberate withdrawal to something like the ancestral environment if one is to notice the damage done” (p. 26).

Ch. 4-From landmarks to letters:

Oral cultures did not have as many physical signs as our modern society but this was balanced by a greater ability to retain information and “a fuller engagement of the person, and a greater intimacy of the context” (Borgmann, 1999, p. 38).

By using counting devices (pebbles, beans, shells, sticks, etc.) people did not have the burden of remembering numbers of things and were removed from the confinement of context. These devices could be taken from one place to another unlike a monument or a wall. However, this also meant a diminishment of information-the devices only served as a reminder of a number, not a symbol of the context. Writing and alphabets combined the best of both worlds. The writing was portable, saved people from having to keep so much in their memory and could convey and symbolize important information.

Writing leaves behind the fluidness and context of speech and provides “a rigid, permanent, and detached piece of information. In fact, writing extricates information from persons and contexts and sets it off against humanity and reality” (Borgmann, 1999, p. 46).

Ch. 5-The rise of literacy:

Literacy has never been resisted. It enters and transforms oral cultures. Borgmann (1999) mentions its transformation in Ancient Greece, its effect in the early middle ages, and its effect in the European conquest of the “New World”. It is a very compact and powerful form of information.

At first people were suspicious that writing would make people forgetful, as they would not be actively using their memory. Plato recognized the fear that people would be mistaken for knowing things merely because they owned books on them (without having read them). However, as Borgmann (1999) relates, “…the Greeks soon came to realize that ownership of a book will enhance one’s reputation only if one has read the book comprehendingly” (p. 49).

Writing does upset the balance of things. In the natural information scenario, natural signs appear and then shrink away-a bend in the river alerts one to a campsite area, but then becomes the river again. Writing creates a massive accumulation of information. Too much accumulation can be confusing.

While writing and literacy can drain some life from an oral society, writing can be extremely powerful and transcending. In oral speaking one takes what is at hand and in the mind at the moment; it can be monumental or it can fall flat. Writing can be a work of art. Borgmann (1999) compares writing to carving marble. It can be worked and reworked and refined and polished until the result is an incredible work of art.

Websites reviewed:

None this week

Week 12 references:

Borgmann, A. (1999). Holding on to reality: The nature of information at the turn of the millennium. Chicago: University of Chicago Press.

Burnett, R., & Marshall, P. D. (2003). Web theory: An introduction. New York: Routledge.

Week 11

Web Theory Ch. 5, 6, & 7:


Web Theory Chapter 5: “Look of the Web”-In this chapter, Burnett and Marshall (2003) examine the changes in appearance of the Web, and some general characterizations. Their focus is on content and design. They study the new web “aesthetic” and identify what past media forms it relied on to form its “look” and how it differentiated itself from its ancestors. These past forms have been layered on to a present form.


The history of the Web and HTML is outlined but the aesthetics of the Web is difficult to establish. Content is meshed with structure, text combines with image. Virtually anything that can be done off the web can be done or at least portrayed on the web.  The Web’s lineage goes back to screen media such as TV and film, but also integrates print media like magazines. The Web aesthetic follows fashionable styles, but is designed for accessibility. Websites follow a pattern of familiarity-they stay in a comfort zone. New sites build on past aesthetics. A magazine website mimics a magazine in its styling. However, the nature of the Web means that paths must lead to other websites so these structures cannot be rigid. Any website that tries to contain its users will soon be passed by.


Web Theory Chapter 6: “Web Economy”-the 1999 holiday shopping season heralded the web commerce era. Previously, consumers were wary of making online purchases. This changed with the enormous presence of eBay and Amazon and the fact that online companies were advertising alongside traditional stores. In 1999, $4 billion was spent online in the U.S. (Burnett & Marshall, 2003).


In the span of 5 years the Web moved from a place that catered to the sale of software and other technologies to selling every product and every service. Burnett and Marshall (2003) compare Web commerce with the traditional agora or marketplace, in which not only goods are exchanged, but also ideas and politics. This Web culture is public but also routinely reveals the private lives of users. The new agora of the Web has resemblances to past marketplaces but is something unique.


The Web also has clear ties to free exchange through its early connection with universities and research. In part the Web is like a library and the concept that once you are a member you get borrowing privileges is quite strong. Online users often resemble traditional library users. They can have a specific need or they sometimes just want to browse. Like library content that links different items by a common connection (subject, author, etc.) the Web also links items or pages by some common thread.


The Web also has connections to government, public service and education. Programs have developed to get schools connected to the Web, not for commerce but for exchange of ideas and communication.


Companies have been searching for the ultimate software or program that every internet user would pay to have on their computers. As the Internet took off, many companies arose and continue to try to trade on the NASDAQ stock exchange. As Burnett and Marshall (2003) point out, few of the internet companies traded there have made profits. They may generate large revenues but most operate at a loss! Amazon is cited as making a profit for the first time in 2001! In April 2000 there was a market correction to readjust Web-related companies that had been overvalued. This was the dot-com crash, in which the NASDAQ lost 25% of its value and many companies went bankrupt.


Not surprisingly the “killer app of the first decade of the Web” is pornography (what a lot of people are willing to pay to have on their computer). Burnett and Marshall (2003) claim that internet pornography sales helped to commercialize the Web.


This chapter examines 4 divisions of work that are related to the Web economy:

  • Internet Infrastructure Layer-Internet service providers, companies that sell equipment to access the internet-includes telecommunication companies (AT&T, Verizon, etc.).
  • Internet Applications Infrastructure Layer-produces software and services to facilitate web transactions, designers of websites and portals (Microsoft, Oracle, Adobe, etc.).
  • Internet Intermediary Layer-workers and activities that help mediate the movement of goods and services-content providers, advertising agencies that produce banner ads, search engines, online brokerages, online travel agents, etc.
  • Internet Commerce Layer-workers of companies that actively sell products and services online-Amazon, direct sales by airlines, etc.


“The Web produces an economy that intersects with all industries, television in contrast tends to intersect with many parts of the industrial culture through advertising and its direct sales through home shopping channels provide a stunted relationship to retailing” (Burnett & Marshall, 2003, p. 120).


Media industries such as TV, radio, and magazines manufacture audiences. These audiences are then sold to advertisers who buy the audience’s time through commercials and ads. This creates a strong media economy where the audience gets the program for “free” but has to sit through commercials, whereas in going to view a film the audience pays to view it. To determine the value of a particular “spot,” there are calculations that determine the size of the audience. The most expensive advertising slots are during the Super Bowl on American network TV. A similar economy is emerging on the Web. Some sites attract more people and thus ads on these sites potentially reach a greater audience. This has led to an abundance of people counters and hit counters to determine how many people visit a site. Companies such as Nielsen do official website counts and guarantee to advertisers and the website owner that the “audience” of the site is a certain number. These statistics reveal a lot about how the Web is used and also how that information is used to create a system where the value of the audience-commodity is determined.


Web Theory Chapter 7: “Web of policy, regulation, and copyright”-Burnett and Marshall (2003) take a hard look at the ideology of information technology which has had a huge impact on the policies and regulations that have developed. In the 1990s governments jumped on the bandwagon and actively supported and helped fund the development of the Web. A lot of money has been invested in this new industry and Burnett and Marshall (2003) question how governments could so readily accept these terms (information technology, global information infrastructure, etc.) and make them the basis of policy action.


It is pointed out that new technologies usually end up being used in ways that the inventors did not dream of. Often metaphors of older technologies are used for new technology but we must be critical of adopting these metaphors as history has shown how far off our initial thoughts about new technology have been.


According to Burnett and Marshall (2003) “the current ideology of information technology first emerged in the [U.S.] when federal government policy favoured scientific and technical research with development in the private and public sectors that would contribute to economic growth” (p. 128). Competitive innovation was given a priority over social welfare; science and technology were glorified. The idea of a new information society with a workforce class of knowledge workers was pushed by many, and the drive for greater productivity led to an increased demand for information and technologies. These trends led to the deregulation of telecommunications. Burnett and Marshall (2003) point out that technical innovation does not drive all the change-instead it is Free Market Capitalism which demands new technologies to drive the economy. This linking of new technology to the free market economy means that all aspects of society become commoditized. In order to adopt the new technology one must adopt free market values and the “commodification of information”.


In discussing the political myth of the ideology of information technology, Burnett and Marshall (2003) point out that studies across the globe failed to show a strong correlation between the investment in technology and a growth in productivity.


Regulation and Copyright: As a new technology the Internet both inspires and threatens traditional institutions. The public identifies threats to the future of the internet as: censorship, accessibility, privacy, and pornography. However, as Burnett and Marshall (2003) point out, the real battle is the national and international legal and government battles over intellectual copyright laws and how they pertain to the Internet.


There are a lot of problems with copyright law. Many claim that the current copyright laws are inflexible and only protect the entrenched interests of corporations and harm everyone else-even those creators the laws were designed to protect. The original copyright law gave protection for 14 years with a renewal option for another 14 years if the author was alive. By Congressional law this has now been changed to the life of the author plus 70 years! (Under that term, a work whose author died in 2000 would not enter the public domain until 2071.) This will keep most works out of the public domain for a century or more. Critics point out that the intended goal of copyright law was to create incentives for being creative-not just for the original creator of a work but also for others who wanted to build on that work in new ways.


Content Regulation: Obscenity, pornography, and “adult” or “objectionable” topics on the Web are easier for children to access than in previous media forms. Legally it is very difficult to define obscene material. Various political, religious, and community groups are concerned with “moral” issues and the “free” access that children have to “objectionable” material online. This has created a huge conflict between Free Speech and Child Protection. Burnett and Marshall (2003) outline some key rulings from 1996 to 2001, but for libraries the Supreme Court’s 2003 decision concerning the Children’s Internet Protection Act (CIPA) (see United States, et al., appellants v. American Library Association, Inc., et al. No. 02-361) has had a huge impact. CIPA states that any libraries that receive Federal funds to help pay for internet access (many libraries are eligible for special funding to help bridge the digital divide) must have internet filtering software installed on their computers or they lose the funding. This software blocks sites even from adult users. Libraries have had to make important budget and access decisions, including refusing the funding and keeping full internet access, or accepting the funding and having policies to allow adult users to ask for access to needed sites which are blocked.




Websites reviewed:



None this week.



Week 11 references:


Burnett, R., & Marshall, P. D. (2003). Web theory: An introduction. New York: Routledge.


United States, et al., appellants v. American Library Association, Inc., et al. No. 02-361. (2008). In U. S. Supreme Court Cases, Lawyers [Web]. Retrieved October 9, 2008, from LexisNexis database.

Week 10



Web Theory Ch. 2, 3, & 4:


Web Theory Chapter 2: “The web as information network”-Burnett and Marshall (2003) identify two elements of the web: first, that it is made up of information, and second, that it is significant because that information is distributed into a network. The digitalization of information creates a tension between two opposite positions. Because all information can be reduced to binary code (digitalization), the web can expand infinitely as it absorbs previous forms of information distribution (books, TV, radio, etc.). On the other hand, the cybernetic network of the web is a system of control (all networks are to some extent control systems). This means that the cybernetic system and the information therein are subject to control and surveillance. Because of the network structure the center is hard to determine, but information does flow in certain directions, and some of these flows recreate or represent traditional power structures.


Like other networks in the past, the web can spread information to a wide audience. In TV and radio networks there was a great deal of centralization: the same message could be sent to a national audience at the same time. Before cable, when all networks were broadcasting the same thing, one was assured that most of the nation was simultaneously getting the same message. The web works in a similar way, in that the same message can be sent out and broadcast to a wide audience. However, because of the vast amount of information on the web, one cannot be assured that everyone is “tuning in” to the same message. Also, because so many people have access to producing information on the web, it can be difficult to verify the accuracy and authority of the information. In more closed networks (TV, radio) only certain people were allowed to produce and broadcast the information.


Web Theory Chapter 3: “The Web as Communication”-The Internet has merged 3 traditional types of communication and added a 4th:

  • Interpersonal (one-to-one)-conversation, telephone, email.
  • Mass communication (one-to-many)-TV, radio, newspaper, book.
  • Computing (many-to-one)-databases (and possibly comments on blogs or websites).
  • Many-to-many-web pages.


Other ways of defining or categorizing communication are discussed. One distinguishes time and place instead of just senders and receivers:

  • Synchronous (face-to-face)-sender and receiver are in the same place and message is received at the same time.
  • Asynchronous-sender and receiver are in the same place but message is received at a later time (note left on door).
  • Synchronous distributed-sender and receiver are in different places, but the message is received at the same time (telephone, online chat).
  • Asynchronous distributed-sender and receiver are in different places and the message is received at a later time (message on answering machine, email, mail).
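The time/place matrix above maps neatly onto a two-key lookup. The sketch below is my own illustration; the tuple keys and example media in the comments are mine, not Burnett and Marshall's:

```python
# Classify communication by whether sender and receiver share
# a place (distributed or not) and a time (delayed or not).
COMM_MODES = {
    # (distributed, delayed): category
    (False, False): "synchronous",               # face-to-face
    (False, True):  "asynchronous",              # note left on a door
    (True,  False): "synchronous distributed",   # telephone, online chat
    (True,  True):  "asynchronous distributed",  # email, voicemail, mail
}

def classify(distributed: bool, delayed: bool) -> str:
    """Return the communication category for a place/time combination."""
    return COMM_MODES[(distributed, delayed)]
```

So an online chat, for example, classifies as `classify(True, False)`, i.e. synchronous distributed.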


The Web also “networks” people. It encourages a lot of weaker connections between people who are not in the same physical place. Face-to-face interaction is no longer the only way to maintain relationships. Much like letters and phone calls, computer connections maintain those loose human links. Burnett and Marshall (2003) claim that these weak connections are more socially diverse and thus provide wider ranges of information. They cite Wellman and Hampton (1999), who agree that this connectivity reduces the pressures of identity and of belonging to groups while also increasing opportunity and globalization through social networking.


The Web is all about connections and communication in a multitude of different modes. The Web can be one-to-one and a mass mediated communication system at the same time. It is both a place of production and consumption. These different modes make the borders much looser. The way people use this loose web is quite different from other traditional forms of communication. As Burnett and Marshall (2003) point out, “The loose Web is an interconnected media and communication mix that produces simultaneously audiences, community, conversation, and connection.” All of these get blurred together. People on the Web have started blurring personal and professional identities as these become linked.


Web Theory Chapter 4: “Web of Identity”-The web of identity is the role that the Internet plays in constructing identities. The nature of the Web enhances the role of the user: people are no longer just an audience. As users they create, interact, and consume. Personal websites and social networking sites allow a user to create an online identity, which is becoming more and more important. Often the online identity is seen before any physical identity.


Burnett and Marshall (2003) cite Turkle (1995), who discusses how the Web is causing fundamental shifts in how we create and experience human identity: “In the real-time communities of cyberspace, we are dwellers on the threshold between the real and the virtual, unsure of our footing, inventing ourselves as we go along” (p. 63).


Different terms for people using the Web are also examined:

  • Surfer-“surfing the net” comes from TV channel surfing.
  • Browser-a term for both the software and the person; implies less goal-directed activity.
  • User-expresses the range of types of engagement with the Web and implies the ability to produce information as well as just “browsing” it.


According to Burnett and Marshall (2003), an important aspect of the Web is that production is never completely separated from reception. They also compare the phenomenon of personal web pages with the early 1900s phenomenon of personal images taken with personal cameras, especially the ‘Brownie’ camera.


With the ability to both create and receive, the Web has become an active medium for the construction of identity. Burnett and Marshall (2003) conclude that the Web challenges boundaries around several types of identities:

  • Anonymity-part of the Web's allure is that a web identity is not necessarily connected to one's physical identity. Avatars allow one to “become” someone or something that may not be physically possible.
  • Language-English was and is currently the lingua franca of the Web, but while 66% of web content is in English, 43% of users do not use English while online. The Web expresses both work and leisure identity and as the Internet expands rapidly in China and other Asian countries, the language balance may shift.
  • Narcissism-personal web pages are expressions of self identity and demonstrate the blending of public and private identities on the Web. To gain desired audiences, some web users push the intimacy level by having 24 hour web cams to display their identity to the world.
  • Gender-gender is at the center of identity construction. The history of computer communication has been male dominated (much like society), and in the beginning the web had the same gender bias. However, in countries where a high percentage of the population uses the Internet, this bias has disappeared. Some researchers suggest that the public sphere is being feminized as is consumer culture. Burnett and Marshall (2003) don’t mention the fact that the web allows users to play with gender identity. A physical male can become a virtual female online and vice versa. They do point out that we must think through classical boundaries about the gendering of contemporary culture.
  • Collective identities-the Web is a hub for collective identities. New political movements develop as do other networks of people. These networks can develop into associations where these groups have a global audience.



Websites reviewed:

None this week



Week 10 references:


Burnett, R., & Marshall, P. D. (2003). Web theory: An introduction. New York: Routledge.

Week 9



Information Arch. Ch. 19 & 20 AND Web Theory Ch. 1:


IA Chapter 19: “Information Architecture for the Enterprise”-An enterprise is usually a large, physically distributed organization. However, in reality an enterprise is any organization in which “one hand doesn’t know what the other is doing”. It could be a large company with many subdivisions, or it could be a smaller company that is made up of departments that once used to be independent. Whatever the reasons, enterprises can be quite confusing. Often there is a battle between people who are pushing for centralization and those who are pushing for more autonomy for separate departments. This is often seen in websites which have not set a style for the enterprise (separate pages from different departments all look different). There can be a great deal of duplication and findability issues across diverse semi-autonomous pages.


Enterprise IA pushes for centralization at least for the website. The ultimate goal is to help users find what they need on a humongous site. If every department has the same template for web pages, the users will be more comfortable. Hopefully a search program can be utilized to help users find diverse information content across the whole enterprise. Much of the tough work is trying to integrate metadata and indexes. Some benefits of centralization include:

  • Increased revenues
  • Reduced costs
  • Clearer communication
  • Shared expertise
  • Reduced likelihood of corporate reorganization
  • Centralization is inevitable anyway

(Morville & Rosenfeld, 2006, pp. 395-396)


Centralization isn’t everything, but it usually helps. The real IA goal is to “identify the few most efficient means of connecting users with the information they need most” (Morville & Rosenfeld, 2006, p. 397) whether that is centralizing or offering a content tagging option for employees.


IA Chapter 20: “MSWeb: An Enterprise Intranet”-This chapter is a case study of how Microsoft improved its intranet through three taxonomies: indexing vocabulary, schema, and category labels.


User challenges: The biggest challenge for users is that MSWeb is huge (more than 3 million pages). The information comes from 74 countries and is produced by and for more than 50,000 employees. There are more than 8,000 intranet sites!! Users spend much of their time struggling to find the information they need amid all the other bundles of information.


  • Users don’t know where to begin
  • Inconsistent navigation systems
  • Different labels used for the same concept
  • Same label used for different concepts


IA challenges: Integrating over 8,000 sites is quite difficult. There was no way to force independent sites to register, so the MSWeb team had to create incentives for those sites to participate in their new model. All of these independent sites, each with its own IA, labels, and vocabulary, had to be integrated with all of the others.


Taxonomies: The MSWeb team came up with three taxonomies to improve searching, browsing, and managing the information:

  • Descriptive vocabularies-controlled vocabularies that describe a specific domain (geography, or products and technologies) and include variant terms for the same concept (much like a thesaurus).
  • Metadata schema-collections of labeled attributes for a document (like a library catalog record).
  • Category labels-sets of terms to be used for the options of navigation systems.


Descriptive vocabularies: These are manually indexed terms that supplement the automated indexing. The MSWeb team had to decide which vocabularies would be most important to develop. They chose: geography, languages, proper names, organization and business unit names, subjects, and product, standards, and technology names.
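A descriptive vocabulary of this kind behaves like a thesaurus lookup: variant terms resolve to one preferred term. The entries below are hypothetical examples of my own; the chapter does not reproduce the actual MSWeb vocabularies:

```python
# A controlled vocabulary maps each preferred term to its variant
# terms, much like a thesaurus. These entries are made up for
# illustration only.
GEOGRAPHY_VOCAB = {
    "Redmond, WA": ["Redmond", "Redmond Washington"],
    "United Kingdom": ["UK", "Great Britain", "Britain"],
}

def preferred_term(term, vocab):
    """Resolve a variant term to its preferred form; unknown terms
    pass through unchanged."""
    for preferred, variants in vocab.items():
        if term == preferred or term in variants:
            return preferred
    return term
```

Indexing with `preferred_term` means a search for “UK” and a search for “Great Britain” land on the same resources, which is exactly the findability problem the vocabularies address.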


Metadata schema: The schema describes which metadata should be used to describe a content resource. The team borrowed from the Dublin Core Metadata Element Set (http://dublincore.org), stripping it down so that content creators/owners would be able to describe their own information so it could be found. The schema has a required core set of fields but can also take extensions should the need arise in the future. Morville and Rosenfeld (2006, p. 435) list the core fields:

  • URL title-name of resource
  • URL description-brief description of resource
  • URL-address of resource
  • ToolTip-text displayed on mouseover
  • Comment-not seen by user, for administrative management
  • Contact alias-name of person responsible for resource
  • Review date-date resource should be reviewed, default is 6 months
  • Status-default is active (others include deleted, inactive, etc.) for content management


Optional extended fields:

  • Strongly recommended-this flags resources which are highly appropriate
  • Products-terms from the product, standards, and technology vocabulary describe the subject matter.
  • Category label-terms from the official vocabulary of category labels, ensuring the resource is listed under the appropriate label in the navigation system.
  • Keywords-terms from the descriptive vocabularies which can be used to describe the resource.
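Taken together, the core and extended fields above could be modeled roughly as follows. The field names come from Morville and Rosenfeld's lists; the Python shape (a dataclass, string dates, list-valued extensions) is my own illustrative choice:

```python
from dataclasses import dataclass, field

@dataclass
class MSWebRecord:
    """Sketch of the MSWeb metadata schema: required core fields
    plus the optional extensions. Illustrative only."""
    # Required core fields
    url_title: str          # name of resource
    url_description: str    # brief description of resource
    url: str                # address of resource
    tooltip: str            # text displayed on mouseover
    comment: str            # administrative, not seen by users
    contact_alias: str      # person responsible for the resource
    review_date: str        # when to review; default is 6 months out
    status: str = "active"  # or "deleted", "inactive", etc.
    # Optional extended fields
    strongly_recommended: bool = False
    products: list = field(default_factory=list)
    category_labels: list = field(default_factory=list)
    keywords: list = field(default_factory=list)
```

The required core keeps cataloging cheap for content owners, while the extensions hook a record into the vocabularies and navigation labels when someone takes the extra time.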


Category labels: These labels for categories in the navigation system attempt to give users navigational context. Catalogers can see a list of acceptable labels and then see a description of the different nodes. Other intranet site owners asked the MSWeb team for help with their navigation, and the team complied, offering its user-centered design process as a service for other site owners.


Technology and tools: The MSWeb team uses VocabMan, Metadata Registry (MDR) and URL Cataloging Service (UCS) to work their magic.


Metadata Registry: used to store, manage, and share taxonomies on the MSWeb intranet. VocabMan: provides access to the MDR and allows the creating and editing of taxonomies. URL Cataloging Service: used to create records based on the metadata schema, category labels, and descriptive vocabularies.



Web Theory Chapter 1: “Web of Technology”-in this first chapter Burnett and Marshall (2003) examine the history of the Web and some theories of technology that have been applied to this emerging technology. Theorists such as Howard Rheingold discussed the importance of communication through space and felt that the Internet as a communication tool would engage the public and foster an era of greater democracy.


Burnett and Marshall (2003) also focus on the ideology of technology, whereby any new technology is seen not only as natural and normal, but as what society needs to make it better. Technology is seen as the savior of what is ailing society. Also discussed is technological determinism, in which technology has, or is given, great power over a culture. This technological determinism has created both utopian and dystopian views of technology in our society. We see this in our science fiction: in some cases technology is everywhere and provides good services and a good life (see some of the world's fairs and their look into the future). On the other hand, there are many examples in science fiction of technology being the downfall of the human race (the Terminator movies, etc.).


H.A. Innis was a Canadian political economic historian who believed that centre-periphery economic relations determined transportation and communication routes (focusing on new Canadian territories). His theory was that each communication technology was biased either toward spatial or temporal concerns, and that this bias would shape the nature of society. A medium of communication was focused either on the preservation of information (time-based/temporal) or on the wide distribution of information (space-based/spatial). An oral culture maintained and preserved traditions, but because of its oral communication the size of its culture was limited. A space-based culture used communication technology that allowed for wider dissemination; the development and use of papyrus and then paper allowed a culture to dominate a greater territory.

The Web is less centered than TV. If one wants to get an immediate message out to everyone in the country, TV is better than the Web: TV can be state controlled (all networks can broadcast the same information if needed) and the message doesn't get lost as it would on the Web. However, the Web can disseminate information farther than TV. TV is nationally bounded (each country has its own TV stations, etc.) whereas the Internet is worldwide. If something big happens in one country, once the news makes it to the Web, it can spread around the world. Because the Internet developed in English-speaking countries, the Web has been dominated by English documents and English has been reinforced globally; whether this will continue to be true only time can tell.


Technology has become hidden behind its use and function. In the early days of computers and of the Internet, many of those using the technology understood how computers worked and used their codes and languages. Now, most people using this technology have no clue how the computer they use actually works. Most people never use or see computer code; we have designed all sorts of visuals and metaphors to take the “tech” out of technology (windows, desktop, files, etc.). As this technology has integrated into our everyday lives, it becomes normal and mundane and loses some of its power to transform.

We can see how the technology of the Web has integrated past, present and future. The Web began as part of military research and was connected to universities and their research. It also has strong links with media, as it became a forum to re-broadcast or re-publish news items. The Web has inherited the transformation of space much as the telegraph and telephone did in the past. However, unlike other technologies, the Web is not defined by one specific use. A toaster exists solely for the purpose of browning bread; the Web and Internet have many purposes. They are communication channels, news feeds, entertainment, commerce, socialization networks, education and research areas, etc. Because of this, the Web has insinuated itself into our everyday lives and seems to be continuing that trend. It has been said that in the future our Web or Internet identity will be more important than our physical identity: how we are presented online will be seen first, before we are met in person. This has huge ramifications.



Websites reviewed:


Digital Web Magazine http://www.digital-web.com/

InfoCamp http://asistpnw.org/infocamp2007/

Intranet Roadmap http://www.intranetroadmap.com/

Adaptive Path http://www.adaptivepath.com/

Information ArchiTECH http://www.informationarchitech.com/


Digital Web Magazine http://www.digital-web.com/


Digital Web Magazine is a web-based magazine for professional web designers and those who develop websites, such as information architects. Most of the published work is contributed by web authors.


As expected, Digital Web Magazine has a very user friendly website. Clearly labeled tabs at the top guide users to sections for contributing, subscribing, contacting, and information on various events and about the magazine. On the left side bar one can search for articles by: topic, date, author, title, or type. There is also a search box and highlighted areas for events going on now, and current news.


I took a look at an IA article, “Getting The Most Out of Your Library” (Hicks, 2008). In this article Hicks (2008) steers web developers and information architects toward physical libraries and points out the wonderful things that brick-and-mortar libraries can do for these patrons. He draws an analogy between the library and the open source movement before software. There are suggestions for finding items at university libraries and for asking for interlibrary loans. Hicks (2008) even suggests that these patrons use the LibX Firefox toolbar that many libraries offer (and if they don’t, he tells them to suggest to the librarians that they offer it). This toolbar auto-detects ISBNs on web pages and will alert library users on Amazon, Barnes & Noble, Google, etc. that the item may be available at their local library. It offers a direct ISBN search at the home library from an internet site. He does offer renegade suggestions, such as going into archives or rare book rooms with a hidden digital camera to illegally take a photo of a rare book, but he is attempting to educate folks who are used to only using the Internet to see what is available at the library.
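Hicks does not describe how LibX's detection works internally, but the general idea of spotting ISBN-like strings in page text can be sketched with a simple pattern match. The regex and length check below are my own illustration, not LibX's actual code:

```python
import re

# Loose pattern for ISBN-like strings: an optional 978/979 prefix,
# then digit groups separated by hyphens or spaces, ending in a
# digit or X check character.
ISBN_RE = re.compile(
    r"\b(?:97[89][- ]?)?"          # optional ISBN-13 prefix
    r"\d{1,5}[- ]?\d{1,7}[- ]?"    # group and publisher parts
    r"\d{1,7}[- ]?[\dXx]\b"        # title part and check character
)

def find_isbn_candidates(text: str) -> list:
    """Return substrings that look like ISBNs, i.e. 10 or 13
    characters once hyphens and spaces are stripped."""
    hits = []
    for match in ISBN_RE.finditer(text):
        stripped = re.sub(r"[- ]", "", match.group())
        if len(stripped) in (10, 13):
            hits.append(match.group())
    return hits
```

The length filter is what keeps shorter digit runs (phone numbers, prices) out; a real implementation would also validate the check digit.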



InfoCamp http://asistpnw.org/infocamp2007/


InfoCamp is a conference that focuses on user-centered design and information science. IA, user experience design, interaction design, usability, and information science research were all covered by the conference. The 2007 conference was held in Seattle and was sponsored by ASIS&T (American Society for Information Science and Technology) among others. The theme, cross-pollinating the information ecosystem, focused on how librarians, information scientists, information architects, usability engineers, and interaction designers can enable the exchange of ideas across these many disciplines and in different industries.


One session, led by Nick Finck, was on mobile web design: “The Skinny on the Mobile Web”. The PowerPoint slides are available and provide some insight into how websites need to be designed for multiple viewing platforms.



Intranet Roadmap http://www.intranetroadmap.com/


Intranet Roadmap is a portal and guide designed to help those who need to create or improve a corporate intranet. It provides all sorts of resources, such as links to intranet-related information and articles on intranets. It basically acts as a massive filter, focusing intranet-related web sources in one place rather than having the user turn to Google, Yahoo, or another search engine and then wade through tons of information. This company has been a consultant for companies developing intranets, and they have put together a tutorial reflecting on what they have learned from their experience. This information can be quite useful for someone just starting to put together an intranet, or to give an idea of what would be required so a pitch could be made for one. It appears to have been last copyrighted in 2006. I don’t know if it has been updated since, but as technology changes quickly, some of the sources, and especially the software resources, may be outdated.



Adaptive Path http://www.adaptivepath.com/


Adaptive Path is a team that focuses on user experience. They work for clients who want to be responsive to their users, offering experience strategy and design consulting, workshops, in-house training, and ideas. They are publishing a book, Subject to Change, which is supposed to teach companies a new way of thinking about creating products and services in a world that is rapidly changing. Some of their suggestions include thinking of people not as market demographics but as they are: really understanding your user. They guide companies toward forming an experience strategy and thinking about the experience they provide their customers.


Rutter (2008) has an essay on the website about going back to analog tools for visual development. The team started doing more sketching with pen and paper and found that it provided more visibility for design solutions. In collaborative work sessions there was more engagement with the paper sketches than with printouts designed on a computer. It is quite interesting to see the different things they are doing with hand sketching (showing people in spaces, mind mapping, showing abstract concepts, etc.). It just goes to show that even in this technological age, the act of creativity sometimes needs something more basic and more flexible. If you become adept at quick hand sketching, you can rapidly change a design idea for a client in a workshop and quickly test out different ideas and CHANGES, rather than having to go back to a computer program to redesign something for the client to then look at and think about again.



Information ArchiTECH http://www.informationarchitech.com/


This is the website of a web design company from Louisiana which specializes in findability. Their site seems fairly clear with a global navigation across the top for: home, about us, services, solutions, and contact us. In their left sidebar, they list services they offer: information architecture, web content writing, web site design, category management, search engine optimization, local navigation, case studies, and links/resources.


I went to the information architecture section and read their definition of IA: “Information architecture is the art and science of building virtual structures for a new or existing systems, with the aim of minimizing the distance between the user and the information he/she is seeking.”


This is all well and good, so I tried to look at the IA articles (link on left sidebar). I went to the articles page, where I was told that:

The articles listed here cover a wide range of topics related to information architecture, ranging from specific issues such as search engine optimization, category management and site search technology, to broader issues such as the philosophy and ethics of the information architect. Please check back often as these pages are frequently updated.

I am glad they ask their users to check back, as there are no articles listed on the page! It is good that they have a section for articles, but a bit mystifying that not one article is listed. I know there are IA articles out there, so I am not sure what is going on.

Week 9 references:

Burnett, R., & Marshall, P. D. (2003). Web theory: An introduction. New York: Routledge.

Hicks, W. (2008, August 12). Getting the most out of your library. Digital Web Magazine. Retrieved October 21, 2008, from http://www.digital-web.com/articles/getting_the_most_out_of_your_library/

Morville, P., & Rosenfeld, L. (2006). Information architecture for the world wide web (3rd ed.). Sebastopol, CA: O’Reilly.

Rutter, K. (2008, September 8). The joy of sketch: Explorations in hand-crafted visuals. In Adaptive Path. Retrieved October 23, 2008, from Adaptive Path Web site: http://www.adaptivepath.com/ideas/essays/