

This was originally posted at my blog Cyborgology – click here to view the original post and to read/write comments.

Chris Baraniuk, who writes one of my favorite blogs, The Machine Starts, is experiencing the current riots in London firsthand (they have spread to other cities). His account of both the destructive rioting mobs and the mobs trying to clean up the aftermath illustrates the complex pathways through which what I have called "augmented reality" takes form. [I lay out the idea here, and expand on it here]

We are witnessing both the destructive and the constructive "mobs" taking form as "augmented" entities. The rioters emerged in physical space and likely used digital communications to organize more effectively. The "riot cleanup" response approached augmentation from the reverse path, organizing digitally in order to come together and clean up physical space. Both "mobs" flow quite naturally back and forth across atoms and bits, creating a situation where, as so often happens, the online and offline merge into an augmented experience.

The rioting mob first realized itself in physical meat-space…


There is an important space between old and new media. This is the grey area between (1) the top-down gatekeeping of old media that separates producers and consumers of content and (2) the bottom-up nature of new, social media where producers and consumers come from the same pool (i.e., they are prosumers).

And in the middle are projects like Global Voices, what might be called curatorial media: content is produced socially by the many from the bottom up and is then mediated, filtered, or curated by some old-media-like gatekeeper.

The current protests in Syria serve as an important example of how curatorial media works, especially because foreign journalists have been banned from the country, creating a dearth of information for old media. Alternatively,…

The 2012 presidential race is beginning to take shape, and it is interesting to see how social media is being used differently by the candidates. Obama kicked off his re-election campaign on YouTube and is at Facebook today with Zuckerberg to do a Facebook-style town-hall Q&A. Mitt Romney (R-MA) announced his presidential bid on Twitter, and Tim Pawlenty (R-MN) announced on Facebook and even created a Foursquare-style gaming layer where supporters earn points for participating in his campaign. I'll be analyzing how social media is used throughout the 2012 cycle, but I'd like to start all of this with the question: who will be our first social media president?

FDR became the radio president with his famous "fireside chats" and JFK the television president with his image-centered debates with Nixon. Many consider Obama the first social media president due to his massive fundraising and organizing efforts during the 2008 campaign using the web (though Howard Dean was there four years earlier – remember his use of meetup.org). However, now that Obama has been in office for more than two years, has he really used the social web effectively in interesting new ways? The New York Times states that Obama treats the Internet like a "television without knobs," using it primarily to simply upload videos for us to consume. Obama-as-president has thus far been a Web 1.0 leader instead of embracing the Web 2.0 ethic of users collaboratively and socially creating content.

To put it another way, go to Obama's Twitter account and ask yourself whether he is really using the medium effectively. It is clearly…

This post is co-authored with PJ Rey and originally appeared on our blog, Cyborgology.

On Jan. 8, 2011, Jared Lee Loughner allegedly shot Congresswoman Gabrielle Giffords (D-AZ) and 18 others, resulting in six fatalities. This event has drawn attention to a number of new and important roles social media has come to play in our society, including how information is gathered, how political rhetoric has changed, and how these sites handle the profiles of those involved in high-profile tragedies.

Profiling the Suspect
Media coverage (i.e., cable, network, radio, and newspapers) of the event represented a broader trend in contemporary journalism: almost immediately, news outlets began to piece together a profile of this previously unknown figure using almost exclusively Loughner's social media presence (i.e., Facebook, MySpace, YouTube and, most recently, online gaming discussion boards). Even though his MySpace and Facebook profiles were taken down by those sites, screenshots are available, including one showing a photo of a gun resting on a U.S. history textbook as his profile picture.

The digital documentation of our lives via social media offers an easily accessible, autobiographical source for journalists and anyone else who is interested. Yet there is a risk in basing our impressions solely on this information. Loughner's image of himself is certainly not objective and may very well be inaccurate. News outlets, however, face pressure to "get the scoop," so they reported on Loughner based heavily on this self-presentation rather than interviewing a range of people in his life to construct a more holistic picture.

The Post-Shooting Political Debate
In the wake of the tragedy, a debate emerged over the intensity and tone of contemporary political rhetoric. The political right in general, and Sarah Palin in particular,…

The New York Times recently ran a story about how "The Web Means the End of Forgetting." It describes a digital age in which our careless mass exhibitionism creates digital documents that will live on forever. The article is chock-full of scary stories about how ill-advised status updates can ruin your future life.

These sorts of scare-tactic stories serve a purpose: they provide caution and give pause regarding how we craft our digital personas. Those most vulnerable should be especially careful (e.g., a closeted teen with bigoted parents; a woman with an abusive ex-husband). But after that pause, let's get realistic and critique the Times article's sensationalism by acknowledging that, with some common sense, the risks for most of us are actually quite small.

1-Digital Content Lives Forever in Obscurity

The premise of the article is that what is posted online can potentially live on forever. True, but the reality is that the vast majority of digital content we create will be seen by virtually no one. Sometimes I think these worries stem from a vain fantasy that everything we type will reach the eyes of the whole world for all time. Sorry, but your YouTube video probably isn’t going viral and few people will likely read this post.

What interests me about digital content is that it is on the one hand potentially immortal and on the other exceedingly ephemeral. In fact, it is precisely digital content's immortality that guarantees the very flood of data that makes any one bit exceedingly ephemeral, washed away in the deluge of user-generated banality. Jean Baudrillard taught us that too much knowledge is actually no knowledge at all because the information becomes unusable in its abundance. This is what millions of people tweeting away amounts to: an inundation of data, most of which will never be read by many people and will probably be of little consequence [edit for clarification: I like Twitter].

If anything, one problem with social networking applications like Facebook or Twitter is that they do a poor job of archiving past content and making it searchable. A quick glance at Facebook reveals that I cannot search a friend's history of status updates. Looking at my Twitter stream, I cannot even find my oldest tweets. My digital content may live forever, but it does so in relative obscurity.
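To make concrete what "archiving and making searchable" would look like, here is a minimal sketch in Python; the exported archive is hypothetical (a list of dated updates invented for the example), and the point is simply that a trivial keyword search over one's own past posts is exactly what these platforms do not offer.

```python
# Hypothetical archive of one's own status updates, e.g. parsed from a data
# export; the dates and texts below are invented for illustration.
from datetime import date

archive = [
    {"date": date(2009, 3, 14), "text": "Grading papers all weekend."},
    {"date": date(2010, 1, 27), "text": "Watching the iPad announcement."},
    {"date": date(2010, 7, 25), "text": "Reading the NYT piece on forgetting."},
]

def search_updates(archive, keyword):
    """Return past updates whose text contains the keyword (case-insensitive)."""
    keyword = keyword.lower()
    return [post for post in archive if keyword in post["text"].lower()]

for post in search_updates(archive, "ipad"):
    print(post["date"], post["text"])
```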

2-Flaws are Forgivable, Perfection is Not

The article draws from a quote about how the immortality of digital content…

“…will forever tether us to all our past actions, making it impossible, in practice, to escape them” […] “without some form of forgetting, forgiving becomes a difficult undertaking.”

I disagree. As we increasingly live our lives online, always indexable, it should be expected that many of us will have some digital dirt on our hands. Rather than leaving us unable to forgive each other for not being perfect, these new realities will change our expectations. I suspect being an imperfect human being will be just as forgivable as it always has been.

In fact, it very well might be the too-perfect profile that is unforgivable. As any politician knows, you cannot look too clean and sterile, or else you come off as phony. A too-polished, too-perfect profile is increasingly a sign that you are not living with technology and making it part of your life -and thus makes you seem a bit technologically illiterate. The overly manicured profile screams that you are not out there using social media tools to their full potential.

In conclusion, use scare-tactic articles like the one being commented on here to remind you that what you say indeed might come back to haunt you. But do not go overboard worrying about and scrubbing your digital presence. Yes, riding your bike or eating chicken might get you killed (potholes and salmonella scare me more than Googling my name), but we are willing to take these risks because they are exceedingly small. Be smart, don't post about your boss, but, in any case, the vast majority of people posting status updates about their job today will not get fired tomorrow. ~nathan

On this blog, I typically discuss the intersection of social theory and the changing nature of the Internet (e.g., using Marx, Bourdieu, Goffman, Bauman, Debord and so on). In a chapter of the new third edition of the McDonaldization Reader edited by George Ritzer, I argue that what we are seeing is a general trend towards the deMcDonaldization of the Internet.

The shift from a top-down, centrally conceived and controlled "Web 1.0" to a more user-generated and social "Web 2.0" is a shift away from the dimensions of McDonaldization as Ritzer defines the concept. For example, a corporate-generated website that does not allow user-generated content is paradigmatic of Web 1.0. The site is produced efficiently by a few individuals, making it predictable, controllable and relatively devoid of outside human input. Web 2.0, alternatively, is not centered on the efficient production of content [I've made this argument previously]. User-generated content is, instead, produced by many individuals, making it much less predictable –evidenced by the random videos we come across on YouTube, the articles on Wikipedia, or -perhaps the best example- the downright capricious and aleatory experience of Chatroulette. The personalization and community surrounding social networking sites are hard to quantify and make the web far more humanized. Thus, Web 2.0 marks a general deMcDonaldization of the web. Examples of these points are further illustrated in the chapter.

This conclusion also counters the thesis that McDonaldization will only continue to grow – running against the "grand narrative" that Ritzer (and Weber before him) put forth.

Finally, further consideration needs to be given to the various ways in which Web 2.0 remains McDonaldized, rationalized and standardized. Many of the sites that allow for unpredictable user-generated content do so precisely because of their rationalized and standardized -and thus McDonaldized- underlying structure. In many ways, our Facebook profiles all seem to look and behave similarly. The rationalized and standardized structures of Web 2.0 seem to coexist comfortably with the irrational and unpredictable content they facilitate. ~nathanjurgenson.com

by nathan jurgenson

I've written many posts on this blog about the implosion of the spheres of production and consumption, indicating the rise of prosumption. This trend has exploded online with the rise of user-generated content. We both produce and consume the content on Facebook, MySpace, Wikipedia, YouTube and so on. And it is through this lens that I view Apple's latest creation, announced yesterday: the iPad. The observation I want to make is that the iPad is not indicative of prosumption, but rather places a wedge between production and consumption.

From the perspective of the user, the iPad is made for consuming content. While future apps might focus on the production of content, the very construction of the device discourages these activities. Not ideal for typing, and most notably missing a camera, the device limits the ways in which users can create content. Further, the device, much like Apple's other devices, is far less customizable than the netbooks Apple is attempting to displace (which often run the endlessly customizable Linux OS).

Instead, the iPad is focused on an enhanced, passive consumption experience (and is advertised as such, as opposed to Apple's earlier focus: can't resist). Unlike netbooks, the iPad is primarily an entertainment device. Instead of giving users new ways to produce media content, the focus is on making the experience of consuming old media content more spectacular and profitable -music and movies via the iTunes store, books via the new iBookstore and news via Apple's partnership with the New York Times.

Thus, the story of the iPad's first 24 hours, for me, is the degree to which the tasks of producing and consuming content have been again split into two camps. The few produce it -flashy, glittering and spectacular- and the many consume it as experience. And, of course, for a price.

Does this serve as a rebuttal to an argument that the trend towards the merging of the spheres of production and consumption into prosumption is inevitable? Or is prosumption indeed the trend for a future Apple seems not to grasp? Or will the applications developed for the device overcome its limitations? ~nathan

by nathan jurgenson

Following PJ Rey's excellent summary of the Internet as Playground and Factory yesterday, I offer a few additional observations from the conference this past weekend, focusing on Web 2.0 capitalism, with Google as the primary target. The roughly 100 presenters were not joined by Google; the company said that the conference content seemed "slightly anti-capitalist." Much of the content, indeed, took the corporate ownership of our productive labor online to task.

A common theme was how to discuss Marx's Labor Theory of Value with respect to Web 2.0. Clearly, companies are exploiting our free labor, but they do not have to coerce us. Julian Kucklich argued that we now have exploitation without alienation. That is, our unpaid labor is used for corporate surveillance and profit, even if the labor is not alienating or "foreign to ourselves." Simply, we like using Facebook, Twitter and so on. However, Kucklich further argued that we are taught to think Facebook is fun, that companies use the "ideology of play" to seduce us into producing (or better, prosuming). Martin Roberts, in what was, ironically, perhaps the conference's most entertaining presentation, also took the culture of "fun" to task, arguing that we have been trained to see our work as "fun," making us more productive for the capitalist system. Christian Fuchs most forcefully argued for a communist Internet, stating that exploitation on Web 2.0 is infinite because users are not paid material wages. A good Marxian, he downplayed the importance of the immaterial value gained through sites like Facebook because we live in a capitalist system based on the material. And Ulises Mejias took Web 2.0 to task for creating corporate monopsonies, as Facebook, Amazon, eBay, YouTube, Google and so on have become the corporate titans of Web 2.0 capitalism. He argued that using these corporate monopsonies is dangerous and irresponsible, and called for open-source and public versions of these types of services.

Thus, it is easy to see why Google was reluctant to join this conference. Frank Pasquale forcefully called on Google to be more transparent. Given what was discussed above, as well as Google's central status in our day-to-day knowledge-seeking lives, Pasquale left us with questions to ponder: should its page-rank algorithm be public? Should Google be allowed to up-rank or down-rank links based on their relationship to the company? Should Google be able to simply remove pages from its listings? Should Google be forced to let us know when it does these things? ~nathan
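As a rough postscript to Pasquale's first question: what would it even mean to make "the page-rank algorithm" public? Below is a toy, textbook-style sketch in Python of the kind of link-ranking computation at issue -not Google's actual implementation; the damping factor, iteration count and the three-page "web" are invented for illustration- in which each page's importance derives from the importance of the pages linking to it.

```python
# Toy PageRank-style link ranking (illustrative only, not Google's algorithm).
# Every page's score is split among its outbound links each round; a damping
# factor models a reader occasionally jumping to a random page.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to.
    Every link target is assumed to appear as a key in `links`."""
    pages = list(links)
    n = len(pages)
    ranks = {p: 1.0 / n for p in pages}          # start with equal importance
    for _ in range(iterations):
        new_ranks = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                     # dangling page: spread evenly
                for p in pages:
                    new_ranks[p] += damping * ranks[page] / n
            else:
                share = damping * ranks[page] / len(outlinks)
                for target in outlinks:
                    new_ranks[target] += share
        ranks = new_ranks
    return ranks

# Hypothetical three-page web, purely for illustration.
toy_web = {
    "blog": ["news", "shop"],
    "news": ["blog"],
    "shop": ["news"],
}
print(pagerank(toy_web))
```

Even in this toy version, quietly removing or down-weighting a single link changes the resulting order, which is exactly why Pasquale's questions about disclosure of up-ranking and down-ranking carry weight.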


by nathan jurgenson

For many (especially youths and young adults), attempting to quit or never start Facebook is a difficult challenge. We are compelled to document ourselves and our lives online partly because services like Facebook have many benefits, such as keeping up with friends, scheduling gatherings (e.g., protests) and so on. Additionally, and to the point of this post, the digital documentation of ourselves also means that we exist. There is a common adage that if something is not on Google, it does not exist. As the world becomes increasingly digital, this becomes increasingly true, especially for individuals. One adolescent told her mother, "If you're not on MySpace, you don't exist."

Christopher Lasch's Culture of Narcissism argues that we are increasingly afraid of being nothing or unimportant, so we develop narcissistic impulses to become real. The explosion of new ways to document ourselves online provides new outlets for importance, existence and perhaps even immortality that living only in the material world does not. More than just social networking sites, we document ourselves on Twitter, YouTube, Flickr, and increasingly with services that track, geographically, where one is at all times, often via one's smartphone (e.g., Loopt, Fire Eagle, Google Latitude, etc.).

So what?

In this world where we can document our lives endlessly, we might become fixated on our every behavior: how it will appear to others, how it will help us with our jobs, friends, relationships, and so on. Simply, self-presentation is a strategic game. Erving Goffman discussed this using a dramaturgical model in which we are like actors on a stage performing ourselves. The new technologies described here mean that more and more areas of our lives become part of this performance because new parts of our lives can now be documented (e.g., our every-moment geographic locations). More and more areas of our lives are lived subservient to the performance and identity we want to convey.

In this way, a hyper-fixation on our own subjectivity and its digital simulation (e.g., a Facebook profile) can, to some degree, dictate how we live, making us like characters on a "reality" show, always performing for the camera. With digital documentation technologies, we can become increasingly subservient to subjectivity and identity via their documentation if we are seduced by the importance and immortality that digital existence promises. ~nathan

by nathan jurgenson

Lately, we have been doing lots of work for others. For free.

Millions of users of sites like Facebook and MySpace are clicking away at their profiles, adding detailed information about themselves and others. “We” are uploading content to sites like Flickr, YouTube, the microblogging service Twitter and many others, and our labor creates vast databases about ourselves –what I previously described as a sort of mass exhibitionism.

Facebook's profit model is built upon ownership of its users' labor, specifically, the intimate details of our lives and self-presentations. This is an example of a larger trend of "prosumption," that is, the simultaneous role of being a producer of what one consumes. In the material world we do this more and more often by scanning and bagging our own groceries, checking ourselves onto planes and into hotels, etc. The websites mentioned above are part of the user-generated and social turn the Internet has taken in the last few years –what has come to be known as Web 2.0. And prosumption generally, and especially on Web 2.0, is the mechanism by which we become unpaid workers ("crowdsourcing"), producing valuable information for the benefit of businesses. This is the almost endlessly efficient business model of Web 2.0 capitalism.

Karl Marx argued for taking control of the means of production, and on Web 2.0, to some degree, we have. But what remains in the hands of the few, the businesses, is the profit potential. Facebook's reach is ever-growing, and the company was valued at $15 billion as of 2007, precisely due to the data that users donate to the site.

Perhaps many do not mind giving away their labor because they enjoy the services provided, such as the richly social Facebook platform. However, we should also ask why the personal data that we produce about ourselves does not belong to us. Given the successes of non-profit/open-source software and applications (e.g., Linux, Firefox, etc.), shouldn't we be calling for a non-profit/open-source social networking platform (i.e., an open-source Facebook-like platform) where businesses do not own the highly personal data about ourselves and our socializing? What other ways can we think of to remove the link between our data (and labor) and corporate profit? ~nathan