Tuesday, December 02, 2008

MySpace: A Place For 'Cretins'


Michael Wolff, author of the new Rupert Murdoch bio "The Man Who Owns
the News," stirred up some controversy this week in an interview with BusinessWeek's
Jon Fine, during which he classified MySpace users as low class. "If
you're on MySpace now, you're a (bleep) cretin. And you're not only a (bleep)
cretin, but you're poor," Wolff said, adding: "Nobody who has beyond an
eighth-grade level of education is on MySpace. It is for backwards
people."

To his credit, Fine didn't agree. He pointed out that bands have
to be on MySpace. MySpace Music has become "a powerful driver" for them
and for the site. "And second of all," Fine said, "If I am to accept
your reasoning -- even
though I don't -- as the success of The Sun (a News
Corp.-owned British tabloid) will tell you, there are a lot of cretins
out there and you can make a lot of money off cretins."

Cnet's Caroline McCarthy reads between the lines: "MySpace encourages
glitter text," while Facebook "mandates that members must use their
real names," so one is going to attract a classier crowd than the
other. But, as Fine notes, it
doesn't really matter what kind of audience your site caters to, as
long as it makes money. And MySpace is still "the flagship property of
the top destination for display ads (Fox Interactive Media) on the Web.
Facebook, meanwhile, is
still seen as an experimental ad medium."
Read the whole story...

Power.com: All Your Friends In One Place?


Does Power.com have the power to unseat the likes of MySpace and
Facebook as the top social networking site? Probably not, but the Rio
de Janeiro-based company, with its tools for synchronizing social
networking features and services, will
be useful to those overextended users with multiple social networking
accounts. Power.com currently allows you to view and manage your
Facebook, MySpace, Hi5, MSN Messenger, Orkut, and YouTube accounts all
from one location. It hopes to
soon add LinkedIn, Twitter, Flickr, Hotmail, Yahoo, Gmail, AOL, Skype
and others. The company raised $2 million in funding last year and
looks set to add another $6 million this year.

According to the company's press release, here's how it works:
Users' "Power start page shows them all of their friends, messages, and
content -- from all their social networks, instant messengers, and
email accounts -- in one place ...
Once users log on to Power.com, they are automatically logged on
everywhere that matters. They go from Power.com to their page on any
one of their social networks with one click."

As BusinessWeek's Robert Hof points out, it's kind
of like Meebo, which lets you sign onto all of your instant messaging
accounts from one place, on steroids. "We're taking down the boundaries
between social sites," says CEO Steve
Vachani, who tells Hof that he doesn't see the efforts by Facebook,
Google, MySpace and others to take their profile information to other
sites as open enough to be all-inclusive. That said, while Power.com is
easy to set up, "putting all
this information together can get a little dizzying, especially when
single services such as MySpace and Facebook are already looking mighty
cluttered all by themselves."
Read the whole story...

Monday, November 17, 2008

Don't Panic!

Hi Kids, Dr. Media wanted to send this ditty your way, since it comes from a fellow who used to be on the biz side and decided to become a consultant, plus he gets NYT interviews, way to go! However, his point applies not just to biz folks but especially to media makers of all kinds: DON'T PANIC.
Get the baseball cap that says that and remember it. In times like these you must believe in your vision, and work to make it happen. If fear undermines your drive, you might as well give it up. This DOES NOT mean be a Pollyanna; it means think like a producer, and be hard-nosed as well as optimistic, in spite of the downturn.

In the Hunt - Recession Advice for Entrepreneurs - Stay Calm - NYTimes.com
In the Hunt
In Tough Times, Tackle Anxiety First

With the economy in deep trouble, all sorts of people — consultants and authors among them — are offering all sorts of advice. Most of it is solid, if obvious: Find a good law firm, protect your credit, bill early and often, and perhaps the most self-evident, don’t hire unqualified workers.

Jeffrey Hull, a former corporate manager who now blends careers as an executive coach and psychotherapist, suggests a survival strategy that goes to the heart of the matter: Don’t panic.

“What I’m seeing most these days are small-business owners who are not only trying to shift gears in their business in the midst of a downturn,” he said, “but shifting their energy to stay out of fear mode, which is even more fundamental — and tougher to do.”

Mr. Hull said that when he joined Cor Business Inc., a start-up in Harrison, N.Y., as a partner five years ago, most of the clients were Fortune 500 companies like MasterCard, AT&T and Avaya. The main challenge for Cor Business back then was persuading executives to act on its recommendations, not helping them cope with angst.

This year, however, many of those customers tightened their budgets and gave bigger coaching roles to their own human resources departments or big firms like Drake Beam Morin. That, Mr. Hull said, was his first brush with fear. How were he and his partners, Morgan and Julie McKeown, who founded Cor Business in 2001, going to stanch the bleeding?

The answer was obvious, he said, though they did not recognize it at first. “We sat down and felt very negative,” he said. “We did brainstorming.” That was when they realized that almost all their growth this year had been with small businesses.

Early on, blinded by what he now calls an elitist mentality, the partners had been somewhat dismissive of the newcomers. But more and more entrepreneurs were knocking on their door, as the weakening economy prodded them to overcome their aversion to hiring outside experts. This year, firms with revenue of $1 million to $10 million account for close to half of Cor Business’s projected billings of nearly $2 million. Two years ago, firms of that size represented just 20 percent of Cor’s billings.

Better yet, said Mr. Hull, 49, small-business owners tend to be “more creative and more flexible” than sprawling, bureaucratic organizations. “They are great listeners, and act much more quickly,” he said.

Mr. Hull acknowledged that he had his own to-do list — what he calls his six-step program for “shifting yourself out of fear-based operating mode and getting back on track towards success.” The first step, he said, is to confess that you are, in fact, afraid. “Stress, worry and apprehension are all elements of fear,” he said. “It’s scary right now being in business, but that is what entrepreneurship is all about.”

The second step is to respond to that fear with calm deliberation rather than rash acts that may lead businesses into deeper trouble. “I hammer that distinction into clients,” he said.

The third is for businesses to refocus their efforts on undertakings they are good at. The fourth is to “reframe the story” by looking for new opportunities in difficult times.

The fifth is for entrepreneurs to maintain a balance between their work and personal lives. The sixth is to seek out a critic who will give unvarnished feedback about potential blind spots.

He said he used those principles in his coaching. One client, the co-owner of a real estate investment company in New Jersey, was thinking about scratching a plan to buy a warehouse and turn it into a supermarket, fearing that it would fail and he would be stuck with a lot of debt. “I asked him if he was focusing on what might not work or what could work,” Mr. Hull said. “He realized he had done a lot of research. He knew that people had to buy food, even in a recession. He turned his energy focus from negative to positive. I believe he is going to go through with the deal.”

The same client worried about his firm’s sluggish growth, Mr. Hull said. With revenue stuck at about $5 million, the client felt intimidated when he bumped into people like Donald Trump at an industry conference. “It was a classic entrepreneurial conundrum,” Mr. Hull said. “You reach a certain level of success and stop growing because you’re reluctant to change your ways.”

He urged the two owners of the investment company to start acting like a bigger company by holding regular, structured meetings instead of communicating with each other and their staff members by frequent workplace chats. He also suggested that they assume responsibilities that played to their strengths (one was a schmoozer and a visionary, the other an introvert and numbers cruncher), rather than working out every decision together.

“They agreed,” he said. “They’re practicing it.”

In another case, the son of the founder of a manufacturing company in Manhattan feared that many of his employees, unhappy with his plan to move his factory to another borough, would desert him.

“I had him refocus on the truth,” Mr. Hull said. “He had asked each of them if they would come with him, and they all had said yes. He had lost sight of his homework.”

Most successful entrepreneurs will often recall their missteps, almost with pride, and tell of how they turned them into opportunities. Reinventing yourself is a mark of the entrepreneurial personality — and it is never too late.

Mr. Hull himself started his first business, a consulting firm, with a partner in 1995 when he was 35, after 15 years as a corporate human resources manager, including six years as the director of the department at Booz Allen Hamilton. Though the company was successful, the partners dissolved it in 2000 “to think about what we really wanted to do with our lives.” The other partner went to medical school, and Mr. Hull earned his doctorate in psychology in 2003, joining Cor Business that year. He opened his private counseling practice, Life-Shifting Inc., in Manhattan in 2004.

Asked whether he and his partners feel that cold stab of apprehension that he warns his clients about as revenue at Cor Business stagnates this year after six years of strong growth, Mr. Hull responded, “How could we not, in the midst of the worst economy in my lifetime?” Then he added: “How do I deal with it? By trying to practice what I preach. You have to stay optimistic.”

Saturday, October 18, 2008

Welcome To The New World of Distribution

Hi folks, this one is especially for all you media makers. Peter Broderick's article from Indiewire on the state of distribution, "old" vs. "new". Well stated, and a more detailed commentary than the Mark Gill comments Dr. Media discussed in an earlier blog. Good stuff. See the chart: simplistic, but it sums it up.

FIRST PERSON | Peter Broderick: "Welcome To The New World of Distribution," Part 1

Welcome to the New World of Distribution. Many filmmakers are
emigrating from the Old World, where they have little chance of
succeeding. They are attracted by unprecedented opportunities and the
freedom to shape their own destiny. Life in the New World requires them
to work harder, be more tenacious, and take more risks. There are
daunting challenges and no guarantees of success. But this hasn't
stopped more and more intrepid filmmakers from exploring uncharted
territory and staking claims.

Before the discovery of the New World, the Old
World of Distribution reigned supreme. It is a hierarchical realm where
filmmakers must petition the powers that be to grant them distribution.
Independents who are able to make overall deals are required to give
distributors total control of the marketing and distribution of their
films. The terms of these deals have gotten worse and few filmmakers
end up satisfied.

All is not well for companies and filmmakers in what I call the Old World of Distribution. At Film Independent's Film Financing Conference, Mark Gill vividly described "the ways the independent film business is in trouble" in his widely read and discussed keynote.
Mark listed the companies and divisions that have been shut down or are
teetering on the brink of bankruptcy, noted that five others are in
"serious financial peril," and said that ten independent film
financiers may soon "exit the business." Mark made a persuasive case
that "the sky really is falling... because the accumulation of bad news
is kind of awe-inspiring." While he doesn't expect that the sky will
"hit the ground everywhere," he warned, "it will feel like we just
survived a medieval plague. The carnage and the stench will be ..."

Mark's keynote focused on the distributors, production companies,
studio specialty divisions, and foreign sales companies that dominate
independent film in the Old World. Mark has many years of experience in
this world. He was President of Miramax Films, then head of Warner Independent Pictures, and is now CEO of The Film Department. He sees things from the perspective of a seasoned Old World executive.

I see things from the filmmaker's perspective. For the past 11
years, I have been helping filmmakers maximize revenues, get their
films seen as widely as possible, and launch or further their careers.
From 1997 until 2002, I experienced the deteriorating state of the Old
World of Distribution as head of IFC's Next Wave Films.
After the company closed, I discovered the New World of Distribution in
its formative stages. A few directors had already gotten impressive
results by splitting up their rights and selling DVDs directly from
their websites.

Filmmakers started asking me to advise them on distribution, and, before I knew it, I was a "distribution strategist"
working with independents across the country and around the globe.
Since late 2002, I have consulted with more than 500 filmmakers. While
some have taken traditional paths in the Old World, many more have
blazed trails in the new one. I've learned from their successes and
failures and had the opportunity to share these lessons with other
filmmakers, who then have been able to go further down these trails. It
has been very exciting to be able to participate in the building of the
New World, where the old rules no longer apply.

Many of the rulers of the Old World continue to look backwards.
Having spent their entire careers in this realm, played by its rules
and succeeded, they can't see past the limits of their experience. For
them, the Old World is the known world, which they refer to as "the
film business." They explain away the serious problems facing the Old
World by citing the film glut, higher marketing costs, mediocre films,
and the historically cyclical nature of the industry. They appear to
believe that everything will be just fine with enough discipline and
patience--if fewer, better films are made, costs are controlled, and
they can hold out until the next upturn.

Many of these executives seem unaware of the larger structural
changes threatening their world. They recognize that video-on-demand
and digital downloads will become more significant revenue streams but
seem confident that they can incorporate them into their traditional
distribution model. These executives do not understand the fundamental
importance of the internet or its disruptive power. By enabling
filmmakers in the New World to reach audiences directly and
dramatically reducing their distribution costs, it empowers them to
keep control of their "content."

The Old World executives who do acknowledge the New World can be as
dismissive as record industry executives were when they first noticed
the internet. Their usual condescending response is that the internet may
work for "little" films with "niche" audiences. After admitting that
the internet represents added competition for eyeballs, they are quick
to point out that little money is currently being made from digital
downloads or online advertising.

Notable successes in the New World represent the shape of things to
come. Several filmmakers have each made more than one million dollars
selling their films directly from their websites. Other filmmakers have
begun raising money online. During 10 days of internet fundraising, Robert Greenwald attracted $385,000 in contributions for his documentary "Iraq for Sale."

Arin Crumley and Susan Buice built awareness for their feature "Four Eyed Monsters" through a series of video podcasts. They then made their film available for free on YouTube and MySpace, where it was viewed over a million times. Arin and Susan made money through shared ad revenues and Spout.com sign-ups, and then snagged a deal with IFC for domestic television and home video distribution. Wayne Wang will follow in their footsteps when he premieres his new feature "The Princess of Nebraska" on YouTube October 17th.

The power of the internet was also demonstrated by the remarkably successful documentary, "The Secret."
During the first stage of its release, "The Secret" could be streamed
or purchased at the film's website, but was not available in theaters,
on television, in stores, or on Amazon. During the next stage, the book was launched by Simon & Schuster
in bookstores and online. After the book shot to the top of the
bestseller list, "The Secret" DVD was finally made available in retail
stores and on Amazon. Over 2 million DVDs were sold during the first
twelve months of its release.

The chart above illustrates the essential differences between Old and New World Distribution.

Here are ten guiding principles of New World distribution:

1. GREATER CONTROL - Filmmakers retain overall control of
their distribution, choosing which rights to give distribution partners
and which to retain. If filmmakers hire a service deal company or a
booker to arrange a theatrical run, they control the marketing
campaign, spending, and the timing of their release. In the OW (Old
World), a distributor that acquires all rights has total control of
distribution. Filmmakers usually have little or no influence on key
marketing and distribution decisions.

2. HYBRID DISTRIBUTION - Filmmakers split up their rights,
working with distribution partners in certain sectors and keeping the
right to make direct sales. They can make separate deals for: retail
home video, television, educational, nontheatrical, and VOD, as well as
splitting up their digital rights. They also sell DVDs from their
websites and at screenings, and may make digital downloads available
directly from their sites. In the OW, filmmakers make overall deals,
giving one company all their rights (now known or ever to be dreamed
up) for as long as 25 years.

3. CUSTOMIZED STRATEGIES - Filmmakers design creative
distribution strategies customized to their film's content and target
audiences. They can begin outreach to audiences and potential
organizational partners before or during production. They often ignore
traditional windows, selling DVDs from their websites before they are
available in stores, sometimes during their theatrical release, and
even at festivals. Filmmakers are able to test their strategies
step-by-step, and modify them as needed. In the OW, distribution plans
are much more formulaic and rigid.

4. CORE AUDIENCES - Filmmakers target core audiences. Their
priority is to reach them effectively, and then hopefully cross over to
a wider public. They reach core audiences directly both online and
offline, through websites, mailing lists, organizations, and
publications. In the OW, many distributors market to a general
audience, which is highly inefficient and more and more expensive.

Notable exceptions, Fox Searchlight and Bob Berney, have demonstrated how effective highly targeted marketing can be. "Napoleon Dynamite" first targeted nerds, "Passion of the Christ" began with evangelicals, and "My Big Fat Greek Wedding"
started with Greek Americans. Building on their original base, each of
these films was then able to significantly expand and diversify its audience.

5. REDUCING COSTS - Filmmakers reduce costs by using the
internet and by spending less on traditional print, television, and
radio advertising. While four years ago a five-city theatrical service
deal cost $250,000 - $300,000, today comparable service deals can cost
half that or even less. In the OW, marketing costs have risen.

6. DIRECT ACCESS TO VIEWERS - Filmmakers use the internet to reach audiences directly. The makers of the motorcycle-racing documentary, "Faster,"
used the web to quickly and inexpensively reach motorcycle fans around
the world. They pulled off an inspired stunt at the Cannes Film
Festival, which generated international coverage and widespread
awareness among fans. This sparked lucrative DVD sales first from the
website and then in retail stores. In the OW, filmmakers only have
indirect access to audiences through distributors.

7. DIRECT SALES - Filmmakers make much higher margins on
direct sales from their websites and at screenings than they do through
retail sales. They can make as much as $23 profit on a $24.95 website
sale (plus $4.95 for shipping and handling). A retail sale of the same
DVD only nets $2.50 via a typical 20% royalty video deal. If filmmakers
sell an educational copy from their websites to a college or university
for $250 (an average educational price), they can net $240. Direct
sales to consumers provide valuable customer data, which enables
filmmakers to make future sales to these buyers. They can sell other
versions of a film, the soundtrack, books, posters, and t-shirts. In
the OW, filmmakers are not permitted to make direct sales, have no
access to customer data, and have no merchandising rights.
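The margin arithmetic in this section can be sketched as a quick calculation. This is a hypothetical illustration, not anything from the article itself: the unit-cost and wholesale-price figures below are assumptions chosen so the results line up with the article's $23 and $2.50 examples.

```python
# Margin comparison based on the figures quoted above: a $24.95 direct
# website sale (plus $4.95 shipping) nets roughly $23, while a retail
# sale under a typical 20% royalty deal nets only about $2.50.

def direct_sale_net(price=24.95, shipping=4.95, unit_cost=6.90):
    """Net on a direct sale: the buyer pays price + shipping; unit_cost
    (disc, packaging, fulfillment, postage) is an assumed figure chosen
    to reproduce the article's ~$23 example."""
    return price + shipping - unit_cost

def retail_sale_net(price=24.95, royalty_rate=0.20, wholesale_fraction=0.50):
    """Net under a retail royalty deal: the filmmaker receives a royalty
    on the wholesale price, assumed here to be half of retail."""
    return price * wholesale_fraction * royalty_rate

print(f"Direct sale net: ${direct_sale_net():.2f}")   # about $23
print(f"Retail sale net: ${retail_sale_net():.2f}")   # about $2.50
print(f"Ratio: {direct_sale_net() / retail_sale_net():.0f}x")
```

On these assumed numbers, a single website sale is worth roughly nine retail sales to the filmmaker, which is the economic point behind hybrid distribution's emphasis on direct sales.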

8. GLOBAL DISTRIBUTION - Filmmakers are now making their
films available to viewers anywhere in the world. Supplementing their
deals with distributors in other countries, they sell their films to
consumers in unsold territories via DVD or digital download directly
from their websites. For the first time, filmmakers are aggregating
audiences across national boundaries. In the OW, distribution is
territory by territory, and most independent films have little or no
foreign distribution.

9. SEPARATE REVENUE STREAMS - Filmmakers limit
cross-collateralization and accounting problems by splitting up their
distribution rights. All revenues from sales on their websites come
directly to them or through the fulfillment company they've hired to
store and ship DVDs. By separating the revenues from each distribution
partner, filmmakers prevent expenses from one distribution channel
being charged against revenues from another. This makes accounting
simpler and more transparent. In an OW overall deal, all revenues and
all expenses are combined, making monitoring revenues much more difficult.

10. TRUE FANS - Filmmakers connect with viewers online and at
screenings, establish direct relationships with them, and build core
personal audiences. They ask for their support, making it clear that
DVD purchases from the website will help them break even and make more
movies. Every filmmaker with a website has the chance to turn visitors
into subscribers, subscribers into purchasers, and purchasers into true
fans who can contribute to new productions. In the OW, filmmakers do
not have direct access to viewers.

(c) 2008 Peter Broderick

Monday, October 13, 2008

A Profile of Online Profiles - By the Numbers Blog - NYTimes.com

Hi, well here you go, if you missed it: a report by RapLeaf, reviewed by Blow of the NYT, talking about online social media usage. There are the usual questions about methodology and verification (how do you know who's telling the truth if you don't interview them directly?), but let's set that aside for the moment.
This study's most interesting references are to the extent to which people lie about things online. That would certainly make it hard to know what one was talking about, now wouldn't it? Still, if we accept the study's findings, more women than men are on these social media, with women using them for relationship maintenance whereas men use them for business rather than personal relationships. Sounds like it's all about relationships however you cut it. Also, most interestingly, the conclusion, based on what in-depth data I don't know, is that men do transactions while women do relationships. Oh really, we could have concluded that without a study, couldn't we? Men are from Mars, women are from Venus, and of course he didn't do any real research either.
Dr. Media says, it's about time we started to look at these social media, but if we really want to begin to understand them, let's do some real research.

A Profile of Online Profiles - By the Numbers Blog - NYTimes.com
September 9, 2008, 3:06 pm
A Profile of Online Profiles
By Charles M. Blow

I recently created a Facebook account. My kids thought it was hysterical. They said that I was too old. I’m only 38, but as far as they are concerned, Moses was my best friend in kindergarten.

Being a numbers guy, this got me interested in procuring hard data on social network users…and their behavioral traits while logged on. Here is some of what I found:

1. GENDER: According to a RapLeaf study of 49.3 million people released in July, 20 percent more females used social networks than males (this surprised me). The biggest disparity was for people under 25. In my age range, 35 to 40, men outnumbered women (see chart above).

According to an April Study by RapLeaf, men use social networking more for business and women more for socializing. From the report:

“Men tend to be more transactional and less relationship building when it comes to their friends on social networks. Women tend to have slightly more friends on average.”

2. BEST “HANDLES”: When it came to dating sites, things really got interesting. In April, The Times of London reported on a study by Dr. Monica Whitty, “a lecturer in cyber-psychology,” which revealed the names or “handles” that garnered the most numerous responses among online daters. Here’s what it said:

“Playful and flirtatious names such as “fun2bwith” or “i’msweet” were ranked top by both men and women daters as those they would most like to contact. Physical descriptors such as “cutie” or “blueeyes” were close behind. ‘These names suggest an outgoing or fun nature, or clarify the user’s positive physical appearance,’ said Dr Monica Whitty.”

But, there seemed to be some gender imbalances in the names:

“However she advised female lonely hearts to avoid screen names which attempt to be classy, or show how clever they are. Males daters said they would be less likely to contact screen names such as ‘wellread’ or ‘welleducated,’ although the study found women were more drawn to names that suggested men were cultured. ‘Less flirtatious names may be more appealing to women because they are wary of men who might be using the site to find one-night stands rather than long-term relationships,’ Dr Whitty said.”

3. LYING: According to a study entitled “Separating Fact From Fiction: An Examination of Deceptive Self-Presentation in Online Dating Profiles” that was published this year in the Personality and Social Psychology Bulletin, there is quite a bit of lying going on in online profiles. And men lie more than women. Shocker!

It also turns out that people online are more accepting of some lies than others. From the study:

“Participants believed that lying about relationship information is less socially acceptable than lying about any other category. … Men considered it more acceptable than women to lie about their social status … [and] found it more acceptable than women to lie about their occupation, education and marginally about their relationship status.”

Below are some graphs from that report. Note how almost all women understate their weight and most men overstate their height. Typical.

Thursday, October 09, 2008

KMWorld.com: : Now, everything is fragmented

Hi gang, been a while; apparently my blogger glitched and my posts haven't been getting posted, which I just discovered. Oh well, I will catch you up.
In this interesting missive, Snowden, taking off from Dave Weinberger's book Everything is Miscellaneous, argues that everything is fragmented. This can be true; however, it depends on how one sees it. I understand his technical commentary, but allow me to give you a psychological perspective. Fragmentation is a perception from the POV of one who assumes the previous organizational model was "truth". A more radical way to see this event is as the liberation of the mind, and of information, to free itself from an old model and organize itself in new ways, or more appropriately to allow information to be rearranged in new ways. This is how new ways of thinking, acting, designing, relating and communicating emerge, via the process of falling apart; the same holds for people. This process is not always pleasant or joyful, but it is effective. Think about falling in love, think about falling out of love: how does that happen?
See my future book, Futureself, for the answer to that one. A hint, though: it has to do with the gap between who we are, who we think we are, and who we would like to be, our Personal Mythology.
Think of fragmentation as the liberation of mind, the breaking down of outmoded models, and an opportunity for invention.

KMWorld.com: : Now, everything is fragmented
Now, everything is fragmented
By Dave Snowden - Posted May 1, 2008

I used the phrase "everything is fragmented" for the first time last year at KMWorld & Intranets in San Jose. I was picking up on the title of Dave Weinberger’s useful book Everything is Miscellaneous. Dave dealt with the shift from hierarchical taxonomies to the free form tagging of social computing. I wanted to build on that by pointing to the shift during the life span of knowledge management from the "chunked" material of case studies and best-practice documents to the unstructured, fragmented and finely granular material that pervades the blogosphere. So when I was asked to contribute this column to KMWorld magazine, it seemed an appropriate title; it allows me to talk about not only trends in technology but also social issues, the scientific use of narrative, and to fire off the odd invective about over-constrained and over-controlled systems.

So what do I mean by the idea of fragmentation? Well, it’s simple really: The more you structure material, the more you summarize (either as an editor or using technology), the more you make material specific to a context or time, the less utility that material has as things change. For years now I have asked this question at conferences around the world: Faced with an intractable problem, do you go and draw down best practice from your company’s knowledge management system, or do you go and find eight or nine people you know and trust with relevant experience and listen to their stories?

With the odd exception (generally IT managers who have just spent a few million dollars putting a best-practice system in and think people should use it), everyone goes for the stories. So why for the last decade and more have we focused on chunking up best practice? These days I add a few references to the way I and others use blogs to link and connect to insight and learning. Increasingly unstructured material, blended in unexpected ways, provides a richer source of knowledge.

Over the last decade as I have worked on homeland security, we have had the chance to run some experiments that show that raw field intelligence has more utility over longer periods of time than intelligence reports written at a specific time and place. In other experiments, we have demonstrated that narrative assessment of a battlefield picks up more weak signals (those things that after the event you wished you had paid attention to) than analytical structured thinking.

I think there are two reasons for those findings. First, we live in a world subject to constant change, and it’s better to blend fragments at the time of need than attempt to anticipate all needs. We are moving from attempting to anticipate the future to creating an attitude and capability of anticipatory awareness. Second, we are homo sapiens at least in part because we were first homo narrans: the storytelling ape. Dealing with anecdotal material from multiple sources and creating our own stories in turn has been a critical part of our evolutionary development.

The free flow of the blogosphere, ad hoc collaboration, Facebook and many other tools work because they conform with the patterns of expectation that arise from our evolutionary uncertainty. Have you ever heard anyone ask Wikipedia or the blogosphere, "How do we create a knowledge sharing culture?" No, but when I visit the knowledge management practitioners in organizations around the world, it is the dominant question. It’s not natural to chunk up material, to make it context specific; it is natural to share, blend and create fragmented material based on thoughts and reflections as we carry out tasks or engage in social interaction.

The big problem for the knowledge and information management functions in an organization is that their governance structures were developed in an earlier, more ordered time, when we focused on transaction systems for accounting and process. The essence of such systems is to remove ambiguity; the evolutionary pressure of natural human knowledge exchange is to embrace it. Narrative, social computing, and the open source movement are all comfortable with ambiguity; they embrace it and use it. Organizations need to do the same, but the old patterns of control persist beyond their natural utility.

How we do this, what prejudices and difficulties we have to overcome to achieve this change, will be the theme of this column over the months. How can we use social computing within a corporate environment when we don’t have millions of participants? What is the relation between the formal transaction systems and this new fragmented world? Above all, how do we manage necessary uncertainty?

Is Google making us stupid?

Hi, I've been out of pocket for a while, but have been meaning to respond to this little ditty by Carr, who also thinks he can tell us what the Internet is doing to our brains, even though by his own admission he has absolutely no empirical evidence for any of his conclusions. I love it.
The topic of what we are doing to ourselves, our brains, our relationships, etc., with the emergence of the Internet is indeed a fundamental question, and it should be. How about we do some RESEARCH and find out? Carr is not alone in his curiosity. I spoke with Charlene Li of Forrester, at her book party for her well-researched book Groundswell, and she referred me to ONE social scientist researcher she was familiar with who was looking into the impact of social media.
I would like to point out that, going back to the argument represented in Plato's writings, the question was whether or not writing would destroy our ability to remember. Well, the verdict may still be out on that one, but you can read about it on the Internet, I mean the library.
Dr. Media says, have fun rotting your brains, and expanding your mind.

The Atlantic Online | July/August 2008 | Is Google Making Us Stupid? | Nicholas Carr

Is Google Making Us Stupid?

What the Internet is doing to our brains

by Nicholas Carr

Illustration by Guy Billout

“Dave, stop. Stop, will you? Stop, Dave. Will you stop, Dave?” So the supercomputer HAL pleads with the implacable astronaut Dave Bowman in a famous and weirdly poignant scene toward the end of Stanley Kubrick’s 2001: A Space Odyssey. Bowman, having nearly been sent to a deep-space death by the malfunctioning machine, is calmly, coldly disconnecting the memory circuits that control its artificial “brain.” “Dave, my mind is going,” HAL says, forlornly. “I can feel it. I can feel it.”

I can feel it, too. Over the past few years I’ve had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory. My mind isn’t going—so far as I can tell—but it’s changing. I’m not thinking the way I used to think. I can feel it most strongly when I’m reading. Immersing myself in a book or a lengthy article used to be easy. My mind would get caught up in the narrative or the turns of the argument, and I’d spend hours strolling through long stretches of prose. That’s rarely the case anymore. Now my concentration often starts to drift after two or three pages. I get fidgety, lose the thread, begin looking for something else to do. I feel as if I’m always dragging my wayward brain back to the text. The deep reading that used to come naturally has become a struggle.

I think I know what’s going on. For more than a decade now, I’ve been spending a lot of time online, searching and surfing and sometimes adding to the great databases of the Internet. The Web has been a godsend to me as a writer. Research that once required days in the stacks or periodical rooms of libraries can now be done in minutes. A few Google searches, some quick clicks on hyperlinks, and I’ve got the telltale fact or pithy quote I was after. Even when I’m not working, I’m as likely as not to be foraging in the Web’s info-thickets—reading and writing e-mails, scanning headlines and blog posts, watching videos and listening to podcasts, or just tripping from link to link to link. (Unlike footnotes, to which they’re sometimes likened, hyperlinks don’t merely point to related works; they propel you toward them.)

For me, as for others, the Net is becoming a universal medium, the conduit for most of the information that flows through my eyes and ears and into my mind. The advantages of having immediate access to such an incredibly rich store of information are many, and they’ve been widely described and duly applauded. “The perfect recall of silicon memory,” Wired’s Clive Thompson has written, “can be an enormous boon to thinking.” But that boon comes at a price. As the media theorist Marshall McLuhan pointed out in the 1960s, media are not just passive channels of information. They supply the stuff of thought, but they also shape the process of thought. And what the Net seems to be doing is chipping away my capacity for concentration and contemplation. My mind now expects to take in information the way the Net distributes it: in a swiftly moving stream of particles. Once I was a scuba diver in the sea of words. Now I zip along the surface like a guy on a Jet Ski.

I’m not the only one. When I mention my troubles with reading to friends and acquaintances—literary types, most of them—many say they’re having similar experiences. The more they use the Web, the more they have to fight to stay focused on long pieces of writing. Some of the bloggers I follow have also begun mentioning the phenomenon. Scott Karp, who writes a blog about online media, recently confessed that he has stopped reading books altogether. “I was a lit major in college, and used to be [a] voracious book reader,” he wrote. “What happened?” He speculates on the answer: “What if I do all my reading on the web not so much because the way I read has changed, i.e. I’m just seeking convenience, but because the way I THINK has changed?”

Bruce Friedman, who blogs regularly about the use of computers in medicine, also has described how the Internet has altered his mental habits. “I now have almost totally lost the ability to read and absorb a longish article on the web or in print,” he wrote earlier this year. A pathologist who has long been on the faculty of the University of Michigan Medical School, Friedman elaborated on his comment in a telephone conversation with me. His thinking, he said, has taken on a “staccato” quality, reflecting the way he quickly scans short passages of text from many sources online. “I can’t read War and Peace anymore,” he admitted. “I’ve lost the ability to do that. Even a blog post of more than three or four paragraphs is too much to absorb. I skim it.”

Anecdotes alone don’t prove much. And we still await the long-term neurological and psychological experiments that will provide a definitive picture of how Internet use affects cognition. But a recently published study of online research habits, conducted by scholars from University College London, suggests that we may well be in the midst of a sea change in the way we read and think. As part of the five-year research program, the scholars examined computer logs documenting the behavior of visitors to two popular research sites, one operated by the British Library and one by a U.K. educational consortium, that provide access to journal articles, e-books, and other sources of written information. They found that people using the sites exhibited “a form of skimming activity,” hopping from one source to another and rarely returning to any source they’d already visited. They typically read no more than one or two pages of an article or book before they would “bounce” out to another site. Sometimes they’d save a long article, but there’s no evidence that they ever went back and actually read it. The authors of the study report:

It is clear that users are not reading online in the traditional sense; indeed there are signs that new forms of “reading” are emerging as users “power browse” horizontally through titles, contents pages and abstracts going for quick wins. It almost seems that they go online to avoid reading in the traditional sense.

Thanks to the ubiquity of text on the Internet, not to mention the popularity of text-messaging on cell phones, we may well be reading more today than we did in the 1970s or 1980s, when television was our medium of choice. But it’s a different kind of reading, and behind it lies a different kind of thinking—perhaps even a new sense of the self. “We are not only what we read,” says Maryanne Wolf, a developmental psychologist at Tufts University and the author of Proust and the Squid: The Story and Science of the Reading Brain. “We are how we read.” Wolf worries that the style of reading promoted by the Net, a style that puts “efficiency” and “immediacy” above all else, may be weakening our capacity for the kind of deep reading that emerged when an earlier technology, the printing press, made long and complex works of prose commonplace. When we read online, she says, we tend to become “mere decoders of information.” Our ability to interpret text, to make the rich mental connections that form when we read deeply and without distraction, remains largely disengaged.

Reading, explains Wolf, is not an instinctive skill for human beings. It’s not etched into our genes the way speech is. We have to teach our minds how to translate the symbolic characters we see into the language we understand. And the media or other technologies we use in learning and practicing the craft of reading play an important part in shaping the neural circuits inside our brains. Experiments demonstrate that readers of ideograms, such as the Chinese, develop a mental circuitry for reading that is very different from the circuitry found in those of us whose written language employs an alphabet. The variations extend across many regions of the brain, including those that govern such essential cognitive functions as memory and the interpretation of visual and auditory stimuli. We can expect as well that the circuits woven by our use of the Net will be different from those woven by our reading of books and other printed works.

Sometime in 1882, Friedrich Nietzsche bought a typewriter—a Malling-Hansen Writing Ball, to be precise. His vision was failing, and keeping his eyes focused on a page had become exhausting and painful, often bringing on crushing headaches. He had been forced to curtail his writing, and he feared that he would soon have to give it up. The typewriter rescued him, at least for a time. Once he had mastered touch-typing, he was able to write with his eyes closed, using only the tips of his fingers. Words could once again flow from his mind to the page.

But the machine had a subtler effect on his work. One of Nietzsche’s friends, a composer, noticed a change in the style of his writing. His already terse prose had become even tighter, more telegraphic. “Perhaps you will through this instrument even take to a new idiom,” the friend wrote in a letter, noting that, in his own work, his “‘thoughts’ in music and language often depend on the quality of pen and paper.”

Also see:

Living With a Computer
(July 1982)
"The process works this way. When I sit down to write a letter or start the first draft of an article, I simply type on the keyboard and the words appear on the screen..." By James Fallows

“You are right,” Nietzsche replied, “our writing equipment takes part in the forming of our thoughts.” Under the sway of the machine, writes the German media scholar Friedrich A. Kittler, Nietzsche’s prose “changed from arguments to aphorisms, from thoughts to puns, from rhetoric to telegram style.”

The human brain is almost infinitely malleable. People used to think that our mental meshwork, the dense connections formed among the 100 billion or so neurons inside our skulls, was largely fixed by the time we reached adulthood. But brain researchers have discovered that that’s not the case. James Olds, a professor of neuroscience who directs the Krasnow Institute for Advanced Study at George Mason University, says that even the adult mind “is very plastic.” Nerve cells routinely break old connections and form new ones. “The brain,” according to Olds, “has the ability to reprogram itself on the fly, altering the way it functions.”

As we use what the sociologist Daniel Bell has called our “intellectual technologies”—the tools that extend our mental rather than our physical capacities—we inevitably begin to take on the qualities of those technologies. The mechanical clock, which came into common use in the 14th century, provides a compelling example. In Technics and Civilization, the historian and cultural critic Lewis Mumford described how the clock “disassociated time from human events and helped create the belief in an independent world of mathematically measurable sequences.” The “abstract framework of divided time” became “the point of reference for both action and thought.”

The clock’s methodical ticking helped bring into being the scientific mind and the scientific man. But it also took something away. As the late MIT computer scientist Joseph Weizenbaum observed in his 1976 book, Computer Power and Human Reason: From Judgment to Calculation, the conception of the world that emerged from the widespread use of timekeeping instruments “remains an impoverished version of the older one, for it rests on a rejection of those direct experiences that formed the basis for, and indeed constituted, the old reality.” In deciding when to eat, to work, to sleep, to rise, we stopped listening to our senses and started obeying the clock.

The process of adapting to new intellectual technologies is reflected in the changing metaphors we use to explain ourselves to ourselves. When the mechanical clock arrived, people began thinking of their brains as operating “like clockwork.” Today, in the age of software, we have come to think of them as operating “like computers.” But the changes, neuroscience tells us, go much deeper than metaphor. Thanks to our brain’s plasticity, the adaptation occurs also at a biological level.

The Internet promises to have particularly far-reaching effects on cognition. In a paper published in 1936, the British mathematician Alan Turing proved that a digital computer, which at the time existed only as a theoretical machine, could be programmed to perform the function of any other information-processing device. And that’s what we’re seeing today. The Internet, an immeasurably powerful computing system, is subsuming most of our other intellectual technologies. It’s becoming our map and our clock, our printing press and our typewriter, our calculator and our telephone, and our radio and TV.

When the Net absorbs a medium, that medium is re-created in the Net’s image. It injects the medium’s content with hyperlinks, blinking ads, and other digital gewgaws, and it surrounds the content with the content of all the other media it has absorbed. A new e-mail message, for instance, may announce its arrival as we’re glancing over the latest headlines at a newspaper’s site. The result is to scatter our attention and diffuse our concentration.

The Net’s influence doesn’t end at the edges of a computer screen, either. As people’s minds become attuned to the crazy quilt of Internet media, traditional media have to adapt to the audience’s new expectations. Television programs add text crawls and pop-up ads, and magazines and newspapers shorten their articles, introduce capsule summaries, and crowd their pages with easy-to-browse info-snippets. When, in March of this year, The New York Times decided to devote the second and third pages of every edition to article abstracts, its design director, Tom Bodkin, explained that the “shortcuts” would give harried readers a quick “taste” of the day’s news, sparing them the “less efficient” method of actually turning the pages and reading the articles. Old media have little choice but to play by the new-media rules.

Never has a communications system played so many roles in our lives—or exerted such broad influence over our thoughts—as the Internet does today. Yet, for all that’s been written about the Net, there’s been little consideration of how, exactly, it’s reprogramming us. The Net’s intellectual ethic remains obscure.

About the same time that Nietzsche started using his typewriter, an earnest young man named Frederick Winslow Taylor carried a stopwatch into the Midvale Steel plant in Philadelphia and began a historic series of experiments aimed at improving the efficiency of the plant’s machinists. With the approval of Midvale’s owners, he recruited a group of factory hands, set them to work on various metalworking machines, and recorded and timed their every movement as well as the operations of the machines. By breaking down every job into a sequence of small, discrete steps and then testing different ways of performing each one, Taylor created a set of precise instructions—an “algorithm,” we might say today—for how each worker should work. Midvale’s employees grumbled about the strict new regime, claiming that it turned them into little more than automatons, but the factory’s productivity soared.

More than a hundred years after the invention of the steam engine, the Industrial Revolution had at last found its philosophy and its philosopher. Taylor’s tight industrial choreography—his “system,” as he liked to call it—was embraced by manufacturers throughout the country and, in time, around the world. Seeking maximum speed, maximum efficiency, and maximum output, factory owners used time-and-motion studies to organize their work and configure the jobs of their workers. The goal, as Taylor defined it in his celebrated 1911 treatise, The Principles of Scientific Management, was to identify and adopt, for every job, the “one best method” of work and thereby to effect “the gradual substitution of science for rule of thumb throughout the mechanic arts.” Once his system was applied to all acts of manual labor, Taylor assured his followers, it would bring about a restructuring not only of industry but of society, creating a utopia of perfect efficiency. “In the past the man has been first,” he declared; “in the future the system must be first.”

Taylor’s system is still very much with us; it remains the ethic of industrial manufacturing. And now, thanks to the growing power that computer engineers and software coders wield over our intellectual lives, Taylor’s ethic is beginning to govern the realm of the mind as well. The Internet is a machine designed for the efficient and automated collection, transmission, and manipulation of information, and its legions of programmers are intent on finding the “one best method”—the perfect algorithm—to carry out every mental movement of what we’ve come to describe as “knowledge work.”

Google’s headquarters, in Mountain View, California—the Googleplex—is the Internet’s high church, and the religion practiced inside its walls is Taylorism. Google, says its chief executive, Eric Schmidt, is “a company that’s founded around the science of measurement,” and it is striving to “systematize everything” it does. Drawing on the terabytes of behavioral data it collects through its search engine and other sites, it carries out thousands of experiments a day, according to the Harvard Business Review, and it uses the results to refine the algorithms that increasingly control how people find information and extract meaning from it. What Taylor did for the work of the hand, Google is doing for the work of the mind.

The company has declared that its mission is “to organize the world’s information and make it universally accessible and useful.” It seeks to develop “the perfect search engine,” which it defines as something that “understands exactly what you mean and gives you back exactly what you want.” In Google’s view, information is a kind of commodity, a utilitarian resource that can be mined and processed with industrial efficiency. The more pieces of information we can “access” and the faster we can extract their gist, the more productive we become as thinkers.

Where does it end? Sergey Brin and Larry Page, the gifted young men who founded Google while pursuing doctoral degrees in computer science at Stanford, speak frequently of their desire to turn their search engine into an artificial intelligence, a HAL-like machine that might be connected directly to our brains. “The ultimate search engine is something as smart as people—or smarter,” Page said in a speech a few years back. “For us, working on search is a way to work on artificial intelligence.” In a 2004 interview with Newsweek, Brin said, “Certainly if you had all the world’s information directly attached to your brain, or an artificial brain that was smarter than your brain, you’d be better off.” Last year, Page told a convention of scientists that Google is “really trying to build artificial intelligence and to do it on a large scale.”

Such an ambition is a natural one, even an admirable one, for a pair of math whizzes with vast quantities of cash at their disposal and a small army of computer scientists in their employ. A fundamentally scientific enterprise, Google is motivated by a desire to use technology, in Eric Schmidt’s words, “to solve problems that have never been solved before,” and artificial intelligence is the hardest problem out there. Why wouldn’t Brin and Page want to be the ones to crack it?

Still, their easy assumption that we’d all “be better off” if our brains were supplemented, or even replaced, by an artificial intelligence is unsettling. It suggests a belief that intelligence is the output of a mechanical process, a series of discrete steps that can be isolated, measured, and optimized. In Google’s world, the world we enter when we go online, there’s little place for the fuzziness of contemplation. Ambiguity is not an opening for insight but a bug to be fixed. The human brain is just an outdated computer that needs a faster processor and a bigger hard drive.

The idea that our minds should operate as high-speed data-processing machines is not only built into the workings of the Internet, it is the network’s reigning business model as well. The faster we surf across the Web—the more links we click and pages we view—the more opportunities Google and other companies gain to collect information about us and to feed us advertisements. Most of the proprietors of the commercial Internet have a financial stake in collecting the crumbs of data we leave behind as we flit from link to link—the more crumbs, the better. The last thing these companies want is to encourage leisurely reading or slow, concentrated thought. It’s in their economic interest to drive us to distraction.

Maybe I’m just a worrywart. Just as there’s a tendency to glorify technological progress, there’s a countertendency to expect the worst of every new tool or machine. In Plato’s Phaedrus, Socrates bemoaned the development of writing. He feared that, as people came to rely on the written word as a substitute for the knowledge they used to carry inside their heads, they would, in the words of one of the dialogue’s characters, “cease to exercise their memory and become forgetful.” And because they would be able to “receive a quantity of information without proper instruction,” they would “be thought very knowledgeable when they are for the most part quite ignorant.” They would be “filled with the conceit of wisdom instead of real wisdom.” Socrates wasn’t wrong—the new technology did often have the effects he feared—but he was shortsighted. He couldn’t foresee the many ways that writing and reading would serve to spread information, spur fresh ideas, and expand human knowledge (if not wisdom).

The arrival of Gutenberg’s printing press, in the 15th century, set off another round of teeth gnashing. The Italian humanist Hieronimo Squarciafico worried that the easy availability of books would lead to intellectual laziness, making men “less studious” and weakening their minds. Others argued that cheaply printed books and broadsheets would undermine religious authority, demean the work of scholars and scribes, and spread sedition and debauchery. As New York University professor Clay Shirky notes, “Most of the arguments made against the printing press were correct, even prescient.” But, again, the doomsayers were unable to imagine the myriad blessings that the printed word would deliver.

So, yes, you should be skeptical of my skepticism. Perhaps those who dismiss critics of the Internet as Luddites or nostalgists will be proved correct, and from our hyperactive, data-stoked minds will spring a golden age of intellectual discovery and universal wisdom. Then again, the Net isn’t the alphabet, and although it may replace the printing press, it produces something altogether different. The kind of deep reading that a sequence of printed pages promotes is valuable not just for the knowledge we acquire from the author’s words but for the intellectual vibrations those words set off within our own minds. In the quiet spaces opened up by the sustained, undistracted reading of a book, or by any other act of contemplation, for that matter, we make our own associations, draw our own inferences and analogies, foster our own ideas. Deep reading, as Maryanne Wolf argues, is indistinguishable from deep thinking.

If we lose those quiet spaces, or fill them up with “content,” we will sacrifice something important not only in our selves but in our culture. In a recent essay, the playwright Richard Foreman eloquently described what’s at stake:

I come from a tradition of Western culture, in which the ideal (my ideal) was the complex, dense and “cathedral-like” structure of the highly educated and articulate personality—a man or woman who carried inside themselves a personally constructed and unique version of the entire heritage of the West. [But now] I see within us all (myself included) the replacement of complex inner density with a new kind of self—evolving under the pressure of information overload and the technology of the “instantly available.”

As we are drained of our “inner repertory of dense cultural inheritance,” Foreman concluded, we risk turning into “‘pancake people’—spread wide and thin as we connect with that vast network of information accessed by the mere touch of a button.”

I’m haunted by that scene in 2001. What makes it so poignant, and so weird, is the computer’s emotional response to the disassembly of its mind: its despair as one circuit after another goes dark, its childlike pleading with the astronaut—“I can feel it. I can feel it. I’m afraid”—and its final reversion to what can only be called a state of innocence. HAL’s outpouring of feeling contrasts with the emotionlessness that characterizes the human figures in the film, who go about their business with an almost robotic efficiency. Their thoughts and actions feel scripted, as if they’re following the steps of an algorithm. In the world of 2001, people have become so machinelike that the most human character turns out to be a machine. That’s the essence of Kubrick’s dark prophecy: as we come to rely on computers to mediate our understanding of the world, it is our own intelligence that flattens into artificial intelligence.

A New Battle Is Beginning in Branding for the Web By STEVE LOHR

Hi Gang, Dr. Media back from vacation and ready to continue bringing you interesting info with brilliant commentary, of course.
Below is an article that addresses the ongoing advertising/branding challenge. What is most interesting here is how the conflict between old-tech methods and new-tech demands is being met. The cool item here, for those of you who are interested, is how a company like Microsoft can attempt to take a term in common currency, "mesh," and try to own it by adding a branding term. The implications of this are profound. It means that large corporations can try to own language. The DNA of thought is language. The expression of the unique individual is in his or her language: as art, as poem, as commentary, as gossip.
Here come the thought police?? What do you think?

September 1, 2008
A New Battle Is Beginning in Branding for the Web

To marketers large and small, the Web is a wide open frontier, an unlimited billboard with boundless branding opportunities.

For the empirical proof, look at the filings with the government for new trademarks that, put simply, are brand names.

Applications surged in the dot-com years, peaking in 2000 and then falling sharply for two years, before rising to a record last year of more than 394,000.

Recently, a new front has opened in the Internet branding wars.

It lies beyond putting trademarks on new businesses, Web site addresses and online logos. Now, companies want to slap a brand on still vaguely defined products and services in the uncharted ephemera of cyberspace — the computing cloud, as it has come to be known.

Cloud computing usually refers to Internet services or software that the user accesses through a Web browser on a personal computer, cellphone or other device. The digital service is delivered remotely, from somewhere off in the computing cloud, in the fashion of Google’s Internet search service.

Dell has tried to trademark the term cloud computing itself. But in August, the United States Patent and Trademark Office sent a strong signal that cloud computing cannot be trademarked.

It issued an initial refusal to Dell, which filed its application 18 months ago, when the term was less widely used in industry conversations and marketing.

Dell had passed early steps toward approval, but the office turned it down, after protests from industry experts that cloud computing had become a broadly descriptive term, and not one linked to a single company. Dell can appeal, but that seems unlikely.

In recent years, patents — not trademarks — have been the main focus of intellectual property experts and the courts, especially around the issue of whether patents on software and business methods have become counterproductive, inhibiting innovation.

But some legal experts say trademark issues may take on a higher profile, fueled by the escalating value of brands in general and trademark holders increasingly trying to assert their rights, especially on the Internet.

“Trademark is the sleeping giant of intellectual property,” said Paul Goldstein, a professor at the Stanford law school.

Microsoft, for example, is developing a technology that is intended to synchronize the data on all of a person’s computing devices, even synchronizing it with family members and work colleagues as well, automatically reaching across the cloud.

When Microsoft announced the concept this year, it said the technology would be called Live Mesh. Just what it is and how it may work remains unclear, but Microsoft filed for a trademark on Live Mesh in June, an application that awaits judgment from the Patent and Trademark Office.

Mesh and mesh networking are widely used terms for technology that connects devices.

“This is the challenge for our examiners,” said Lynne G. Beresford, commissioner for trademarks in the Patent and Trademark Office. “With emerging marks in a field that is changing quickly, you have to make a determination about what the common understanding is.”

That challenge, legal experts say, is one of several for trademark policy and practice in the Internet age. Instant communication, aggressive business tactics and an unsettled legal environment, they say, mean that trademark disputes on the Internet will increase in number and intensity.

The first round of trademark conflict on the Internet, focused on cybersquatting, has subsided. Cybersquatters were early profiteers who bought up the Web addresses, or domain names, of well-known trademarked brands, and then tried to charge the companies huge amounts of money to buy them.

In 1999, Congress passed a bill against cybersquatting that allowed companies to sue anyone who, with “a bad faith intent to profit,” buys the domain name of a well-known brand. The same year, the Internet Corporation for Assigned Names and Numbers, a nonprofit oversight agency, established a system for resolving domain name disputes.

The new areas of conflict, according to legal experts, include trademark owners trying to assert their rights to stifle online criticism of their products, and to stop trademarked brands from being purchased as keywords in Internet search advertising.

Early court rulings in keyword cases point to the uncertain legal setting and the international differences in trademark law. In the United States, lawyers say, the initial rulings have tended to allow companies to buy the trademarked brand names of rivals as keywords in search. Ford, for example, can bid on and buy “Toyota,” so that a person typing Toyota as a search term would see a link to Ford’s Web site in the paid-for links on the right hand side of Google’s Web page.

In the United States, that practice has not been interpreted as causing any fundamental consumer confusion. Google also argues that because any bidder can make an offer for any word — Google supplies no list — it is not a user of trademarks. “We are not using keywords, we are not selling keywords, we are selling ad space,” said Terri Chen, Google’s senior trademark counsel.

But in a French court ruling in 2005, Google was enjoined from allowing others to buy as a keyword the trademark brand of a French luxury goods maker, Louis Vuitton. For countries other than the United States, Canada, the United Kingdom and Ireland, Google has a trademark complaint system, so holders can generally prevent their brands from being purchased as keywords by others.

The speed of Internet communication and heightened competition to claim and establish brands have drastically changed trademark tactics over the years. Compare the positioning and pre-emptive moves around cloud computing with the gradual pace of building one of the most valued brands in the world, Microsoft’s Windows.

Personal computer windows and graphical windowing systems were around long before Microsoft announced its plans for a Windows operating system in 1983. The first version was introduced in 1985, and Microsoft did not file for a trademark until 1990. Its application was initially rejected as “merely descriptive.”

But, as so often, Microsoft persevered. It kept investing in advertising, branding and product development. It presented the Patent and Trademark Office with surveys showing people had come to associate the term Windows with Microsoft, and in 1995 the trademark examiners finally agreed.

With its cloud computing project Live Mesh, Microsoft is taking a far faster, more focused approach. It is employing Live, which it uses in other Internet offerings, like Windows Live and Xbox Live, as half of a two-word trademark — or composite mark, in legal terms. “Mesh networking is the generic category, but Live Mesh is Microsoft’s implementation and acts as a source identifier,” said Russell Pangborn, Microsoft’s director for trademarks.

One thing that has been undeniably transformed by computing and the Internet is the trademark office itself. Ms. Beresford, a professed “trademark nerd,” recalled that when she joined the office in 1979, searches for the same or “confusingly similar” trademarks began in the “search room.” The applications and registration documents were kept in wooden cabinets, filed alphabetically.

Trademarked images were kept in separate drawers and grouped into visual categories, she recalled, like “grotesque humans” (the Pillsbury doughboy) and “human body parts” (the Yellow Pages’ walking fingers).

Examining attorneys, Ms. Beresford noted, were issued rubber covers for their index fingers for going through files faster and with fewer paper cuts. The technology tools have been upgraded considerably since then. The work is now done mainly on computers, searching the Web and specialized trademark databases. Eighty-five percent of the office’s 390 examining attorneys work primarily from home.

The search room, Ms. Beresford observed, has “gone the way of the buggy whip.”

Sunday, August 03, 2008

The Trolls Among Us

I wanted to bring this to your attention and compliment Schwartz on his article, and the Times for encouraging this type of work. While it may not be that interesting to most, as a media psychologist, Dr. Media says this is just the tip of the iceberg. What this really points to is the ability of anyone to act out whatever their personal mythology is without being held responsible, or, more importantly, without needing to understand themselves. What does this mean? Anyone can pretend to be powerful, rich, evil, sexy, you name it, and there is no way of knowing without a reference to the fleshworld. This is cool, because it allows people to express their unconscious fantasies. Sometimes these fantasies are not pleasant. What a great opportunity for self-knowledge!

Malwebolence - The World of Web Trolling - NYTimes.com
August 3, 2008

The Trolls Among Us

One afternoon in the spring of 2006,
for reasons unknown to those who knew him, Mitchell Henderson, a
seventh grader from Rochester, Minn., took a .22-caliber rifle down
from a shelf in his parents’ bedroom closet and shot himself in the
head. The next morning, Mitchell’s school assembled in the gym to begin
mourning. His classmates created a virtual memorial on MySpace
and garlanded it with remembrances. One wrote that Mitchell was “an
hero to take that shot, to leave us all behind. God do we wish we could
take it back. . . . ” Someone e-mailed a clipping of Mitchell’s
newspaper obituary to MyDeathSpace.com,
a Web site that links to the MySpace pages of the dead. From
MyDeathSpace, Mitchell’s page came to the attention of an Internet
message board known as /b/ and the “trolls,” as they have come to be
called, who dwell there.

/b/ is the designated “random” board of 4chan.org,
a group of message boards that draws more than 200 million page views a
month. A post consists of an image and a few lines of text. Almost
everyone posts as “anonymous.” In effect, this makes /b/ a panopticon
in reverse — nobody can see anybody, and everybody can claim to speak
from the center. The anonymous denizens of 4chan’s other boards —
devoted to travel, fitness and several genres of pornography — refer to
the /b/-dwellers as “/b/tards.”

Measured in terms of depravity, insularity and traffic-driven
turnover, the culture of /b/ has little precedent. /b/ reads like the
inside of a high-school bathroom stall, or an obscene telephone party
line, or a blog with no posts and all comments filled with slang that
you are too old to understand.

Something about Mitchell Henderson struck the denizens of /b/ as
funny. They were especially amused by a reference on his MySpace page
to a lost iPod.
Mitchell Henderson, /b/ decided, had killed himself over a lost iPod.
The “an hero” meme was born. Within hours, the anonymous multitudes
were wrapping the tragedy of Mitchell’s death in absurdity.

Someone hacked Henderson’s MySpace page and gave him the face of a
zombie. Someone placed an iPod on Henderson’s grave, took a picture and
posted it to /b/. Henderson’s face was appended to dancing iPods,
spinning iPods, hardcore porn scenes. A dramatic re-enactment of
Henderson’s demise appeared on YouTube,
complete with shattered iPod. The phone began ringing at Mitchell’s
parents’ home. “It sounded like kids,” remembers Mitchell’s father,
Mark Henderson, a 44-year-old I.T. executive. “They’d say, ‘Hi, this is
Mitchell, I’m at the cemetery.’ ‘Hi, I’ve got Mitchell’s iPod.’ ‘Hi,
I’m Mitchell’s ghost, the front door is locked. Can you come down and
let me in?’ ” He sighed. “It really got to my wife.” The calls
continued for a year and a half.

In the late 1980s, Internet users adopted the word “troll” to
denote someone who intentionally disrupts online communities. Early
trolling was relatively innocuous, taking place inside of small,
single-topic Usenet groups. The trolls employed what the M.I.T.
professor Judith Donath calls a “pseudo-naïve” tactic, asking stupid
questions and seeing who would rise to the bait. The game was to find
out who would see through this stereotypical newbie behavior, and who
would fall for it. As one guide to trolldom puts it, “If you don’t fall
for the joke, you get to be in on it.”

Today the Internet is much more than esoteric discussion forums. It
is a mass medium for defining who we are to ourselves and to others.
Teenagers groom their MySpace profiles as intensely as their hair;
escapists clock 50-hour weeks in virtual worlds, accumulating gold for
their online avatars. Anyone seeking work or love can expect to be
Googled. As our emotional investment in the Internet has grown, the
stakes for trolling — for provoking strangers online — have risen.
Trolling has evolved from ironic solo skit to vicious group hunt.

“Lulz” is how trolls keep score. A corruption of “LOL” or “laugh out
loud,” “lulz” means the joy of disrupting another’s emotional
equilibrium. “Lulz is watching someone lose their mind at their
computer 2,000 miles away while you chat with friends and laugh,” said
one ex-troll who, like many people I contacted, refused to disclose his
legal identity.

Another troll explained the lulz as a quasi-thermodynamic exchange
between the sensitive and the cruel: “You look for someone who is full
of it, a real blowhard. Then you exploit their insecurities to get an
insane amount of drama, laughs and lulz. Rules would be simple: 1. Do
whatever it takes to get lulz. 2. Make sure the lulz is widely
distributed. This will allow for more lulz to be made. 3. The game is
never over until all the lulz have been had.”

/b/ is not all bad. 4chan has tried (with limited success) to police
itself, using moderators to purge child porn and eliminate calls to
disrupt other sites. Among /b/’s more interesting spawn is Anonymous, a
group of masked pranksters who organized protests at Church of Scientology branches around the world.

But the logic of lulz extends far beyond /b/ to the anonymous
message boards that seem to be springing up everywhere. Two female Yale
Law School students have filed a suit against pseudonymous users who
posted violent fantasies about them on AutoAdmit, a college-admissions
message board. In China, anonymous nationalists are posting death
threats against pro-Tibet activists, along with their names and home
addresses. Technology, apparently, does more than harness the wisdom of
the crowd. It can intensify its hatred as well.

Jason Fortuny might be the closest thing this movement of
anonymous provocateurs has to a spokesman. Thirty-two years old, he
works “typical Clark Kent I.T.” freelance jobs — Web design,
programming — but his passion is trolling, “pushing people’s buttons.”
Fortuny frames his acts of trolling as “experiments,” sociological
inquiries into human behavior. In the fall of 2006, he posted a hoax ad
on Craigslist,
posing as a woman seeking a “str8 brutal dom muscular male.” More than
100 men responded. Fortuny posted their names, pictures, e-mail and
phone numbers to his blog, dubbing the exposé “the Craigslist
Experiment.” This made Fortuny the most prominent Internet villain in
America until November 2007, when his fame was eclipsed by the Megan Meier
MySpace suicide. Meier, a 13-year-old Missouri girl, hanged herself
with a belt after receiving cruel messages from a boy she’d been
flirting with on MySpace. The boy was not a real boy, investigators
say, but the fictional creation of Lori Drew, the mother of one of
Megan’s former friends. Drew later said she hoped to find out whether
Megan was gossiping about her daughter. The story — respectable
suburban wife uses Internet to torment teenage girl — was a media sensation.

Fortuny’s Craigslist Experiment deprived its subjects of more than
just privacy. Two of them, he says, lost their jobs, and at least one,
for a time, lost his girlfriend. Another has filed an
invasion-of-privacy lawsuit against Fortuny in an Illinois court. After
receiving death threats, Fortuny meticulously scrubbed his real address
and phone number from the Internet. “Anyone who knows who and where you
are is a security hole,” he told me. “I own a gun. I have an escape
route. If someone comes, I’m ready.”

While reporting this article, I did everything I could to verify the
trolls’ stories and identities, but I could never be certain. After
all, I was examining a subculture that is built on deception and
delights in playing with the media. If I had doubts about whether
Fortuny was who he said he was, he had the same doubts about me. I
first contacted Fortuny by e-mail, and he called me a few days later.
“I checked you out,” he said warily. “You seem legitimate.” We met in
person on a bright spring day at his apartment, on a forested slope in
Kirkland, Wash., near Seattle. He wore a T-shirt and sweat pants,
looking like an amiable freelancer on a Friday afternoon. He is thin,
with birdlike features and the etiolated complexion of one who works in
front of a screen. He’d been chatting with an online associate about
driving me blindfolded from the airport, he said. “We decided it would
be too much work.”

A flat-screen HDTV dominated Fortuny’s living room, across from a
futon prepped with neatly folded blankets. This was where I would sleep
for the next few nights. As Fortuny picked up his cat and settled into
an Eames-style chair, I asked whether trolling hurt people. “I’m not
going to sit here and say, ‘Oh, God, please forgive me!’ so someone can
feel better,” Fortuny said, his calm voice momentarily rising. The cat
lay purring in his lap. “Am I the bad guy? Am I the big horrible person
who shattered someone’s life with some information? No! This is life.
Welcome to life. Everyone goes through it. I’ve been through horrible
stuff, too.”

“Like what?” I asked. Sexual abuse, Fortuny said. When Jason was 5,
he said, he was molested by his grandfather and three other relatives.
Jason’s mother later told me, too, that he was molested by his
grandfather. The last she heard from Jason was a letter telling her to
kill herself. “Jason is a young man in a great deal of emotional pain,”
she said, crying as she spoke. “Don’t be too harsh. He’s still my son.”

In the days after the Megan Meier story became public, Lori Drew and
her family found themselves in the trolls’ crosshairs. Their personal
information — e-mail addresses, satellite images of their home, phone
numbers — spread across the Internet. One of the numbers led to a
voice-mail greeting with the gleeful words “I did it for the lulz.”
Anonymous malefactors made death threats and hurled a brick through the
kitchen window. Then came the Megan Had It Coming blog. Supposedly
written by one of Megan’s classmates, the blog called Megan a “drama
queen,” so unstable that Drew could not be blamed for her death.
“Killing yourself over a MySpace boy? Come on!!! I mean yeah your fat
so you have to take what you can get but still nobody should kill
themselves over it.” In the third post the author revealed herself as
Lori Drew.

This post received more than 3,600 comments. Fox and CNN debated its
authenticity. But the Drew identity was another mask. In fact, Megan
Had It Coming was another Jason Fortuny experiment. He, not Lori Drew,
Fortuny told me, was the blog’s author. After watching him log onto the
site and add a post, I believed him. The blog was intended, he says, to
question the public’s hunger for remorse and to challenge the
enforceability of cyberharassment laws like the one passed by Megan’s
town after her death. Fortuny concluded that they were unenforceable.
The county sheriff’s department announced it was investigating the
identity of the fake Lori Drew, but it never found Fortuny, who is not
especially worried about coming out now. “What’s he going to sue me
for?” he asked. “Leading on confused people? Why don’t people
fact-check who this stuff is coming from? Why do they assume it’s true?”

Fortuny calls himself “a normal person who does insane things on the
Internet,” and the scene at dinner later on the first day we spent
together was exceedingly normal, with Fortuny, his roommate Charles and
his longtime friend Zach trading stories at a sushi restaurant nearby
over sake and happy-hour gyoza. Fortuny flirted with our waitress,
showing her a cellphone picture of his cat. “He commands you to kill!”
he cackled. “Do you know how many I’ve killed at his command?” Everyone laughed.

Fortuny spent most of the weekend in his bedroom juggling several
windows on his monitor. One displayed a chat room run by Encyclopedia
Dramatica, an online compendium of troll humor and troll lore. It was
buzzing with news of an attack against the Epilepsy Foundation’s Web
site. Trolls had flooded the site’s forums with flashing images and
links to animated color fields, leading at least one photosensitive
user to claim that she had a seizure.

WEEV: the whole posting flashing images to epileptics thing? over the line.

HEPKITTEN: can someone plz tell me how doing something the admins intentionally left enabled is hacking?

WEEV: it’s hacking peoples unpatched brains. we have to draw a moral line somewhere.

Fortuny disagreed. In his mind, subjecting epileptic users to
flashing lights was justified. “Hacks like this tell you to watch out
by hitting you with a baseball bat,” he told me. “Demonstrating these
kinds of exploits is usually the only way to get them fixed.”

“So the message is ‘buy a helmet,’ and the medium is a bat to the head?” I asked.

“No, it’s like a pitcher telling a batter to put on his helmet by
beaning him from the mound. If you have this disease and you’re on the
Internet, you need to take precautions.” A few days later, he wrote and
posted a guide to safe Web surfing for epileptics.

On Sunday, Fortuny showed me an office building that once housed Google programmers, and a low-slung modernist structure where programmers wrote Halo 3,
the best-selling video game. We ate muffins at Terra Bite, a coffee
shop founded by a Google employee where customers pay whatever price
they feel like. Kirkland seemed to pulse with the easy money and
optimism of the Internet, unaware of the machinations of the troll on
the hill.

We walked on, to Starbucks.
At the next table, middle-schoolers with punk-rock haircuts feasted
noisily on energy drinks and whipped cream. Fortuny sipped a
white-chocolate mocha. He proceeded to demonstrate his personal cure
for trolling, the Theory of the Green Hair.

“You have green hair,” he told me. “Did you know that?”

“No,” I said.

“Why not?”

“I look in the mirror. I see my hair is black.”

“That’s uh, interesting. I guess you understand that you have green
hair about as well as you understand that you’re a terrible reporter.”

“What do you mean? What did I do?”

“That’s a very interesting reaction,” Fortuny said. “Why didn’t you
get so defensive when I said you had green hair?” If I were certain
that I wasn’t a terrible reporter, he explained, I would have laughed
the suggestion off just as easily. The willingness of trolling
“victims” to be hurt by words, he argued, makes them complicit, and
trolling will end as soon as we all get over it.

On Monday we drove to the mall. I asked Fortuny how he could troll
me if he so chose. He took out his cellphone. On the screen was a
picture of my debit card with the numbers clearly legible. I had left
it in plain view beside my laptop. “I took this while you were out,” he
said. He pressed a button. The picture disappeared. “See? I just
deleted it.”

The Craigslist Experiment, Fortuny reiterated, brought him troll
fame by accident. He was pleased with how the Megan Had It Coming blog
succeeded by design. As he described the intricacies of his plan —
adding sympathetic touches to the fake classmate, making fake Lori Drew
a fierce defender of her own daughter, calibrating every detail to the
emotional register of his audience — he sounded not so much a
sociologist as a playwright workshopping a set of characters.

“You seem to know exactly how much you can get away with, and you
troll right up to that line,” I said. “Is there anything that can be
done on the Internet that shouldn’t be done?”

Fortuny was silent. In four days of conversation, this was the first time he did not have an answer ready.

“I don’t know,” he said. “I have to think about it.”

Sherrod DeGrippo, a 28-year-old Atlanta native who goes by
the name Girlvinyl, runs Encyclopedia Dramatica, the online troll
archive. In 2006, DeGrippo received an e-mail message from a well-known
band of trolls, demanding that she edit the entry about them on the
Encyclopedia Dramatica site. She refused. Within hours, the aggrieved
trolls hit the phones, bombarding her apartment with taxis, pizzas,
escorts and threats of rape and violent death. DeGrippo, alone and
terrified, sought counsel from a powerful friend. She called Weev.

Weev, the troll who thought hacking the epilepsy site was immoral,
is legendary among trolls. He is said to have jammed the cellphones of
daughters of C.E.O.’s and demanded ransom from their fathers; he is
also said to have trashed his enemies’ credit ratings. Better
documented are his repeated assaults on LiveJournal, an online diary
site where he himself maintains a personal blog. Working with a group
of fellow hackers and trolls, he once obtained access to thousands of
user accounts.

I first met Weev in an online chat room that I visited while staying
at Fortuny’s house. “I hack, I ruin, I make piles of money,” he
boasted. “I make people afraid for their lives.” On the phone that
night, Weev displayed a misanthropy far harsher than Fortuny’s.
“Trolling is basically Internet eugenics,” he said, his voice pitching
up like a jet engine on the runway. “I want everyone off the Internet.
Bloggers are filth. They need to be destroyed. Blogging gives the
illusion of participation to a bunch of retards. . . . We need to put
these people in the oven!”

I listened for a few more minutes as Weev held forth on the Federal
Reserve and about Jews. Unlike Fortuny, he made no attempt to reconcile
his trolling with conventional social norms. Two days later, I flew to
Los Angeles and met Weev at a train station in Fullerton, a sleepy
bungalow town folded into the vast Orange County grid. He is in his
early 20s with full lips, darting eyes and a nest of hair falling back
from his temples. He has a way of leaning in as he makes a point,
inviting you to share what might or might not be a joke.

As we walked through Fullerton’s downtown, Weev told me about his
day — he’d lost $10,000 on the commodities market, he claimed — and
summarized his philosophy of “global ruin.” “We are headed for a
Malthusian crisis,” he said, with professorial confidence. “Plankton
levels are dropping. Bees are dying. There are tortilla riots in
Mexico, the highest wheat prices in 30-odd years.” He paused. “The
question we have to answer is: How do we kill four of the world’s six
billion people in the most just way possible?” He seemed excited to
have said this aloud.

Ideas like these bring trouble. Almost a year ago, while in the
midst of an LSD-and-methamphetamine bender, a longer-haired,
wilder-eyed Weev gave a talk called “Internet Crime” at a San Diego
hacker convention. He expounded on diverse topics like hacking the
Firefox browser, online trade in illegal weaponry and assassination
markets — untraceable online betting pools that pay whoever predicts
the exact date of a political leader’s demise. The talk led to two
uncomfortable interviews with federal agents and the decision to shed
his legal identity altogether. Weev now espouses “the ruin lifestyle” —
moving from condo to condo, living out of three bags, no name, no
possessions, all assets held offshore. As a member of a group of
hackers called “the organization,” which, he says, brings in upward of
$10 million annually, he says he can wreak ruin from anywhere.

We arrived at a strip mall. Out of the darkness, the coffinlike
snout of a new Rolls Royce Phantom materialized. A flying lady winked
on the hood. “Your bag, sir?” said the driver, a blond kid in a suit
and tie.

“This is my car,” Weev said. “Get in.”

And it was, for that night and the next, at least. The car’s plush
chamber accentuated the boyishness of Weev, who wore sneakers and jeans
and hung from a leather strap like a subway rider. In the front seat
sat Claudia, a pretty college-age girl.

I asked about the status of Weev’s campaign against humanity. Things
seemed rather stable, I said, even with all this talk of trolling and hacking.

“We’re waiting,” Weev said. “We need someone to show us the way. The messiah.”

“How do you know it’s not you?” I asked.

“If it were me, I would know,” he said. “I would receive a sign.”

Zeno of Elea, Socrates and Jesus, Weev said, are his all-time
favorite trolls. He also identifies with Coyote and Loki, the trickster
gods, and especially with Kali, the Hindu goddess of destruction. “Loki
was a hacker. The other gods feared him, but they needed his tools.”

“I was just thinking of Kali!” Claudia said with a giggle.

Over a candlelit dinner of tuna sashimi, Weev asked if I would
attribute his comments to Memphis Two, the handle he used to troll
Kathy Sierra, a blogger. Inspired by her touchy response to online
commenters, Weev said he “dropped docs” on Sierra, posting a fabricated
narrative of her career alongside her real Social Security number and
address. This was part of a larger trolling campaign against Sierra,
one that culminated in death threats. Weev says he has access to
hundreds of thousands of Social Security numbers. About a month later,
he sent me mine.

Weev, Claudia and I hung out in Fullerton for two more nights,
always meeting and saying goodbye at the train station. I met their
friend Kate, who has been repeatedly banned from playing Xbox Live for
racist slurs, which she also enjoys screaming at white pedestrians.
Kate checked my head for lice and kept calling me “Jew.” Relations have
since warmed. She now e-mails me puppy pictures and wants the names of
fun places for her coming visit to New York. On the last night, Weev
offered to take me to his apartment if I wore a blindfold and left my
cellphone behind. I was in, but Claudia vetoed the idea. I think it was
her apartment.

Does free speech tend to move toward the truth or away from
it? When does it evolve into a better collective understanding? When
does it collapse into the Babel of trolling, the pointless and eristic
game of talking the other guy into crying “uncle”? Is the effort to
control what’s said always a form of censorship, or might certain rules
be compatible with our notions of free speech?

One promising answer comes from the computer scientist Jon Postel,
now known as “god of the Internet” for the influence he exercised over
the emerging network. In 1981, he formulated what’s known as Postel’s
Law: “Be conservative in what you do; be liberal in what you accept
from others.” Originally intended to foster “interoperability,” the
ability of multiple computer systems to understand one another,
Postel’s Law is now recognized as having wider applications. To build a
robust global network with no central authority, engineers were
encouraged to write code that could “speak” as clearly as possible yet
“listen” to the widest possible range of other speakers, including
those who do not conform perfectly to the rules of the road. The human
equivalent of this robustness is a combination of eloquence and
tolerance — the spirit of good conversation. Trolls embody the opposite
principle. They are liberal in what they do and conservative in what
they construe as acceptable behavior from others. You, the troll says,
are not worthy of my understanding; I, therefore, will do everything I
can to confound you.
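The article doesn't show it, but Postel's Law maps directly onto code. A minimal sketch of the principle (the header format and function names here are illustrative, not from any particular protocol): the parser is liberal, tolerating odd casing, stray whitespace and a missing space after the colon, while the writer is conservative, always emitting one canonical form.

```python
def parse_header(line: str) -> tuple[str, str]:
    """Liberal reader: tolerate sloppy input from other implementations."""
    name, _, value = line.partition(":")
    return name.strip().lower(), value.strip()

def emit_header(name: str, value: str) -> str:
    """Conservative writer: always produce the canonical form."""
    canonical = "-".join(part.capitalize() for part in name.split("-"))
    return f"{canonical}: {value}"

# Sloppy variants from three different "speakers" all normalize the same way.
for line in ["content-type:text/html",
             "  CONTENT-TYPE :  text/html",
             "Content-Type: text/html"]:
    name, value = parse_header(line)
    print(emit_header(name, value))  # each prints "Content-Type: text/html"
```

A troll, in this analogy, inverts both halves: strict about what it accepts from you, unconstrained in what it sends back.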

Why inflict anguish on a helpless stranger? It’s tempting to blame
technology, which increases the range of our communications while
dehumanizing the recipients. Cases like An Hero and Megan Meier
presumably wouldn’t happen if the perpetrators had to deliver their
messages in person. But while technology reduces the social barriers
that keep us from bedeviling strangers, it does not explain the initial
trolling impulse. This seems to spring from something ugly — a
destructive human urge that many feel but few act upon, the ambient
misanthropy that’s a frequent ingredient of art, politics and, most of
all, jokes. There’s a lot of hate out there, and a lot to hate as well.

So far, despite all this discord, the Internet’s system of civil
machines has proved more resilient than anyone imagined. As early as
1994, the head of the Internet Society warned that spam “will destroy
the network.” The news media continually present the online world as a
Wild West infested with villainous hackers, spammers and pedophiles.
And yet the Internet is doing very well for a frontier town on the
brink of anarchy. Its traffic is expected to quadruple by 2012. To say
that trolls pose a threat to the Internet at this point is like saying
that crows pose a threat to farming.

That the Internet is now capacious enough to host an entire
subculture of users who enjoy undermining its founding values is yet
another symptom of its phenomenal success. It may not be a bad thing
that the least-mature users have built remote ghettos of anonymity
where the malice is usually intramural. But how do we deal with cases
like An Hero, epilepsy hacks and the possibility of real harm being
inflicted on strangers?

Several state legislators have recently proposed cyberbullying
measures. At the federal level, Representative Linda Sánchez, a
Democrat from California, has introduced the Megan Meier Cyberbullying
Prevention Act, which would make it a federal crime to send any
communications with intent to cause “substantial emotional distress.”
In June, Lori Drew pleaded not guilty to charges that she violated
federal fraud laws by creating a false identity “to torment, harass,
humiliate and embarrass” another user, and by violating MySpace’s terms
of service. But hardly anyone bothers to read terms of service, and
millions create false identities. “While Drew’s conduct is immoral, it
is a very big stretch to call it illegal,” wrote the online-privacy
expert Prof. Daniel J. Solove on the blog Concurring Opinions.

Many trolling practices, like prank-calling the Hendersons and
intimidating Kathy Sierra, violate existing laws against harassment and
threats. The difficulty is tracking down the perpetrators. In order to
prosecute, investigators must subpoena sites and Internet service
providers to learn the original author’s IP address, and from there,
his legal identity. Local police departments generally don’t have the
means to follow this digital trail, and federal investigators have
their hands full with spam, terrorism, fraud and child pornography.
But even if we had the resources to aggressively prosecute trolls,
would we want to? Are we ready for an Internet where law enforcement
keeps watch over every vituperative blog and backbiting comments
section, ready to spring at the first hint of violence? Probably not.
All vigorous debates shade into trolling at the perimeter; it is next
to impossible to excise the trolling without snuffing out the debate.

If we can’t prosecute the trolling out of online anonymity, might
there be some way to mitigate it with technology? One solution that has
proved effective is “disemvoweling” — having message-board
administrators remove the vowels from trollish comments, which gives
trolls the visibility they crave while muddying their message. A
broader answer is persistent pseudonymity, a system of nicknames that
stay the same across multiple sites. This could reduce anonymity’s
excesses while preserving its benefits for whistle-blowers and overseas
dissenters. Ultimately, as Fortuny suggests, trolling will stop only
when its audience stops taking trolls seriously. “People know to be
deeply skeptical of what they read on the front of a supermarket
tabloid,” says Dan Gillmor, who directs the Center for Citizen Media.
“It should be even more so with anonymous comments. They shouldn’t
start off with a credibility rating of, say, 0. It should be more like
negative-30.”

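The disemvoweling trick is simple enough to sketch in a few lines. This is a minimal illustration of the idea as described above, not any particular message board's implementation:

```python
import re

def disemvowel(comment: str) -> str:
    """Strip the vowels from a trollish comment: the post stays visible,
    so the troll gets the attention he craves, but the message itself
    is muddied past easy reading."""
    return re.sub(r"[aeiouAEIOU]", "", comment)

# A moderator would apply this to a flagged comment before display:
print(disemvowel("trolling"))
```

The appeal of the approach is that it is reversible in the reader's head with effort, so it punishes the troll without the censorship fights that outright deletion invites.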
Of course, none of these methods will be fail-safe as long as
individuals like Fortuny construe human welfare the way they do. As we
discussed the epilepsy hack, I asked Fortuny whether a person is
obliged to give food to a starving stranger. No, Fortuny argued; no one
is entitled to our sympathy or empathy. We can choose to give or
withhold them as we see fit. “I can’t push you into the fire,” he
explained, “but I can look at you while you’re burning in the fire and
not be required to help.” Weeks later, after talking to his friend
Zach, Fortuny began considering the deeper emotional forces that drove
him to troll. The theory of the green hair, he said, “allows me to find
people who do stupid things and turn them around. Zach asked if I
thought I could turn my parents around. I almost broke down. The idea
of them learning from their mistakes and becoming people that I could
actually be proud of . . . it was overwhelming.” He continued: “It’s
not that I do this because I hate them. I do this because I’m trying to
save them.”

Weeks before my visit with Fortuny, I had lunch with “moot,” the
young man who founded 4chan. After running the site under his pseudonym
for five years, he recently revealed his legal name to be Christopher
Poole. At lunch, Poole was quick to distance himself from the excesses
of /b/. “Ultimately the power lies in the community to dictate its own
standards,” he said. “All we do is provide a general framework.” He was
optimistic about Robot9000, a new 4chan board with a combination of
human and machine moderation. Users who make “unoriginal” or “low
content” posts are banned from Robot9000 for periods that lengthen with
each offense.
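Robot9000's actual moderation code is not public, but the mechanism Poole describes can be sketched roughly: reject any post identical to one already seen, and mute the offender for a period that grows with each offense. In this toy model the doubling schedule and the normalization step are assumptions, not 4chan's real policy:

```python
import hashlib

class Robot9000:
    """Toy model of originality-enforced moderation: a post that
    duplicates any earlier post is rejected, and the offender's
    ban doubles with every repeat offense."""

    BASE_BAN_SECONDS = 2  # assumed starting penalty; real values unknown

    def __init__(self):
        self.seen = set()   # fingerprints of every accepted post
        self.offenses = {}  # user -> count of unoriginal posts

    def submit(self, user: str, post: str):
        # Normalize lightly so trivial variations still count as repeats
        digest = hashlib.sha256(post.strip().lower().encode()).hexdigest()
        if digest in self.seen:
            self.offenses[user] = self.offenses.get(user, 0) + 1
            ban = self.BASE_BAN_SECONDS * 2 ** (self.offenses[user] - 1)
            return (False, ban)  # rejected; banned for `ban` seconds
        self.seen.add(digest)
        return (True, 0)         # accepted

board = Robot9000()
print(board.submit("anon1", "I'll start, e2 to e4"))  # accepted
print(board.submit("anon2", "I'll start, e2 to e4"))  # rejected, short ban
print(board.submit("anon2", "I'll start, e2 to e4"))  # rejected, longer ban
```

The "machine" half of the moderation is just this originality check; the human half still has to judge what counts as "low content."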

The posts on Robot9000 one morning were indeed far more substantive
than those on /b/. With the cyborg moderation system silencing the trolls, 4chan
had begun to display signs of linearity, coherence, a sense of
collective enterprise. It was, in other words, robust. The anonymous
hordes swapped lists of albums and novels; some had pretty good taste.
Somebody tried to start a chess game: “I’ll start, e2 to e4,” which
quickly devolved into riffage with moves like “Return to Sender,” “From
Here to Infinity,” “Death to America” and a predictably indecent
checkmate maneuver.

Shortly after 8 a.m., someone asked this:

“What makes a bad person? Or a good person? How do you know if you’re a bad person?”

Which prompted this:

“A good person is someone who follows the rules. A bad person is someone who doesn’t.”

And this:

“you’re breaking my rules, you bad person”

There were echoes of antiquity:

“good: pleasure; bad: pain”

“There is no morality. Only the right of the superior to rule over the inferior.”

And flirtations with postmodernity:

“good and bad are subjective”

“we’re going to turn into wormchow before the rest of the universe even notices.”

Books were prescribed:

“read Kant, JS Mill, Bentham, Singer, etc. Noobs.”

And then finally this:

“I’d say empathy is probably a factor.”