Saturday, October 18, 2008

Welcome To The New World of Distribution

Hi folks, this one is especially for all you media makers. Peter Broderick's article from Indiewire on the state of distribution, "old" vs. "new." Well stated, and a more detailed commentary than Mark Gill's comments that Dr. Media covered in an earlier blog. Good stuff. See the chart; simplistic, but it sums it up.

FIRST PERSON | Peter Broderick: "Welcome To The New World of Distribution," Part 1

Welcome to the New World of Distribution. Many filmmakers are
emigrating from the Old World, where they have little chance of
succeeding. They are attracted by unprecedented opportunities and the
freedom to shape their own destiny. Life in the New World requires them
to work harder, be more tenacious, and take more risks. There are
daunting challenges and no guarantees of success. But this hasn't
stopped more and more intrepid filmmakers from exploring uncharted
territory and staking claims.

Before the discovery of the New World, the Old
World of Distribution reigned supreme. It is a hierarchical realm where
filmmakers must petition the powers that be to grant them distribution.
Independents who are able to make overall deals are required to give
distributors total control of the marketing and distribution of their
films. The terms of these deals have gotten worse and few filmmakers
end up satisfied.

All is not well for companies and filmmakers in what I call the Old World of Distribution. At Film Independent's Film Financing Conference, Mark Gill vividly described "the ways the independent film business is in trouble" in his widely read and discussed keynote.
Mark listed the companies and divisions that have been shut down or are
teetering on the brink of bankruptcy, noted that five others are in
"serious financial peril," and said that ten independent film
financiers may soon "exit the business." Mark made a persuasive case
that "the sky really is falling... because the accumulation of bad news
is kind of awe-inspiring." While he doesn't expect that the sky will
"hit the ground everywhere," he warned "it will feel like we just
survived a medieval plague. The carnage and the stench will be overwhelming."

Mark's keynote focused on the distributors, production companies,
studio specialty divisions, and foreign sales companies that dominate
independent film in the Old World. Mark has many years of experience in
this world. He was President of Miramax Films, then head of Warner Independent Pictures, and is now CEO of The Film Department. He sees things from the perspective of a seasoned Old World executive.

I see things from the filmmaker's perspective. For the past 11
years, I have been helping filmmakers maximize revenues, get their
films seen as widely as possible, and launch or further their careers.
From 1997 until 2002, I experienced the deteriorating state of the Old
World of Distribution as head of IFC's Next Wave Films.
After the company closed, I discovered the New World of Distribution in
its formative stages. A few directors had already gotten impressive
results by splitting up their rights and selling DVDs directly from
their websites.

Filmmakers started asking me to advise them on distribution, and, before I knew it, I was a "distribution strategist"
working with independents across the country and around the globe.
Since late 2002, I have consulted with more than 500 filmmakers. While
some have taken traditional paths in the Old World, many more have
blazed trails in the new one. I've learned from their successes and
failures and had the opportunity to share these lessons with other
filmmakers, who then have been able to go further down these trails. It
has been very exciting to be able to participate in the building of the
New World, where the old rules no longer apply.

Many of the rulers of the Old World continue to look backwards.
Having spent their entire careers in this realm, played by its rules
and succeeded, they can't see past the limits of their experience. For
them, the Old World is the known world, which they refer to as "the
film business." They explain away the serious problems facing the Old
World by citing the film glut, higher marketing costs, mediocre films,
and the historically cyclical nature of the industry. They appear to
believe that everything will be just fine with enough discipline and
patience--if fewer, better films are made, costs are controlled, and
they can hold out until the next upturn.

Many of these executives seem unaware of the larger structural
changes threatening their world. They recognize that video-on-demand
and digital downloads will become more significant revenue streams but
seem confident that they can incorporate them into their traditional
distribution model. These executives do not understand the fundamental
importance of the internet or its disruptive power. By enabling
filmmakers in the New World to reach audiences directly and
dramatically reducing their distribution costs, it empowers them to
keep control of their "content."

The Old World executives who do acknowledge the New World can be as
dismissive as record industry executives were when they first noticed
the internet. Their usual condescending response is the internet may
work for "little" films with "niche" audiences. After admitting that
the internet represents added competition for eyeballs, they are quick
to point out that little money is currently being made from digital
downloads or online advertising.

Notable successes in the New World represent the shape of things to
come. Several filmmakers have each made more than one million dollars
selling their films directly from their websites. Other filmmakers have
begun raising money online. During 10 days of internet fundraising, Robert Greenwald attracted $385,000 in contributions for his documentary "Iraq for Sale."

Arin Crumley and Susan Buice built awareness for their feature "Four Eyed Monsters" through a series of video podcasts. They then made their film available for free on YouTube and MySpace, where it was viewed over a million times. Arin and Susan made money through shared ad revenues and Spout.com sign-ups, and then snagged a deal with IFC for domestic television and home video distribution. Wayne Wang will follow in their footsteps when he premieres his new feature "The Princess of Nebraska" on YouTube October 17th.

The power of the internet was also demonstrated by the remarkably successful documentary, "The Secret."
During the first stage of its release, "The Secret" could be streamed
or purchased at the film's website, but was not available in theaters,
on television, in stores, or on Amazon. During the next stage, the book was launched by Simon & Schuster
in bookstores and online. After the book shot to the top of the
bestseller list, "The Secret" DVD was finally made available in retail
stores and on Amazon. Over 2 million DVDs were sold during the first
twelve months of its release.

The chart above illustrates the essential differences between Old and New World Distribution.

Here are ten guiding principles of New World distribution:

1. GREATER CONTROL - Filmmakers retain overall control of
their distribution, choosing which rights to give distribution partners
and which to retain. If filmmakers hire a service deal company or a
booker to arrange a theatrical run, they control the marketing
campaign, spending, and the timing of their release. In the OW (Old
World), a distributor that acquires all rights has total control of
distribution. Filmmakers usually have little or no influence on key
marketing and distribution decisions.

2. HYBRID DISTRIBUTION - Filmmakers split up their rights,
working with distribution partners in certain sectors and keeping the
right to make direct sales. They can make separate deals for: retail
home video, television, educational, nontheatrical, and VOD, as well as
splitting up their digital rights. They also sell DVDs from their
websites and at screenings, and may make digital downloads available
directly from their sites. In the OW, filmmakers make overall deals,
giving one company all their rights (now known or ever to be dreamed
up) for as long as 25 years.

3. CUSTOMIZED STRATEGIES - Filmmakers design creative
distribution strategies customized to their film's content and target
audiences. They can begin outreach to audiences and potential
organizational partners before or during production. They often ignore
traditional windows, selling DVDs from their websites before they are
available in stores, sometimes during their theatrical release, and
even at festivals. Filmmakers are able to test their strategies
step-by-step, and modify them as needed. In the OW, distribution plans
are much more formulaic and rigid.

4. CORE AUDIENCES - Filmmakers target core audiences. Their
priority is to reach them effectively, and then hopefully cross over to
a wider public. They reach core audiences directly both online and
offline, through websites, mailing lists, organizations, and
publications. In the OW, many distributors market to a general
audience, which is highly inefficient and more and more expensive.

Notable exceptions, Fox Searchlight and Bob Berney, have demonstrated how effective highly targeted marketing can be. "Napoleon Dynamite" first targeted nerds, "The Passion of the Christ" began with evangelicals, and "My Big Fat Greek Wedding" started with Greek Americans. Building on its original base, each of
these films was then able to significantly expand and diversify its audience.

5. REDUCING COSTS - Filmmakers reduce costs by using the
internet and by spending less on traditional print, television, and
radio advertising. While four years ago a five-city theatrical service
deal cost $250,000 - $300,000, today comparable service deals can cost
half that or even less. In the OW, marketing costs have risen.

6. DIRECT ACCESS TO VIEWERS - Filmmakers use the internet to reach audiences directly. The makers of the motorcycle-racing documentary, "Faster,"
used the web to quickly and inexpensively reach motorcycle fans around
the world. They pulled off an inspired stunt at the Cannes Film
Festival, which generated international coverage and widespread
awareness among fans. This sparked lucrative DVD sales first from the
website and then in retail stores. In the OW, filmmakers only have
indirect access to audiences through distributors.

7. DIRECT SALES - Filmmakers make much higher margins on
direct sales from their websites and at screenings than they do through
retail sales. They can make as much as $23 profit on a $24.95 website
sale (plus $4.95 for shipping and handling). A retail sale of the same
DVD only nets $2.50 via a typical 20% royalty video deal (this arithmetic is sketched in the worked example after this list). If filmmakers
sell an educational copy from their websites to a college or university
for $250 (an average educational price), they can net $240. Direct
sales to consumers provide valuable customer data, which enables
filmmakers to make future sales to these buyers. They can sell other
versions of a film, the soundtrack, books, posters, and t-shirts. In
the OW, filmmakers are not permitted to make direct sales, have no
access to customer data, and have no merchandising rights.

8. GLOBAL DISTRIBUTION - Filmmakers are now making their
films available to viewers anywhere in the world. Supplementing their
deals with distributors in other countries, they sell their films to
consumers in unsold territories via DVD or digital download directly
from their websites. For the first time, filmmakers are aggregating
audiences across national boundaries. In the OW, distribution is
territory by territory, and most independent films have little or no
foreign distribution.

9. SEPARATE REVENUE STREAMS - Filmmakers limit
cross-collateralization and accounting problems by splitting up their
distribution rights. All revenues from sales on their websites come
directly to them or through the fulfillment company they've hired to
store and ship DVDs. By separating the revenues from each distribution
partner, filmmakers prevent expenses from one distribution channel
being charged against revenues from another. This makes accounting
simpler and more transparent. In an OW overall deal, all revenues and
all expenses are combined, making monitoring revenues much more difficult.

10. TRUE FANS - Filmmakers connect with viewers online and at
screenings, establish direct relationships with them, and build core
personal audiences. They ask for their support, making it clear that
DVD purchases from the website will help them break even and make more
movies. Every filmmaker with a website has the chance to turn visitors
into subscribers, subscribers into purchasers, and purchasers into true
fans who can contribute to new productions. In the OW, filmmakers do
not have direct access to viewers.
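
To make the margin arithmetic in principle 7 concrete, here is a minimal Python sketch using the article's example figures. The $24.95 website price, $250 educational price, and 20% royalty rate come from the text; the unit cost, the $12.50 wholesale basis for the royalty, and the function names are illustrative assumptions chosen to reproduce the article's roughly $23, $2.50, and $240 figures, not numbers Broderick supplies.

# Margin arithmetic from principle 7, as a quick sanity check.
DVD_PRICE = 24.95        # website price; the $4.95 shipping is billed separately
WHOLESALE_PRICE = 12.50  # assumed wholesale price, roughly half of retail
EDU_PRICE = 250.00       # average educational price per the article
ROYALTY_RATE = 0.20      # typical retail home video royalty
UNIT_COST = 1.95         # assumed cost to press and package one DVD

def direct_sale_net(price: float, unit_cost: float = UNIT_COST) -> float:
    """Net to the filmmaker on a sale made from their own website."""
    return price - unit_cost

def retail_sale_net(wholesale: float = WHOLESALE_PRICE,
                    royalty: float = ROYALTY_RATE) -> float:
    """Net to the filmmaker when the same DVD sells through retail."""
    return wholesale * royalty

if __name__ == "__main__":
    print(f"direct website sale: ${direct_sale_net(DVD_PRICE):.2f}")         # $23.00
    print(f"retail royalty sale: ${retail_sale_net():.2f}")                  # $2.50
    print(f"direct educational:  ${direct_sale_net(EDU_PRICE, 10.00):.2f}")  # $240.00

The exact split will vary with fulfillment costs and deal terms; the point is the order-of-magnitude gap between the direct and retail channels.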

(c) 2008 Peter Broderick

Monday, October 13, 2008

A Profile of Online Profiles - By the Numbers Blog - NYTimes.com

Hi, well here you go, if you missed it: a report by RapLeaf, reviewed by Blow of the NYT, talking about online social media usage. There are the usual questions about methodology and verification (how do you know who's telling the truth if you don't interview them directly?), but let's set those aside for the moment.
This study's most interesting references are to the extent to which people lie about things online. That would certainly make it hard to know what one was talking about, now wouldn't it? Still, if we accept what this study says, more women than men are utilizing these social media, with women using them for relationship maintenance whereas men use them for business as opposed to personal relationships. Well, it sounds like it's all about relationships however you cut it. Also, most interestingly, the conclusion, based on what in-depth data I don't know, is that men do transactions while women do relationships. Oh really, we could have concluded that without a study, couldn't we? Men are from Mars, Women are from Venus; of course, he didn't do any real research either.
Dr. Media says it's about time we started to look at these social media; however, if we really want to begin to understand them, let's do some real research.

A Profile of Online Profiles - By the Numbers Blog - NYTimes.com
September 9, 2008, 3:06 pm
A Profile of Online Profiles
By Charles M. Blow

I recently created a Facebook account. My kids thought it was hysterical. They said that I was too old. I’m only 38, but as far as they are concerned, Moses was my best friend in kindergarten.

Being a numbers guy, this got me interested in procuring hard data on social network users…and their behavioral traits while logged on. Here is some of what I found:

1. GENDER: According to a RapLeaf study of 49.3 million people released in July, 20 percent more women than men used social networks (this surprised me). The biggest disparity was for people under 25. In my age range, 35 to 40, men outnumbered women (see chart above).

According to an April study by RapLeaf, men use social networking more for business and women more for socializing. From the report:

“Men tend to be more transactional and less relationship building when it comes to their friends on social networks. Women tend to have slightly more friends on average.”

2. BEST “HANDLES”: When it came to dating sites, things really got interesting. In April, The Times of London reported on a study by Dr. Monica Whitty, “a lecturer in cyber-psychology,” which revealed the names or “handles” that garnered the most numerous responses among online daters. Here’s what it said:

“Playful and flirtatious names such as “fun2bwith” or “i’msweet” were ranked top by both men and women daters as those they would most like to contact. Physical descriptors such as “cutie” or “blueeyes” were close behind. ‘These names suggest an outgoing or fun nature, or clarify the user’s positive physical appearance,’ said Dr Monica Whitty.”

But, there seemed to be some gender imbalances in the names:

“However she advised female lonely hearts to avoid screen names which attempt to be classy, or show how clever they are. Male daters said they would be less likely to contact screen names such as ‘wellread’ or ‘welleducated,’ although the study found women were more drawn to names that suggested men were cultured. ‘Less flirtatious names may be more appealing to women because they are wary of men who might be using the site to find one-night stands rather than long-term relationships,’ Dr Whitty said.”

3. LYING: According to a study entitled "Separating Fact From Fiction: An Examination of Deceptive Self-Presentation in Online Dating Profiles" that was published this year in the Personality and Social Psychology Bulletin, there is quite a bit of lying going on in online profiles. And, men lie more than women. Shocker!

It also turns out that people online are more accepting of some lies than others. From the study:

“Participants believed that lying about relationship information is less socially acceptable than lying about any other category. … Men considered it more acceptable than women to lie about their social status … [and] found it more acceptable than women to lie about their occupation, education and marginally about their relationship status.”

Below are some graphs from the report. Note how almost all women understate their weight and most men overstate their height. Typical.

Thursday, October 09, 2008

KMWorld.com: Now, everything is fragmented

Hi gang, been a while; apparently my Blogger glitched and my posts haven't been getting posted, which I just discovered. Oh well, I will catch you up.
In this interesting missive Snowden, taking off from Dave Weinberger's book Everything is Miscellaneous, argues that everything is fragmented. This can be true; however, it depends on how one sees it. I understand his technical commentary, but allow me to give you a psychological perspective. Fragmentation is a perception from the POV of one who assumes the previous organizational model was "truth." Liberating the mind and information from an old model so they can organize themselves in new ways, or more appropriately allowing information to be rearranged in new ways, is a radical way to see this event. This is how new ways of thinking, acting, designing, relating, and communicating emerge, via the process of falling apart; the same holds for people. This process is not always pleasant or joyful, but it is effective. Think about falling in love; think about falling out of love. How does that happen?
See my future book, Futureself, for the answer to that one. A hint, though: it has to do with the gap between who we are, who we think we are, and who we would like to be, our Personal Mythology.
Think of fragmentation as the liberation of mind, the breaking down of outmoded models, and an opportunity for invention.

KMWorld.com: Now, everything is fragmented
Now, everything is fragmented
By Dave Snowden - Posted May 1, 2008

I used the phrase "everything is fragmented" for the first time last year at KMWorld & Intranets in San Jose. I was picking up on the title of Dave Weinberger’s useful book Everything is Miscellaneous. Dave dealt with the shift from hierarchical taxonomies to the free form tagging of social computing. I wanted to build on that by pointing to the shift during the life span of knowledge management from the "chunked" material of case studies and best-practice documents to the unstructured, fragmented and finely granular material that pervades the blogosphere. So when I was asked to contribute this column to KMWorld magazine, it seemed an appropriate title; it allows me to talk about not only trends in technology but also social issues, the scientific use of narrative, and to fire off the odd invective about over-constrained and over-controlled systems.

So what do I mean by the idea of fragmentation? Well, it’s simple really: The more you structure material, the more you summarize (either as an editor or using technology), the more you make material specific to a context or time, the less utility that material has as things change. For years now I have asked this question at conferences around the world: Faced with an intractable problem, do you go and draw down best practice from your company’s knowledge management system, or do you go and find eight or nine people you know and trust with relevant experience and listen to their stories?

With the odd exception (generally IT managers who have just spent a few million dollars putting a best-practice system in and think people should use it), everyone goes for the stories. So why for the last decade and more have we focused on chunking up best practice? These days I add a few references to the way I and others use blogs to link and connect to insight and learning. Increasingly unstructured material, blended in unexpected ways, provides a richer source of knowledge.

Over the last decade as I have worked on homeland security, we have had the chance to run some experiments that show that raw field intelligence has more utility over longer periods of time than intelligence reports written at a specific time and place. In other experiments, we have demonstrated that narrative assessment of a battlefield picks up more weak signals (those things that after the event you wished you had paid attention to) than analytical structured thinking.

I think there are two reasons for those findings. First, we live in a world subject to constant change, and it’s better to blend fragments at the time of need than attempt to anticipate all needs. We are moving from attempting to anticipate the future to creating an attitude and capability of anticipatory awareness. Second, we are homo sapiens at least in part because we were first homo narrans: the storytelling ape. Dealing with anecdotal material from multiple sources and creating our own stories in turn has been a critical part of our evolutionary development.

The free flow of the blogosphere, ad hoc collaboration, Facebook and many other tools work because they conform with the patterns of expectation that arise from our evolutionary uncertainty. Have you ever heard anyone ask Wikipedia or the blogosphere, "How do we create a knowledge sharing culture?" No, but when I visit the knowledge management practitioners in organizations around the world, it is the dominant question. It’s not natural to chunk up material, to make it context specific; it is natural to share, blend and create fragmented material based on thoughts and reflections as we carry out tasks or engage in social interaction.

The big problem for the knowledge and information management functions in an organization is that their governance structures were developed in an earlier, more ordered time when we focused on transaction systems for accounting and process. The essence of such systems is to remove ambiguity; the evolutionary pressure of natural human knowledge exchange is to embrace ambiguity. Narrative, social computing, the open source movement are all comfortable with ambiguity, embrace it and use it. Organizations need to do the same, but the old patterns of control persist beyond their natural utility.

How we do this, what prejudices and difficulties we have to overcome to achieve this change, will be the theme of this column over the months. How can we use social computing within a corporate environment when we don’t have millions of participants? What is the relation between the formal transaction systems and this new fragmented world? Above all, how do we manage necessary uncertainty?

Is Google making us stupid?

Hi, I've been out of pocket for a while, but I have been meaning to respond to this little ditty by Carr, who also thinks he can tell us what the Internet is doing to our brains, even though by his own admission he has absolutely no empirical evidence for any of his conclusions. I love it.
The topic of what we are doing to ourselves, our brains, our relationships, etc., with the emergence of the Internet is indeed a fundamental question, and it should be. How about we do some RESEARCH and find out? Carr is not alone in his curiosity. I spoke with Charlene Li of Forrester at her book party for her well-researched book GROUNDSWELL, and she referred me to ONE social science researcher whom she was familiar with who was looking into the impact of social media.
I would like to point out that, going back to the argument represented in Plato's writings, the question was whether or not writing would destroy our ability to remember. Well, the verdict may still be out on that one, but you can read about it on the Internet, I mean the library.
Dr. Media says: have fun rotting your brains and expanding your minds.

The Atlantic Online | July/August 2008 | Is Google Making Us Stupid? | Nicholas Carr

What the Internet is doing to our brains

by Nicholas Carr

"Dave, stop. Stop, will you? Stop, Dave. Will you stop, Dave?” So the supercomputer HAL pleads with the implacable astronaut Dave Bowman in a famous and weirdly poignant scene toward the end of Stanley Kubrick’s 2001: A Space Odyssey. Bowman, having nearly been sent to a deep-space death by the malfunctioning machine, is calmly, coldly disconnecting the memory circuits that control its artificial “ brain. “Dave, my mind is going,” HAL says, forlornly. “I can feel it. I can feel it.”

I can feel it, too. Over the past few years I’ve had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory. My mind isn’t going—so far as I can tell—but it’s changing. I’m not thinking the way I used to think. I can feel it most strongly when I’m reading. Immersing myself in a book or a lengthy article used to be easy. My mind would get caught up in the narrative or the turns of the argument, and I’d spend hours strolling through long stretches of prose. That’s rarely the case anymore. Now my concentration often starts to drift after two or three pages. I get fidgety, lose the thread, begin looking for something else to do. I feel as if I’m always dragging my wayward brain back to the text. The deep reading that used to come naturally has become a struggle.

I think I know what’s going on. For more than a decade now, I’ve been spending a lot of time online, searching and surfing and sometimes adding to the great databases of the Internet. The Web has been a godsend to me as a writer. Research that once required days in the stacks or periodical rooms of libraries can now be done in minutes. A few Google searches, some quick clicks on hyperlinks, and I’ve got the telltale fact or pithy quote I was after. Even when I’m not working, I’m as likely as not to be foraging in the Web’s info-thickets: reading and writing e-mails, scanning headlines and blog posts, watching videos and listening to podcasts, or just tripping from link to link to link. (Unlike footnotes, to which they’re sometimes likened, hyperlinks don’t merely point to related works; they propel you toward them.)

For me, as for others, the Net is becoming a universal medium, the conduit for most of the information that flows through my eyes and ears and into my mind. The advantages of having immediate access to such an incredibly rich store of information are many, and they’ve been widely described and duly applauded. “The perfect recall of silicon memory,” Wired’s Clive Thompson has written, “can be an enormous boon to thinking.” But that boon comes at a price. As the media theorist Marshall McLuhan pointed out in the 1960s, media are not just passive channels of information. They supply the stuff of thought, but they also shape the process of thought. And what the Net seems to be doing is chipping away my capacity for concentration and contemplation. My mind now expects to take in information the way the Net distributes it: in a swiftly moving stream of particles. Once I was a scuba diver in the sea of words. Now I zip along the surface like a guy on a Jet Ski.

I’m not the only one. When I mention my troubles with reading to friends and acquaintances—literary types, most of them—many say they’re having similar experiences. The more they use the Web, the more they have to fight to stay focused on long pieces of writing. Some of the bloggers I follow have also begun mentioning the phenomenon. Scott Karp, who writes a blog about online media, recently confessed that he has stopped reading books altogether. “I was a lit major in college, and used to be [a] voracious book reader,” he wrote. “What happened?” He speculates on the answer: “What if I do all my reading on the web not so much because the way I read has changed, i.e. I’m just seeking convenience, but because the way I THINK has changed?”

Bruce Friedman, who blogs regularly about the use of computers in medicine, also has described how the Internet has altered his mental habits. “I now have almost totally lost the ability to read and absorb a longish article on the web or in print,” he wrote earlier this year. A pathologist who has long been on the faculty of the University of Michigan Medical School, Friedman elaborated on his comment in a telephone conversation with me. His thinking, he said, has taken on a “staccato” quality, reflecting the way he quickly scans short passages of text from many sources online. “I can’t read War and Peace anymore,” he admitted. “I’ve lost the ability to do that. Even a blog post of more than three or four paragraphs is too much to absorb. I skim it.”

Anecdotes alone don’t prove much. And we still await the long-term neurological and psychological experiments that will provide a definitive picture of how Internet use affects cognition. But a recently published study of online research habits, conducted by scholars from University College London, suggests that we may well be in the midst of a sea change in the way we read and think. As part of the five-year research program, the scholars examined computer logs documenting the behavior of visitors to two popular research sites, one operated by the British Library and one by a U.K. educational consortium, that provide access to journal articles, e-books, and other sources of written information. They found that people using the sites exhibited “a form of skimming activity,” hopping from one source to another and rarely returning to any source they’d already visited. They typically read no more than one or two pages of an article or book before they would “bounce” out to another site. Sometimes they’d save a long article, but there’s no evidence that they ever went back and actually read it. The authors of the study report:

It is clear that users are not reading online in the traditional sense; indeed there are signs that new forms of “reading” are emerging as users “power browse” horizontally through titles, contents pages and abstracts going for quick wins. It almost seems that they go online to avoid reading in the traditional sense.

Thanks to the ubiquity of text on the Internet, not to mention the popularity of text-messaging on cell phones, we may well be reading more today than we did in the 1970s or 1980s, when television was our medium of choice. But it’s a different kind of reading, and behind it lies a different kind of thinking—perhaps even a new sense of the self. “We are not only what we read,” says Maryanne Wolf, a developmental psychologist at Tufts University and the author of Proust and the Squid: The Story and Science of the Reading Brain. “We are how we read.” Wolf worries that the style of reading promoted by the Net, a style that puts “efficiency” and “immediacy” above all else, may be weakening our capacity for the kind of deep reading that emerged when an earlier technology, the printing press, made long and complex works of prose commonplace. When we read online, she says, we tend to become “mere decoders of information.” Our ability to interpret text, to make the rich mental connections that form when we read deeply and without distraction, remains largely disengaged.

Reading, explains Wolf, is not an instinctive skill for human beings. It’s not etched into our genes the way speech is. We have to teach our minds how to translate the symbolic characters we see into the language we understand. And the media or other technologies we use in learning and practicing the craft of reading play an important part in shaping the neural circuits inside our brains. Experiments demonstrate that readers of ideograms, such as the Chinese, develop a mental circuitry for reading that is very different from the circuitry found in those of us whose written language employs an alphabet. The variations extend across many regions of the brain, including those that govern such essential cognitive functions as memory and the interpretation of visual and auditory stimuli. We can expect as well that the circuits woven by our use of the Net will be different from those woven by our reading of books and other printed works.

Sometime in 1882, Friedrich Nietzsche bought a typewriter—a Malling-Hansen Writing Ball, to be precise. His vision was failing, and keeping his eyes focused on a page had become exhausting and painful, often bringing on crushing headaches. He had been forced to curtail his writing, and he feared that he would soon have to give it up. The typewriter rescued him, at least for a time. Once he had mastered touch-typing, he was able to write with his eyes closed, using only the tips of his fingers. Words could once again flow from his mind to the page.

But the machine had a subtler effect on his work. One of Nietzsche’s friends, a composer, noticed a change in the style of his writing. His already terse prose had become even tighter, more telegraphic. “Perhaps you will through this instrument even take to a new idiom,” the friend wrote in a letter, noting that, in his own work, his “‘thoughts’ in music and language often depend on the quality of pen and paper.”

“You are right,” Nietzsche replied, “our writing equipment takes part in the forming of our thoughts.” Under the sway of the machine, writes the German media scholar Friedrich A. Kittler, Nietzsche’s prose “changed from arguments to aphorisms, from thoughts to puns, from rhetoric to telegram style.”

The human brain is almost infinitely malleable. People used to think that our mental meshwork, the dense connections formed among the 100 billion or so neurons inside our skulls, was largely fixed by the time we reached adulthood. But brain researchers have discovered that that’s not the case. James Olds, a professor of neuroscience who directs the Krasnow Institute for Advanced Study at George Mason University, says that even the adult mind “is very plastic.” Nerve cells routinely break old connections and form new ones. “The brain,” according to Olds, “has the ability to reprogram itself on the fly, altering the way it functions.”

As we use what the sociologist Daniel Bell has called our “intellectual technologies”—the tools that extend our mental rather than our physical capacities—we inevitably begin to take on the qualities of those technologies. The mechanical clock, which came into common use in the 14th century, provides a compelling example. In Technics and Civilization, the historian and cultural critic Lewis Mumford described how the clock “disassociated time from human events and helped create the belief in an independent world of mathematically measurable sequences.” The “abstract framework of divided time” became “the point of reference for both action and thought.”

The clock’s methodical ticking helped bring into being the scientific mind and the scientific man. But it also took something away. As the late MIT computer scientist Joseph Weizenbaum observed in his 1976 book, Computer Power and Human Reason: From Judgment to Calculation, the conception of the world that emerged from the widespread use of timekeeping instruments “remains an impoverished version of the older one, for it rests on a rejection of those direct experiences that formed the basis for, and indeed constituted, the old reality.” In deciding when to eat, to work, to sleep, to rise, we stopped listening to our senses and started obeying the clock.

The process of adapting to new intellectual technologies is reflected in the changing metaphors we use to explain ourselves to ourselves. When the mechanical clock arrived, people began thinking of their brains as operating “like clockwork.” Today, in the age of software, we have come to think of them as operating “like computers.” But the changes, neuroscience tells us, go much deeper than metaphor. Thanks to our brain’s plasticity, the adaptation occurs also at a biological level.

The Internet promises to have particularly far-reaching effects on cognition. In a paper published in 1936, the British mathematician Alan Turing proved that a digital computer, which at the time existed only as a theoretical machine, could be programmed to perform the function of any other information-processing device. And that’s what we’re seeing today. The Internet, an immeasurably powerful computing system, is subsuming most of our other intellectual technologies. It’s becoming our map and our clock, our printing press and our typewriter, our calculator and our telephone, and our radio and TV.

When the Net absorbs a medium, that medium is re-created in the Net’s image. It injects the medium’s content with hyperlinks, blinking ads, and other digital gewgaws, and it surrounds the content with the content of all the other media it has absorbed. A new e-mail message, for instance, may announce its arrival as we’re glancing over the latest headlines at a newspaper’s site. The result is to scatter our attention and diffuse our concentration.

The Net’s influence doesn’t end at the edges of a computer screen, either. As people’s minds become attuned to the crazy quilt of Internet media, traditional media have to adapt to the audience’s new expectations. Television programs add text crawls and pop-up ads, and magazines and newspapers shorten their articles, introduce capsule summaries, and crowd their pages with easy-to-browse info-snippets. When, in March of this year, The New York Times decided to devote the second and third pages of every edition to article abstracts, its design director, Tom Bodkin, explained that the “shortcuts” would give harried readers a quick “taste” of the day’s news, sparing them the “less efficient” method of actually turning the pages and reading the articles. Old media have little choice but to play by the new-media rules.

Never has a communications system played so many roles in our lives—or exerted such broad influence over our thoughts—as the Internet does today. Yet, for all that’s been written about the Net, there’s been little consideration of how, exactly, it’s reprogramming us. The Net’s intellectual ethic remains obscure.

About the same time that Nietzsche started using his typewriter, an earnest young man named Frederick Winslow Taylor carried a stopwatch into the Midvale Steel plant in Philadelphia and began a historic series of experiments aimed at improving the efficiency of the plant’s machinists. With the approval of Midvale’s owners, he recruited a group of factory hands, set them to work on various metalworking machines, and recorded and timed their every movement as well as the operations of the machines. By breaking down every job into a sequence of small, discrete steps and then testing different ways of performing each one, Taylor created a set of precise instructions—an “algorithm,” we might say today—for how each worker should work. Midvale’s employees grumbled about the strict new regime, claiming that it turned them into little more than automatons, but the factory’s productivity soared.

More than a hundred years after the invention of the steam engine, the Industrial Revolution had at last found its philosophy and its philosopher. Taylor’s tight industrial choreography—his “system,” as he liked to call it—was embraced by manufacturers throughout the country and, in time, around the world. Seeking maximum speed, maximum efficiency, and maximum output, factory owners used time-and-motion studies to organize their work and configure the jobs of their workers. The goal, as Taylor defined it in his celebrated 1911 treatise, The Principles of Scientific Management, was to identify and adopt, for every job, the “one best method” of work and thereby to effect “the gradual substitution of science for rule of thumb throughout the mechanic arts.” Once his system was applied to all acts of manual labor, Taylor assured his followers, it would bring about a restructuring not only of industry but of society, creating a utopia of perfect efficiency. “In the past the man has been first,” he declared; “in the future the system must be first.”

Taylor’s system is still very much with us; it remains the ethic of industrial manufacturing. And now, thanks to the growing power that computer engineers and software coders wield over our intellectual lives, Taylor’s ethic is beginning to govern the realm of the mind as well. The Internet is a machine designed for the efficient and automated collection, transmission, and manipulation of information, and its legions of programmers are intent on finding the “one best method”—the perfect algorithm—to carry out every mental movement of what we’ve come to describe as “knowledge work.”

Google’s headquarters, in Mountain View, California—the Googleplex—is the Internet’s high church, and the religion practiced inside its walls is Taylorism. Google, says its chief executive, Eric Schmidt, is “a company that’s founded around the science of measurement,” and it is striving to “systematize everything” it does. Drawing on the terabytes of behavioral data it collects through its search engine and other sites, it carries out thousands of experiments a day, according to the Harvard Business Review, and it uses the results to refine the algorithms that increasingly control how people find information and extract meaning from it. What Taylor did for the work of the hand, Google is doing for the work of the mind.

The company has declared that its mission is “to organize the world’s information and make it universally accessible and useful.” It seeks to develop “the perfect search engine,” which it defines as something that “understands exactly what you mean and gives you back exactly what you want.” In Google’s view, information is a kind of commodity, a utilitarian resource that can be mined and processed with industrial efficiency. The more pieces of information we can “access” and the faster we can extract their gist, the more productive we become as thinkers.

Where does it end? Sergey Brin and Larry Page, the gifted young men who founded Google while pursuing doctoral degrees in computer science at Stanford, speak frequently of their desire to turn their search engine into an artificial intelligence, a HAL-like machine that might be connected directly to our brains. “The ultimate search engine is something as smart as people—or smarter,” Page said in a speech a few years back. “For us, working on search is a way to work on artificial intelligence.” In a 2004 interview with Newsweek, Brin said, “Certainly if you had all the world’s information directly attached to your brain, or an artificial brain that was smarter than your brain, you’d be better off.” Last year, Page told a convention of scientists that Google is “really trying to build artificial intelligence and to do it on a large scale.”

Such an ambition is a natural one, even an admirable one, for a pair of math whizzes with vast quantities of cash at their disposal and a small army of computer scientists in their employ. A fundamentally scientific enterprise, Google is motivated by a desire to use technology, in Eric Schmidt’s words, “to solve problems that have never been solved before,” and artificial intelligence is the hardest problem out there. Why wouldn’t Brin and Page want to be the ones to crack it?

Still, their easy assumption that we’d all “be better off” if our brains were supplemented, or even replaced, by an artificial intelligence is unsettling. It suggests a belief that intelligence is the output of a mechanical process, a series of discrete steps that can be isolated, measured, and optimized. In Google’s world, the world we enter when we go online, there’s little place for the fuzziness of contemplation. Ambiguity is not an opening for insight but a bug to be fixed. The human brain is just an outdated computer that needs a faster processor and a bigger hard drive.

The idea that our minds should operate as high-speed data-processing machines is not only built into the workings of the Internet, it is the network’s reigning business model as well. The faster we surf across the Web—the more links we click and pages we view—the more opportunities Google and other companies gain to collect information about us and to feed us advertisements. Most of the proprietors of the commercial Internet have a financial stake in collecting the crumbs of data we leave behind as we flit from link to link—the more crumbs, the better. The last thing these companies want is to encourage leisurely reading or slow, concentrated thought. It’s in their economic interest to drive us to distraction.

Maybe I’m just a worrywart. Just as there’s a tendency to glorify technological progress, there’s a countertendency to expect the worst of every new tool or machine. In Plato’s Phaedrus, Socrates bemoaned the development of writing. He feared that, as people came to rely on the written word as a substitute for the knowledge they used to carry inside their heads, they would, in the words of one of the dialogue’s characters, “cease to exercise their memory and become forgetful.” And because they would be able to “receive a quantity of information without proper instruction,” they would “be thought very knowledgeable when they are for the most part quite ignorant.” They would be “filled with the conceit of wisdom instead of real wisdom.” Socrates wasn’t wrong—the new technology did often have the effects he feared—but he was shortsighted. He couldn’t foresee the many ways that writing and reading would serve to spread information, spur fresh ideas, and expand human knowledge (if not wisdom).

The arrival of Gutenberg’s printing press, in the 15th century, set off another round of teeth gnashing. The Italian humanist Hieronimo Squarciafico worried that the easy availability of books would lead to intellectual laziness, making men “less studious” and weakening their minds. Others argued that cheaply printed books and broadsheets would undermine religious authority, demean the work of scholars and scribes, and spread sedition and debauchery. As New York University professor Clay Shirky notes, “Most of the arguments made against the printing press were correct, even prescient.” But, again, the doomsayers were unable to imagine the myriad blessings that the printed word would deliver.

So, yes, you should be skeptical of my skepticism. Perhaps those who dismiss critics of the Internet as Luddites or nostalgists will be proved correct, and from our hyperactive, data-stoked minds will spring a golden age of intellectual discovery and universal wisdom. Then again, the Net isn’t the alphabet, and although it may replace the printing press, it produces something altogether different. The kind of deep reading that a sequence of printed pages promotes is valuable not just for the knowledge we acquire from the author’s words but for the intellectual vibrations those words set off within our own minds. In the quiet spaces opened up by the sustained, undistracted reading of a book, or by any other act of contemplation, for that matter, we make our own associations, draw our own inferences and analogies, foster our own ideas. Deep reading, as Maryanne Wolf argues, is indistinguishable from deep thinking.

If we lose those quiet spaces, or fill them up with “content,” we will sacrifice something important not only in our selves but in our culture. In a recent essay, the playwright Richard Foreman eloquently described what’s at stake:

I come from a tradition of Western culture, in which the ideal (my ideal) was the complex, dense and “cathedral-like” structure of the highly educated and articulate personality—a man or woman who carried inside themselves a personally constructed and unique version of the entire heritage of the West. [But now] I see within us all (myself included) the replacement of complex inner density with a new kind of self—evolving under the pressure of information overload and the technology of the “instantly available.”

As we are drained of our “inner repertory of dense cultural inheritance,” Foreman concluded, we risk turning into “‘pancake people’—spread wide and thin as we connect with that vast network of information accessed by the mere touch of a button.”

I’m haunted by that scene in 2001. What makes it so poignant, and so weird, is the computer’s emotional response to the disassembly of its mind: its despair as one circuit after another goes dark, its childlike pleading with the astronaut—“I can feel it. I can feel it. I’m afraid”—and its final reversion to what can only be called a state of innocence. HAL’s outpouring of feeling contrasts with the emotionlessness that characterizes the human figures in the film, who go about their business with an almost robotic efficiency. Their thoughts and actions feel scripted, as if they’re following the steps of an algorithm. In the world of 2001, people have become so machinelike that the most human character turns out to be a machine. That’s the essence of Kubrick’s dark prophecy: as we come to rely on computers to mediate our understanding of the world, it is our own intelligence that flattens into artificial intelligence.

A New Battle Is Beginning in Branding for the Web By STEVE LOHR

Hi gang, Dr. Media back from vacation and ready to continue bringing you interesting info with brilliant commentary, of course.
Below is an article that addresses the ongoing advertising/branding challenge. What is most interesting here is how the conflict between old-tech methods and new-tech demands is being met. The cool item here, for those of you who are interested, is how a company like Microsoft can attempt to take a term of common currency, "mesh," and try to own it by attaching a branding term to it. The implications of this are profound. It means that large corporations can try to own language. The DNA of thought is language. The expression of the unique individual is in his or her language: as art, as poem, as commentary, as gossip.
Here come the thought police?? What do you think?

September 1, 2008
A New Battle Is Beginning in Branding for the Web

To marketers large and small, the Web is a wide open frontier, an unlimited billboard with boundless branding opportunities.

For the empirical proof, look at the filings with the government for new trademarks that, put simply, are brand names.

Applications surged in the dot-com years, peaking in 2000 and then falling sharply for two years, before rising to a record last year of more than 394,000.

Recently, a new front has opened in the Internet branding wars.

It lies beyond putting trademarks on new businesses, Web site addresses and online logos. Now, companies want to slap a brand on still vaguely defined products and services in the uncharted ephemera of cyberspace — the computing cloud, as it has come to be known.

Cloud computing usually refers to Internet services or software that the user accesses through a Web browser on a personal computer, cellphone or other device. The digital service is delivered remotely, from somewhere off in the computing cloud, in the fashion of Google’s Internet search service.

Dell has tried to trademark the term cloud computing itself. But in August, the United States Patent and Trademark Office sent a strong signal that cloud computing cannot be trademarked.

It issued an initial refusal to Dell, which filed its application 18 months ago, when the term was less widely used in industry conversations and marketing.

Dell had passed early steps toward approval, but the office turned it down, after protests from industry experts that cloud computing had become a broadly descriptive term, and not one linked to a single company. Dell can appeal, but that seems unlikely.

In recent years, patents — not trademarks — have been the main focus of intellectual property experts and the courts, especially around the issue of whether patents on software and business methods have become counterproductive, inhibiting innovation.

But some legal experts say trademark issues may take on a higher profile, fueled by the escalating value of brands in general and trademark holders increasingly trying to assert their rights, especially on the Internet.

“Trademark is the sleeping giant of intellectual property,” said Paul Goldstein, a professor at the Stanford law school.

Microsoft, for example, is developing a technology that is intended to synchronize the data on all of a person’s computing devices, even synchronizing it with family members and work colleagues as well, automatically reaching across the cloud.

When Microsoft announced the concept this year, it said the technology would be called Live Mesh. Just what it is and how it may work remains unclear, but Microsoft filed for a trademark on Live Mesh in June, an application that awaits judgment from the Patent and Trademark Office.

Mesh and mesh networking are widely used terms for technology that connects devices.

“This is the challenge for our examiners,” said Lynne G. Beresford, commissioner for trademarks in the Patent and Trademark Office. “With emerging marks in a field that is changing quickly, you have to make a determination about what the common understanding is.”

That challenge, legal experts say, is one of several for trademark policy and practice in the Internet age. Instant communication, aggressive business tactics and an unsettled legal environment, they say, mean that trademark disputes on the Internet will increase in number and intensity.

The first round of trademark conflict on the Internet, focused on cybersquatting, has subsided. Cybersquatters were early profiteers who bought up the Web addresses, or domain names, of well-known trademarked brands, and then tried to charge the companies huge amounts of money to buy them.

In 1999, Congress passed a bill against cybersquatting that allowed companies to sue anyone who, with “a bad faith intent to profit,” buys the domain name of a well-known brand. The same year, the Internet Corporation for Assigned Names and Numbers, a nonprofit oversight agency, established a system for resolving domain name disputes.

The new areas of conflict, according to legal experts, include trademark owners trying to assert their rights to stifle online criticism of their products, and to stop trademarked brands from being purchased as keywords in Internet search advertising.

Early court rulings in keyword cases point to the uncertain legal setting and the international differences in trademark law. In the United States, lawyers say, the initial rulings have tended to allow companies to buy the trademarked brand names of rivals as keywords in search. Ford, for example, can bid on and buy “Toyota,” so that a person typing Toyota as a search term would see a link to Ford’s Web site in the paid-for links on the right hand side of Google’s Web page.

In the United States, that practice has not been interpreted as causing any fundamental consumer confusion. Google also argues that because any bidder can make an offer for any word — Google supplies no list — it is not a user of trademarks. “We are not using keywords, we are not selling keywords, we are selling ad space,” said Terri Chen, Google’s senior trademark counsel.

But in a French court ruling in 2005, Google was enjoined from allowing others to buy as a keyword the trademark brand of a French luxury goods maker, Louis Vuitton. For countries other than the United States, Canada, the United Kingdom and Ireland, Google has a trademark complaint system, so holders can generally prevent their brands from being purchased as keywords by others.

The speed of Internet communication and heightened competition to claim and establish brands have drastically changed trademark tactics over the years. Compare the positioning and pre-emptive moves around cloud computing with the gradual pace of building one of the most valued brands in the world, Microsoft’s Windows.

The use of personal computer windows and graphical user windowing systems were around long before Microsoft announced its plans for a Windows operating system in 1983. The first version was introduced in 1985, and Microsoft did not file for a trademark until 1990. Its application was initially rejected as “merely descriptive.”

But, as so often, Microsoft persevered. It kept investing in advertising, branding and product development. It presented the Patent and Trademark Office with surveys showing people had come to associate the term Windows with Microsoft, and in 1995 the trademark examiners finally agreed.

With its cloud computing project Live Mesh, Microsoft is taking a far faster, more focused approach. It is employing Live, which it uses in other Internet offerings, like Windows Live and Xbox Live, as half of a two-word trademark — or composite mark, in legal terms. “Mesh networking is the generic category, but Live Mesh is Microsoft’s implementation and acts as a source identifier,” said Russell Pangborn, Microsoft’s director for trademarks.

One thing that has been undeniably transformed by computing and the Internet is the trademark office itself. Ms. Beresford, a professed “trademark nerd,” recalled that when she joined the office in 1979, searches for the same or “confusingly similar” trademarks began in the “search room.” The applications and registration documents were kept in wooden cabinets, filed alphabetically.

Trademarked images were kept in separate drawers and grouped into visual categories, she recalled, like “grotesque humans” (the Pillsbury doughboy) and “human body parts” (the Yellow Pages’ walking fingers).

Examining attorneys, Ms. Beresford noted, were issued rubber covers for their index fingers for going through files faster and with fewer paper cuts. The technology tools have been upgraded considerably since then. The work is now done mainly on computers, searching the Web and specialized trademark databases. Eighty-five percent of the office’s 390 examining attorneys work primarily from home.

The search room, Ms. Beresford observed, has “gone the way of the buggy whip.”