Herring, Susan C. (2002). Cyber Violence: Recognizing and Resisting Abuse in Online Environments. Asian Women 14 (Summer): 187-212.

Cyber Violence: Recognizing and Resisting Abuse in Online Environments

Susan C. Herring
Indiana University, Bloomington, USA
herring@indiana.edu
Introduction

Hundreds of millions of people currently use the Internet to enhance their lives and
those of others. Yet a growing segment of the online population abuses the
Internet for antisocial purposes, to stalk, harass and prey on other users,
often with distressing effects. Internet-mediated aggression is a global
phenomenon, and, disturbingly, it is on the rise,1 lending prima facie credence to the dystopian view that
computer-mediated communication exacerbates bad behavior.2 To make matters worse, the tide of
online violence is rising at a time when the Internet has moved from being a
luxury to a necessity of daily life for educated people throughout the
industrialized world. 'Cyber violence' thus stands to have negative impacts on
a very large scale.
This paper is concerned with abusive online behaviors, which I group under the label
'cyber violence'. As a first step towards understanding this phenomenon, I
define and identify major cyber violence types, illustrating each with case
studies gathered from the Internet, and summarizing recommendations for
responding to each. The advantages of defining and classifying cyber violence
include 1) making the behaviors easier to recognize and name when they occur,
2) allowing strategies of resistance and redressive action to be articulated
that are tailored to the demands of each, 3) distinguishing cyber violence from
other, less serious forms of annoying online behavior, and 4) revealing
underlying relationships between cyber violence and other phenomena, such as
pornography, that might otherwise go undetected, but that help us to situate
cyber violence within a broader perspective.
Defining Cyber Violence

I define cyber violence as online behavior that constitutes or leads to assault against
the well-being (physical, psychological, emotional) of an individual or group.
What distinguishes cyber violence from traditional off-line forms of violence
is that in the former case, some significant portion of the behavior takes
place online, although it might then carry over into offline contexts. Cyber
violence thus may, but need not, have a physical component, and much of the
harm caused by cyber violence—as indeed by offline violence—is
psychological and/or emotional (which is not to say less real or destructive).
Finally, cyber violence may be targeted at individuals or groups, the latter
being more characteristic targets of cyber violence than of offline, physical
violence, due to the ease with which a single perpetrator can gather
information about and make contact with large numbers of people on the
Internet. This is another aspect of online violence that can cause it to have
widespread effects.
Violence and Gender

Violence is related to gender. Research has shown that men are disproportionately the
perpetrators, and women disproportionately the victims, of violence in the
physical world (Cyber-stalking.net, 2002). Cyber violence shows a similar
pattern. Women were the victims in 84% of online harassment cases, and men the
perpetrators in 64% of cases3
reported to the organization Working to Halt Online Abuse in 2000-2001 (WHO@,
2002). For many female Internet users, online harassment is a fact of life. One
out of five adult female Internet users reported having been harassed online as
of 1994 (Brail, 1994), and as many as one out of three female children reported
having been harassed online in 2001 alone (Thomas, 2002). Among children, girls
are targeted at almost twice the rate of boys (Finkelhor et al., 2000).
Males are also victims of violence (particularly of violence
perpetrated by other males), and females also commit acts of violence, both
online and offline. However, to ignore the larger gender pattern associated
with violence is to miss a basic insight into the social reality of violence as
a means of control and intimidation. That is, it tends to be perpetrated
downward along a power hierarchy, thereby reinforcing societal gender
asymmetries.
Classifying Cyber Violence

One obstacle to taking effective action against cyber violence is that it tends to
be viewed as less serious, less "real" than violence in the off-line
world. This is due in part to the relative novelty of the phenomenon (and of
cyberspace as a whole); cyber violence does not conform to our familiar
prototype of violence in a number of respects. As shown in Figure 1, violence
can be situated along a continuum from more to less prototypical.
More prototypical violence <----------------------------------------> Less prototypical violence

Off-line                          | Online
Physical                          | Virtual
Action                            | Symbols
Intentional harm                  | Harm not intended
Targeted against an individual    | Untargeted, diffuse
Perpetrator is socially marginal  | Perpetrator is an average person

Figure 1. Dimensions of violence
A prototype is a mental representation of a complex concept in terms of its default or
"most typical" realization. When we think of violence, we typically
think of off-line behavior before we think of the Internet; we think of
physical aggression before we think of deception or mental cruelty; and we
think of action before we think of symbolic behavior via words or images. As
the old saying goes, "Sticks and stones may break my bones, but names can
never hurt me." Our prototype probably also involves an intentional
perpetrator—non-intentional harm is usually characterized by other terms,
such as 'accident'—with a specific target or targets, although untargeted
violence (known by terms such as 'rampage' and 'mayhem' when it occurs offline)
is also possible. Last but not least, we expect perpetrators of violence to be
socially marginal types—possibly with a history of violent or criminal
behavior—rather than average, well-adjusted individuals (cf. 'white
collar crime', which often goes undetected because of this assumption).4 Cyber violence is less prototypical than
physical violence in where and how it takes place, in allowing perpetrators to
deny their intent to harm more easily (see below), and in enabling
"normal" people to perpetrate widely-targeted harm, without requiring
that the perpetrator be in an extreme emotional state (or risk his or her life)
to carry it out.
Because cyber violence differs from our prototypical associations of violence, it may
be difficult at first to recognize it for what it is, and accordingly, harder
to resist and punish it. Thus a necessary first step in fighting cyber violence
is to identify and name its manifestations.
Types of Cyber Violence

Four basic types of cyber violence are distinguished here:
1. Online contact leading to off-line abuse
2. Cyber stalking
3. Online harassment
4. Degrading representations
The numbering of the types is intended to suggest their distance from
"prototypical" violence, with (1) being closest, and (4) most
distant, from the real-world, physical prototype. Each type is defined and
discussed below.
1. Online contact leading to off-line abuse

The first type of cyber violence involves online misrepresentation (usually someone
represents themselves as being nicer or more socially desirable than they
actually are) leading to abusive off-line contact, including, but not limited
to, financial fraud, theft, unwanted sexual contact, and/or beating. The
misrepresentation aspect invokes issues of deception and betrayal of trust;
accordingly, this phenomenon is sometimes discussed in terms of those issues
(and in relation to issues of 'cyber trust' more generally).
An example is the Katie Tarbox case, which received considerable publicity in the
United States several years ago:
Example 1:

Katherine Tarbox was thirteen when she met twenty-three-year-old "Mark" in an online chat room and became close to him through a series of evolving email messages. When she finally met him in person, however, he turned out to be a 41-year-old named Frank Kufrovich with a history of pedophilia. He molested the girl in a hotel room. (Tarbox, 2000)
This case has several features that are characteristic of prototypical forms of violence:
in addition to deception, it involves physical abuse (sexual molestation), a
(male) perpetrator with a history of abusive behavior, and a highly vulnerable
victim, a 13-year old girl. It is the sort of story that gives parents
nightmares, and that gets widespread play in the popular media, in part because
its familiar components trigger archetypal fears of predation.5
The second case is also an instance of online
misrepresentation leading to offline abuse, although the resultant harm is less
serious:
Example 2:
“I once met a lady-friend on the Internet. We went out a few times and I fell in
love with her. Unfortunately I did not realize that she was lying to me the
whole time. The more money I lent to her the more she lied. Now she is in jail
and I have less money than I had before. I wish that I had checked out her past
before I fell in love with her and subsequently lent her money.” (http://www.whoisshe.com/)
The victim in this case is a male, and the perpetrator a female, reversing the
typical gendered power dynamic associated with violence. Moreover, the offline
harm is financial (fraud) and emotional (deception), rather than physical.
Nonetheless, the case fits the definition of cyber violence in that it involves
online behavior that leads to assault against the well-being of an individual.
It also features a perpetrator with criminal tendencies.
In contrast, most people would probably not consider the third case, an early
classic of Internet deception, an example of cyber violence:
Example 3:
A male psychiatrist in his fifties carried out a long-term
impersonation of Joan, a young, disabled woman, on a computer network.
Communicating only via computer or mail, the impostor developed intimate
friendships with several women. Upon learning the truth, however, the women
felt betrayed; some mourned the loss of their relationship with Joan. (Van Gelder, 1985)
Although the women who had become close with 'Joan' (including, in some cases, having
cyber sex6 with 'her'—'Joan' was bisexual)
were intentionally deceived, the offline harm they experienced was restricted
to emotional reactions to events that had taken place online. Moreover,
although the perpetrator was an older male, he was socially respectable (a
psychiatrist), and his alleged motive was not to harm, but rather to explore
the potential of the new medium to experience for himself what it was like to
be a woman. For many Internet users and scholars, the psychiatrist's behavior
is to be expected in a text-only environment that invites play with gender
identity (e.g., Bruckman, 1993; Danet, 1998; McRae, 1996). The implicit moral
position underlying this view is that behavior that cannot effectively be prevented must therefore be tolerated.
Such ambiguous cases notwithstanding, there are clear limits on when deception
should be considered actionable abuse. It seems self-evident that disappointing
online romances are not inherently violent. Although disillusionment can feel
like deception, and rejection can feel like abuse, both are within the realm of
normal risks associated with romantic intimacy, online and offline. This does
not preclude the possibility that a romance turned sour can develop violent
aspects, of course.
Recommended responses
How can Internet users avoid online
misrepresentation leading to offline abuse? How should victims of such behavior
respond? Most of the advice currently available places the onus on users to be
wary: they should not trust that people they meet on the Internet are who and
what they say they are; they should take precautions if they meet Internet
acquaintances face-to-face, and report them to the authorities if they are the
victims of abuse.
Because violation of trust is often at issue in this form of
abuse, some sources recommend doing (or hiring someone to do) a background
check to verify the identity, history, and reputation of online acquaintances
before getting involved with them emotionally, sexually, financially, or
otherwise (http://www.whoisshe.com/).
Some discussion forums operationalize this principle in the form of "trust metrics", according to which people new to the forum cannot join in unless some number of old and trusted members are willing to vouch for
them. Some evidence suggests that groups that employ trust metrics experience
less disruptive and abusive behaviors as a result (Levien, 2000).
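The vouching principle behind such trust metrics can be sketched in a few lines. This is a hypothetical illustration only, not the graph-based metric Levien actually describes: the function name, threshold, and member names are all invented.

```python
# Minimal sketch of a vouch-based admission check (hypothetical;
# real trust metrics such as Levien's are graph-based and more robust).

REQUIRED_VOUCHES = 2  # assumed threshold; a forum would tune this

def may_join(candidate, vouches, trusted_members):
    """A newcomer may join only if enough established, trusted
    members are willing to vouch for him or her; vouches from
    untrusted accounts are ignored."""
    vouchers = vouches.get(candidate, set())
    return len(vouchers & trusted_members) >= REQUIRED_VOUCHES

# Example: alice and bob are trusted and vouch for "newbie";
# mallory's vouch does not count because mallory is not trusted.
trusted = {"alice", "bob", "carol"}
vouches = {"newbie": {"alice", "bob", "mallory"}}
print(may_join("newbie", vouches, trusted))  # → True
```

The design intuition is that abusers cannot simply create new accounts to re-enter the forum, since each new identity must again be vouched for by members with established reputations.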
People who choose to meet offline with someone they have
previously only encountered online are advised to take precautions to protect their
personal (physical) safety. Suggestions include to meet for the first time in a
public, well-lit place, in daylight, in the presence of a friend, or if alone,
to inform a close friend of the meeting place and check in with him or her by
phone while there (Egan, 2000).
If, despite these precautions, abuse occurs, the abusers
should be reported. This is especially important if there is evidence that they
are repeat abusers; failure to report them makes it more likely that they will
continue their pattern of abuse with others. Online anti-cyber violence
organizations take in reports and sometimes intervene to stop abusers,
including reporting events to law enforcement authorities in cases where laws
have been broken (WHO@, 2002). Katie Tarbox reported her abuser to the police;
he was prosecuted and sentenced to 18 months in jail for pedophilia (Tarbox,
2000). In cases where the abusive behaviors do not meet the criteria for legal
intervention, as in the case of 'Joan' in example 3 and other "cyber lotharios",
Bell and de la Rue (n.d.) recommend that victims make information about the
abusers and their behavior publicly available, as a means of warning others
away from them.
2. Cyberstalking
The second type of cyber violence is cyberstalking, defined as online monitoring or
tracking of someone’s actions with illegitimate intent. Because it
involves using the Internet to gather personal information about its target,
cyberstalking is a violation of privacy, and is sometimes discussed in the
context of Internet surveillance. To the extent that the target is aware that
s/he is being stalked, cyberstalking functions as a form of intimidation:
"I know who you are; I know where you (and your children) live, and
… (implied: I will use that information to harm you)". An implicit
threat of harm can be sufficient to intimidate; in other cases, cyberstalking
leads to explicit threats of physical violence (including death threats) and/or
actual off-line contact.
Example 4 is a case involving online stalking accompanied by explicit threats. The case
in example 5 combines online with offline stalking.
Example 4:
Duwayne I. Comfort, a University of San Diego graduate, stalked female students on the
Internet. Comfort used a professor's credit card to buy information about
the women via the Internet. Comfort sent about 100 messages that included
death threats, graphic sexual descriptions and references to the women's daily
activities. (http://www.unc.edu/courses/law357c/cyberprojects/spring00/cyberstalking/)
Example 5:
Assistant Professor Pamela Gilbert from the University of Wisconsin was stalked by a university lecturer, Tim—a man with whom she had had a couple of dates. Tim posted
her picture on sex sites, told mutual colleagues that Pamela was involved in
Satanism, hounded her online, and got one of his students to follow her to
"gather research" for a supposed book he was writing about her life.
The student became suspicious when Tim started talking about a gun. (Gilbert, 1997)
In both of the above cases, the victims—strangers in the first case, and an
acquaintance (a former lover) in the second case—were contacted directly
by the stalkers. In both cases as well, the stalkers behaved in a threatening
manner, leading the victims to fear that their lives were in danger.
A less clear-cut example, albeit one that raises similar
issues, is the much-publicized Jake Baker case summarized in example 6.
Example 6:
Jake Baker sent email messages to an Arthur Gonda in Ontario, Canada from November
1994 to January 1995. In these messages, he discussed his interest in
violent sexual acts. One of these emails contained a graphic fictional
account of the rape, torture, and murder of a female classmate of his at the
University of Michigan. (http://www.unc.edu/courses/law357c/cyberprojects/spring00/cyberstalking/)
Jake Baker was tried in court and cleared of criminal wrongdoing, in part because
his stories were not addressed to their ostensible target (the young woman did
not know of their existence prior to being informed about them by the
university). Debate also centered around the amount of risk posed by Baker to
the young woman—how likely was he to act out his violent fantasies? The
lawyers for the defense argued that Baker's writings were creative acts of
imagination that bore no necessary relation to his future behavior. In short,
no intent to harm could be established.7 Jake Baker's violent and degrading depictions of women were
thus construed as victimless, a problematic position that is considered further
in reference to pornography in section 4.8
Many cyberstalking cases involve frustrated romantic and/or sexual interest.
Cyberstalkers (both male and female) often target former lovers, or individuals
about whom they entertain sexual fantasies. However, unwanted sexual
solicitations in and of themselves do not constitute cyberstalking,
particularly when they occur in isolation and do not persist after the
solicited individual has said "no". Sexual come-ons are generally
considered to be a fact of Internet life, especially in chat environments, and
especially for users with female-sounding login names (Bruckman, 1993; Herring,
2001). Although such behavior is not unproblematic, it falls outside the
definition of cyber violence presented here.
Recommended responses
How can Internet users avoid
cyberstalking? Recommendations on this topic tend to focus on restricting
access to one's self, and on reporting stalkers to the authorities as a means
of obtaining additional protection and/or to force the stalker to desist.
A key issue in cyberstalking is privacy. To avoid being
stalked in the first place, users are cautioned not to give out personal
information such as phone numbers or addresses to strangers on the Internet
(although it is increasingly easy for others to locate such information on the
basis of a name or email address alone). WHO@ also recommends
"ego-surfing"—searching under one's own name on the Web to find
out what information is publicly accessible, and requesting to have it removed
when it is too personally revealing. Targets of cyberstalking are sometimes
advised to “get off the Internet”—this may prevent further
abuse if the stalker does not know much about the target's real identity, but
it is not a very satisfactory solution, in that it restricts the target's
access to necessary resources. Many cyberstalking victims change their email
address and login identity, some change their home phone number, and others
move their residence to an unlisted address (Cyber-stalking.net, 2002).
As in other types of online abuse, victims are advised to
save all evidence of stalking, and if the stalker does not desist, report him
or her to the appropriate authorities. It may be possible to obtain a
restraining order against the stalker, and some stalkers (such as Duwayne
Comfort in example 4) have been arrested and convicted.
3. Online harassment

Online harassment is computer-mediated words, gestures, and/or actions that tend to
annoy, alarm and abuse another person (cf. Black’s Law Dictionary, 1990).
A crucial component of harassment is that the behavior is repeated—a
single instance of abuse, such as an insulting email message, does not
generally constitute harassment—and persistent, even after the harasser
has been told to desist. The nature of the harm caused by online harassment is
diverse, and can include disruption, insult/offense, and defamation of
character. In the first, the target's resources (such as time and energy) are
wasted; in the latter two, the target's self-image and reputation are attacked.
Example 7 is a case of online harassment that involves both types of harm.
Example 7:
Author Jayne Hitchcock exposed an Internet scam by a group of people calling
themselves the Woodside Literary Agency. In retaliation, the agency
launched a series of email bombs to her, her husband, and her
lawyer. Then, the harassers forged posts in her name to hundreds of
newsgroups. The posts indicated that Jayne was interested in having people
call or stop by her house to share their sexual fantasies with her. Her
home address and phone number were included. (WHO@, 2002)
Jayne Hitchcock's case involves elements of stalking (the perpetrators knew where she
lived, and she changed residences to escape them) in addition to harassing
behaviors (email bombing her email account [disruption] and sending forged
posts that defamed her character). This is an example of retaliatory
harassment—she was harassed because she did something online (in this
case, exposed a scam) that angered someone else.
In contrast, the harassment in examples 8 and 9 was unprovoked. Both incidents took
place in online chat environments.
Example 8:
A participant on LambdaMOO in the guise of an evil clown named Mr. Bungle used a
"voodoo doll"—a computer program that creates an effigy of
another user—to force legba and a nondescript female character named
Starsinger to perform violent sex acts on themselves in public. (Dibbell, 1993)
Example 9:
Two female friends chatting together on IRC were repeatedly sexually propositioned
and verbally abused by two male chat channel administrators. When the women
protested, the administrators kicked them off the channel. (Herring, 1999)
These two
cases are clear examples of cyber violence (and non-prototypical examples of
violence in the traditional sense), in that they involve non-physical (verbal)
abuse that takes place entirely online. Although specific individuals were
abused, the targets appear to have been selected opportunistically (women who
just happened to be in the environment at the time). The perpetrators, a young
male college student in New York City ("Mr. Bungle") and two young
male IRC administrators, were 'average' (or in the case of the IRC
administrators, more statusful than average) Internet users, and the former
claimed he was merely experimenting with the new medium, effectively
denying—like the psychiatrist who pretended to be 'Joan'—any intent
to harm.
Examples 8 and 9 involve harassment in that the targets were
repeatedly insulted and abused, even after they protested the offending behavior.
In addition, actions symbolic of physical violence (virtual rape; 'kicking off'
the chat channel) were performed against them. Although these actions did not
have the physical consequences that the analogous real-world actions would have
had, they were nonetheless distressing to the victims.9 It is notable that in all three
harassment episodes, the female victims were abused in sexual terms, although
their prior behavior did not involve any sexual component.10 However, harassment need not be sexual in
nature in order to fit the definition of 'online harassment'.
Some Internet users maintain that harassment that takes
place in chat rooms and discussion forums is not a problem that should be
legislated, but rather is "only words", a manifestation of free
expression. Users who do not like such behavior should "just press the
delete key", or avoid the online environments in which it occurs. Such
views are often advanced by people with libertarian politics, who feel that
"unruly behavior" online is a small price to pay for individual
freedom (Pfaffenberger, 1997). In contrast, the position taken in this paper is
that online harassment constitutes abuse, and has harmful effects both on
individual victims and on groups of users. As such, it should not simply be
tolerated.
A number of common annoying online behaviors are not considered harassment (and hence not cyber violence) according to the definition given here, although in many cases the issues they raise are closely related. Hate speech targets groups, but can also be defended as an expression of views rather than as simple abuse. Trolling indiscriminately targets naïve users, albeit usually not repeatedly. Flaming tends to be issue-specific rather than a means of generalized harassment. Spam is not personally targeted; moreover, it is ostensibly not intended to offend or intimidate, but rather to persuade users to take certain actions, such as visiting recommended Web sites. However, any of these behaviors directed towards specific individuals and repeated, as an annoyance, after the recipient has made it clear that the behaviors are unwanted, would be considered online harassment.
Recommended responses
Since online harassment is often
essentially unprovoked, short of choosing one's words carefully or avoiding
communicating with strangers on the Internet, there may be little one can do to
avoid it. However, targets of online harassment can choose to respond so as to
minimize the harm to themselves and others.
Spertus (1996) identifies two categories of response, which
she terms "technical" and "social". Technical means for
fighting online harassment include blocking unwanted messages, for example all
messages from a known harasser, using protocols such as email filters and 'kill
files' (on Usenet) and 'ignore' commands (on MOOs and IRC). The advantage of
filters is that they give users control over what they will see, while
preserving others' free speech. However, filters are less than ideal, in that
they are reactive rather than proactive (some harassing content must come
through before the filter can be set), and in that they do not prevent others
in a forum from seeing the harassing content; only the individual user is
excluded, which can actually make matters worse.
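The blocking protocols named above (email filters, Usenet 'kill files', 'ignore' commands) all reduce to the same basic idea, which can be sketched as follows. This is an illustrative sketch only; the message format and addresses are invented, and real mail filters also match on subject lines, headers, and content patterns.

```python
# Minimal sketch of a kill-file style filter (illustrative only).
# Messages from senders on the user's blocklist are hidden from
# that user; other participants still see them, which is the
# reactive limitation noted in the text.

def filter_messages(messages, blocked_senders):
    """Return only the messages whose sender is not blocked,
    leaving everyone else's speech untouched."""
    return [m for m in messages if m["from"] not in blocked_senders]

inbox = [
    {"from": "friend@example.org", "body": "Lunch tomorrow?"},
    {"from": "harasser@example.net", "body": "..."},
]
visible = filter_messages(inbox, blocked_senders={"harasser@example.net"})
print(len(visible))  # → 1 (the harasser's message is no longer shown)
```

Note that the blocklist can only name senders who have already made contact, which is why such filters are reactive rather than proactive: at least one harassing message must arrive before the filter can be set.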
Social means for fighting online harassment include sharing information about the
harasser publicly, in such a way as to damage the harasser's credibility and
reputation, making it less likely that he or she will be in a position to
harass others in the future (Spertus, 1996). This may also be achieved through
reporting the harassment to the perpetrator's system administrator or employer,
or in cases where laws are broken, to the police (but cf. Gilboa, 1996, who
could not get the police to take her complaints seriously).
The target's first goal, however, should be to get the
harassment to stop. Online anti-abuse organizations such as WHO@ recommend
responding once to the harasser with a polite request that the contact be
discontinued, and ignoring him or her thereafter. Such organizations will
sometimes intervene to stop harassment, including referring cases to law
enforcement authorities. Anti-defamation cases have been brought to trial and
judgments obtained against the perpetrators. However, with the exception of
"defamation lawsuits against message board
posters by companies wanting to silence their online critics" (Benner,
2002), legal action has thus far rarely (if at all) been taken against
harassers who operate entirely within online chat rooms and discussion forums.
4. Degrading representations

The last category of cyber violence is degrading representations, defined as online
representation of women in words or images that invites disrespect and/or
harmful behavior towards women in general. Issues raised by such
representations include degradation and objectification—in this case of
women, although other groups could conceivably be represented in degrading ways
as well. Such behavior is not a prototypical form of violence,11 in that the representations are not
created to harm a specific target; moreover, the harm that arises from them may
be indirect and diffuse. Such cases are often discussed in terms of freedom of
expression, offensiveness, and standards of decency. They are included as a
form of cyber violence in that they constitute and can lead to assault against
the well-being of women, both individually and as a group.
The first case of degrading representations involves textual representation in the form of a list.
Example 10:
An e-mail message was distributed across the Internet in November of 1995 by four male undergraduates at Cornell University. The message contained a list entitled, "Top 75 reasons why women (bitches) should not have freedom of speech." The misogynistic and often violent reasons included:

· Stupid says as stupid does (and is).
· When men whistle at them in the street, they should just shut up and obey anyway.
· If she can't speak, she can't cry rape. (http://joc.mit.edu/cornell/75.reasons.txt)
This list is degrading in that it represents women as stupid, subordinate to men (like
dogs), and deserving of (sexualized) violence. Although its authors claimed it
was a "joke", it taps in to, and arguably reinforces, sexist and
misogynistic cultural attitudes. Many women were offended and angered by the
message.
The second case involves both text and images. The "Babes on the Web"
case was discussed extensively on the Internet in the mid-1990s.
Example 11:
In 1995, an American named Rob Toups put up a Web site called "Babes on the
Web". The site consisted of
unauthorized links to photographs of professional and academic women on the
Web, who were rated by Toups on the basis of their sexual attractiveness. Some
women whose pictures had been linked to the site without their knowledge were
surprised to receive crude propositions from men who had seen the
pictures. (Spertus, 1996)
Toups' site was degrading in that it sexualized and objectified women, rating them in terms of their physical appearance, without their permission (and in some cases,
despite their requests that he remove the links to their photographs). Toups
compounded the offensiveness by adding text to his site to the effect that he
was "exercising his First Amendment rights", and if women didn't like
it, they could "go cry to NOW [the National Organization for Women]".
The site also included an intimidating photograph of Toups pointing a large
shotgun at the viewer. A number of women protested the site (Spertus, 1996).
In retrospect, "Babes on the Web" seems mild compared to the fare served
up on the World Wide Web today. In particular, hard-core pornographic
representations are more readily available now than in 1995. Pornography is
defined as "material that combines sex and/or
genital exposure with abuse or degradation in a manner that appears to endorse,
condone, or encourage such behavior" (Russell, 1993). The
overwhelming majority of pornographic representations (with the exception of
those intended for a male homosexual audience) depict women.
Example 12:
25% of Internet users aged 10-17 were exposed to unwanted pornographic images in the past year; 8% of the images involved violence, in addition to sex and nudity. For example, a fifteen-year-old boy searching for information on wolves for a class project came across a bestiality site showing a woman having sexual intercourse with a wolf. (Finkelhor et al., 2000)
This image is degrading in that it portrays women as sexually depraved (having sex with
an animal). Finkelhor et al. (2000) found that a higher percentage of young
Internet users reported being "very or extremely upset" at receiving
unwanted pornographic images than at any other category of abusive online
behavior. The children were also less likely to report such abuse to
authorities, presumably out of a sense of embarrassment or guilt.
In principle, Web pornography that is sought out and enjoyed is not cyber
violence, even (arguably) when it portrays violence against women. In the short
term, such behavior is victimless—unless the consumer of porn then goes
out and treats women badly as a result of having viewed pornographic images. In
practice, however, it is difficult to draw a clear line between unwanted
(potentially harmful) exposure to pornographic representations and the
availability of pornography on the Web in general. Pornographic materials,
including graphically-explicit hardcore images, have become ubiquitous to the
point that it is difficult to prevent exposure to them. An innocent typographic
error in a URL can lead to a porn site (some sites count on this), and searches
for many common terms (such as 'girl' or 'wolf') pull up porn sites in large
numbers (Finkelhor et al., 2000). Moreover, even non-violent pornography objectifies
women sexually, thereby contributing to wider societal attitudes that make
women more likely targets of violence (especially sexual violence) than men
(Birsch, 1996). In these respects, pornography can be seen as related to cyber
violence, even when it is consumed voluntarily.
Not
all sexually explicit materials online constitute
cyber violence. Non-degrading words or images that
arouse people sexually (known as 'erotica') are not harmful as long as they are
produced and consumed by consenting adults. Included in this category are logs
of cyber sex interactions, live video images broadcast via CUseeMe to multiple
participants (Kibby & Costello, 2001), and erotic images exchanged between
lovers. The problem remains of how to limit access to such materials by
children.
Recommended responses
It is currently fashionable to advocate
the use of technological filters to limit exposure to degrading
representations, especially to prevent access to porn sites by children.12 Such a position takes the view that the
offending materials should be allowed to remain available to those who wish to
access them. However, filters work imperfectly, sometimes blocking legitimate
materials while failing to block objectionable ones.
In cases where the materials are considered to have little
or no redeeming value, users, individually and in groups, can protest the
offending materials, complaining directly to the individual(s) responsible. In
the case of the "Babes on the Web" site, several women complained to
Toups, put up anti-Toups sites, and linked to one another's sites in a
spontaneous, grassroots form of protest. Under this pressure, Toups quietly
removed the site later the same year.
Finally, one can complain about offending materials to the
perpetrator's Internet Service Provider or the organization hosting the Web
site, in an attempt to get them to remove the materials, and perhaps to punish
the perpetrators. In the "75 reasons" case, complaints to Cornell
University administrators caused them to take action against the young men who
had produced and distributed the list. However, the increasing pervasiveness of
pornography on the Web may make it less likely that an individual site would be
removed today as the result of complaints than might have been the case several
years ago.
Further Issues
This
paper has attempted to define cyber violence and classify four of its major
types. The argument has been advanced
that cyber violence is more difficult to recognize and resist than
offline forms of violence, because it diverges from the violence prototype in
several important respects. Identifying these points of divergence is necessary
in order to understand what cyber violence is and is not, and ultimately, to
determine what an appropriate societal response to it should be.
Of
course, lack of familiarity with cyber violence is not the only obstacle to
recognizing and resisting it. The Internet itself fosters and abets abusive
behavior by rendering perpetrators more anonymous and less fearful of
retribution than they would be in physical space. At the same time,
computer-mediated abuse typically leaves a trace (an email message, a routing
path pointing back to an IP number, etc.), such that most perpetrators who are
reported are eventually identified (WHO@, 2002). Nonetheless, the perception of
anonymity appears to be a disinhibiting factor that leads otherwise normal
individuals to give expression to their aggressive impulses in situations where
they might not otherwise do so.
Ideologies of Internet communication play a role as well,
notably in defining what counts as socially (un)acceptable behavior online.
Libertarian views promoting individual freedom of expression can be used to
justify harassment (Herring, 1999), and probably contribute to people's
willingness to put up with pervasive behaviors such as flaming, spamming, hate
speech, and sexual come-ons (to say nothing of Web pornography, which is
actively defended by free speech advocates13). In contrast, discourses that construct cyber violence as
a problem often invoke ideologies of personal safety (in the case of stalking
and harassment; e.g., Magid, 2000) and community standards of decency (in the
case of degrading representations; e.g., Biegel, 1996). In an important sense,
cyber violence must be legitimized discursively and ideologically before it can
effectively be fought.
Finally,
societal norms and expectations related to gender and violence can be an
obstacle in the fight against cyber violence, just as they are in the fight
against physical violence. Violence, especially when it has a sexual component,
tends to be underreported due to feelings of guilt or embarrassment (Finkelhor
et al., 2000; Thomas, 2002). Females, in particular, are taught to believe that
they are somehow to blame if they are sexually aggressed; this social
conditioning makes them less likely to report acts of sexual aggression. A
deeper problem is that violence against females is so widespread, and
manifested in such diverse forms, that it is considered "normal" by
many females and males. Thus most teenage girls, if asked if they have ever
been sexually harassed, are likely to respond "no", but when asked
specific questions, are able to report numerous harassment incidents, which
they take to be simply "the way things are". Finkelhor et al. (2000)
observed a similar response pattern from the 10-17 year old children they
interviewed for their study of online sexual abuse. The naturalization of
violence—especially, of violence against females—must be challenged
before cyber violence can be identified and resisted.
A Comprehensive Response
The
contextual factors identified in the previous section contribute to the
pervasiveness of cyber violence, and inhibit attempts to resist it. While these
factors are not inherently immutable, changing them will require time and
concerted, collective effort. In the meantime, steps can be taken to reduce the
incidence of cyber violence and its harmful effects.
Finkelhor et al. (2000) propose a comprehensive response to
online abuse that involves active intervention at all stages of the abuse
process:
Although it comes first in the list, reducing the quantity
of offensive online behavior is the ultimate goal, and one that requires effort
on multiple, simultaneous levels—societal, cybersocietal, and
individual—to achieve. Mobilizing people to demand a "cleaner"
online environment requires identifying and justifying certain behaviors as
unacceptable. The present paper is one proposal towards meeting that goal.
Currently available recommendations focus mostly on the
second and third stages of the process, avoidance of/protection from abusive
behaviors, and reporting of such behaviors as cannot be avoided. The present
paper has suggested that in order to do this effectively, it is necessary to
distinguish among different cyber violence types, as each raises unique issues
and challenges—trust in the case of online misrepresentation, privacy in
the case of cyberstalking, damage to resources and reputation in the case of
online harassment, and degradation in the case of pornography—calling for
different avoidance and response tactics.
Last, as the incidence of cyber violence increases, online
organizations such as WHO@, SafetyEd International, WiredPatrol (formerly
CyberAngels) and CyberTipline are stepping up their efforts to assist targets
of online abuse. Working together with search engines such as Yahoo! and
Internet Service Providers such as America Online, these organizations provide
in situ intervention to stop abuse, referrals to legal counsel and law
enforcement agencies, information and advice, and statistics about online
abuse. They, too, must make judgment calls about what is and is not abuse, in
order to allocate their resources effectively.
Interestingly, WHO@ also excludes hate speech, flaming,
spamming and trolling from their definition of online abuse, as was proposed
here. This raises a larger question of how to deal with behaviors that fall
outside the definition of cyber violence proper, but that are nonetheless
widespread and problematic. Viewing related but "less serious"
behaviors together with cyber violence sheds light on the larger
forces—technological, ideological, and societal—that shape the
online environment as a social space in which "bad behavior" occurs.
At the same time, there is a need to set the bounds of what constitutes
actionable abuse. Defining cyber violence is a first step towards meeting this
goal.
Missing from the response strategy proposed by Finkelhor et
al. (2000) is mention of a need for public policy, including legislation on
online abuse, a point which other anti-online-abuse advocates raise forcefully
(WHO@, 2002). Ten years ago, almost no such policies or legislation existed,
which left perpetrators undeterred and made it difficult for victims of cyber
violence to seek redress. Classification of behaviors plays an important role in
the legal realm, e.g., in lawmaking and law enforcement.
The present paper is not intended to determine what the
specific legal consequences of any type of cyber violence should be, nor indeed
to suggest that all forms of cyber violence constitute punishable crimes. Some
of the examples discussed here have the status of criminal activity, others of
currently legal but socially unacceptable behavior, and still others of
behavior that is unacceptable to some, but tolerable (or enjoyable) to others.
Still others are ambiguous cases that do not fall clearly into any one
category. Ultimately, how a society responds to acts of cyber violence should
include a consideration of the extent and the seriousness of the harm produced,
and of community standards of (in)acceptability regarding classes of abusive
behavior. I leave this as a problem for further discussion.
Notes
1 For example, a participant at the UNESCO Chair Symposium on
'Women's Rights, Cyber Rights' held at Sookmyung Women's University in Seoul,
Korea on May 31, 2002 noted that a Korean Internet Service Provider which
recently implemented a system for users to report abusive behavior online is
receiving one million reports of such behavior per month (Shim, 2002). In the United States, WHO@ (2002)
reports that it receives 100 reports of online harassment per week, 95% of
which are legitimate cases.
2 See, e.g., Kiesler et al. (1984).
3 Women were the perpetrators in 29% of
the cases, and in 7% of cases, the gender of the perpetrator is unknown (WHO@,
2002).
4 Of course, all of these stereotypes about
violence can be (and regularly are) broken in the real world.
5 Similarly horror-inducing in this
archetypical sense is the case of a young woman, Amy Boyer, who was murdered in
1999 by a former classmate who had never spoken to her, but who followed her
every move through information obtained on the Internet (http://www.unc.edu/courses/law357c/cyberprojects/spring00/cyberstalking/cyberstalk/cases.html).
This is a case of cyberstalking in the classification system presented here.
6 Cyber sex is online sexual activity
involving the exchange of erotic words or images for purposes of mutual
arousal. At issue in the case of 'Joan' (and even more ambiguously in the cases
reported in McRae, 1996) is whether the deception that can arise when someone
discovers that their cyber sex partner is a different gender (or sexual
orientation, or race, or age, etc.) than they were led to believe can be
considered a form of abuse, and the individuals who misrepresented themselves
held morally accountable.
7 This outcome underscores the
interpretive challenges posed by definitions of violence based in whole or in
part on the notion of perpetrator 'intent'.
8 The Jake Baker case becomes additionally
problematic when compared with the cyberstalking case that resulted in the
murder of Amy Boyer, described in note 5. In both, college-age males posted
materials on the Internet describing their violent fantasies (including murder)
about a (former) female classmate. On the basis of the Internet evidence, Jake
Baker appeared no less likely to act than Amy Boyer's murderer.
9 See discussions of each case in Dibbell
(1993) and Herring (1999).
10 This is consistent with a larger
societal pattern, reflected in the vocabulary of the English language,
according to which females are demeaned through their sexuality (Schulz,
1990).
11 But cf. Catharine MacKinnon and Andrea
Dworkin's critique of pornography as violence against women (1988).
12 Cf. the recently-overturned Children's
Internet Protection Act (CIPA), which would have withheld US government funding
from libraries that did not use filtering software to restrict access to
pornographic materials on the Web (American Library Association, 2002).
13 See, e.g., Strossen (1995), and for a
counter position, Russell (1995).
References
American Library Association. 2002. The
Children's Internet Protection Act. http://www.ala.org/cipa/
Bell, Vicki and Denise de la Rue. n.d. Gender harassment on the Internet. http://www.gsu.edu/~lawppw/lawand.papers/harass.html
Biegel, Stuart. 1996.
Constitutional issues in cyberspace: Focus on 'community standards'. Los
Angeles Daily Journal, February 22, 1996. http://www.gseis.ucla.edu/iclp/feb96.html
Birsch, Douglas. 1996.
Sexually explicit materials and the Internet. CMC Magazine, January 1. http://www.december.com/cmc/mag/1996/jan/birsch.html
Black's Law Dictionary. 1990. 6th ed., p. 717. West Group.
Brail, Stephanie. 1994. Take back the
net! On the Issues. Winter, 40-42.
Bruckman, Amy S. 1993. Gender swapping on
the Internet. Proceedings of
INET '93. Reston, VA:
The Internet Society. Available
via anonymous ftp from media.mit.edu in pub/MediaMOO/papers.gender-swapping.
Cyber-stalking.net. 2002. Statistics. http://www.cyber-stalking.net/statistics.htm
Danet, Brenda. 1998. Text as mask: Gender
and identity on the Internet. In Steven Jones (ed.), Cybersociety 2.0, 129-158. Thousand Oaks, CA: Sage.
Dibbell, Julian. 1993. A rape in
cyberspace, or how an evil clown, a Haitian trickster spirit, two wizards, and
a cast of dozens turned a database into a society. Village Voice, Dec. 21: 36-42.
Egan, Jennifer. 2000. Out in cyberspace. The New York Times Magazine, December 10, 2000.
Finkelhor, David, Kimberly Mitchell, and Janis Wolak. 2000. Online victimization: A report on the nation's youth. http://www.missingkids.com/download/nc62.pdf
Gilbert, Pamela. 1997. On space, sex and
stalkers. Women and Performance
17. http://www.echonyc.com/~women/Issue17/art-gilbert.html
Gilboa, Netta. 1996. Elites, lamers,
narcs and whores: Exploring the computer underground. In L. Cherny and E.R.
Weise (eds.), Wired_women,
98-113. Seattle: Seal Press.
Herring, Susan C. 1999. The rhetorical
dynamics of gender harassment on-line. The Information Society 15(3): 151-167.
Herring, Susan C. 2001. Gender and power
in online communication. Center for Social Informatics Working Papers. http://www.slis.indiana.edu/csi/WP/WP01-05B.html
Kibby, Marjorie and Brigid Costello.
2001. Between the image and the act: Interactive sex entertainment on
the Internet. Sexualities:
Studies in Culture and Society
4(3): 353-369.
Kiesler, Sara, Jane Siegel and Timothy W.
McGuire. 1984. Social psychological aspects of computer-mediated communication.
American Psychologist
39: 1123-1134.
Levien, Ralph. 2000. Meta: Advogato's trust
metric. http://www.advogato.org/article/38.html
MacKinnon, Catharine and Andrea Dworkin.
1988. Pornography and Civil Rights: A New Day for
Women's Equality. Organizing Against
Pornography.
Magid, Lawrence. 2000. When
kids surf, think safety first. San Jose Mercury News, June 17, 2000.
McRae, Shannon. 1996. Coming apart at the
seams: Sex, text and the virtual body. In L. Cherny and E.R. Weise (eds.), Wired_women, 242-263. Seattle: Seal Press.
Pfaffenberger, Brian. 1997. "If I want it, it's OK": Usenet and
the (outer) limits of free speech. The Information Society 12: 365-386.
Russell, Diana E.H. 1995. Nadine Strossen: The pornography industry's wet dream. On
the Issues. Summer 1995. http://www.echonyc.com/~onissues/russell.htm
Schulz, Muriel R. 1990. The
semantic derogation of woman. In Deborah Cameron (ed.), The Feminist
Critique of Language, 134-147. New York: Routledge.
Shim, Young Hee. 2002. Remarks at UNESCO Chair Symposium on
Women's Rights, Cyber Rights. Seoul, South Korea, May 31, 2002.
Spertus, Ellen. 1996. Social and technical means for fighting on-line harassment.
http://www.mit.edu/people/ellens/Gender/glc
Strossen, Nadine. 1995. Defending Pornography: Free Speech, Sex and the Fight
for Women's Rights. NY: Scribner.
Tarbox, Katherine. 2000. Katie.com: My
Story. NY: E.P. Dutton.
Thomas, Karen. 2002. Girls know way
around Net, parents. USA Today,
February 13, 2002.
Van Gelder, Lindsey. 1985. The strange
case of the electronic lover. Ms. Magazine, October
1985: 94-124.
WHO@ (Working to Halt Online Abuse).
2002. http://www.haltabuse.org/
Appendix: Cyber Violence Resources on the Web