Showing posts with label psychology. Show all posts

Sunday, December 23, 2012

there are no facts, only interpretations

Statistics is the study of the collection, organization, analysis, and interpretation of data.[1][2] It deals with all aspects of this, including the planning of data collection in terms of the design of surveys and experiments.[1]
A statistician is someone who is particularly well versed in the ways of thinking necessary for the successful application of statistical analysis. Such people have often gained this experience through working in any of a wide number of fields. There is also a discipline called mathematical statistics that studies statistics mathematically.

The word statistics, when referring to the scientific discipline, is singular, as in "Statistics is an art."[3] This should not be confused with the word statistic, referring to a quantity (such as mean or median) calculated from a set of data,[4] whose plural is statistics ("this statistic seems wrong" or "these statistics are misleading").

There is a general perception that statistical knowledge is all too frequently intentionally misused, by finding ways to interpret only the data that are favorable to the presenter.[14] The famous saying "There are three kinds of lies: lies, damned lies, and statistics",[15] popularized in the USA by Mark Twain and incorrectly attributed by him to Disraeli (1804–1881), has come to represent the general mistrust [and misunderstanding] of statistical science. Harvard President Lawrence Lowell wrote in 1909 that statistics, "...like veal pies, are good if you know the person that made them, and are sure of the ingredients."[citation needed]

If various studies appear to contradict one another, then the public may come to distrust such studies. For example, one study may suggest that a given diet or activity raises blood pressure, while another may suggest that it lowers blood pressure. The discrepancy can arise from subtle variations in experimental design, such as differences in the patient groups or research protocols, which are not easily understood by the non-expert. (Media reports usually omit this vital contextual information entirely, because of its complexity.)

By choosing (or rejecting, or modifying) a certain sample, results can be manipulated. Such manipulations need not be malicious or devious; they can arise from unintentional biases of the researcher. The graphs used to summarize data can also be misleading.

Deeper criticisms come from the fact that the hypothesis testing approach, widely used and in many cases required by law or regulation, forces one hypothesis (the null hypothesis) to be "favored," and can also seem to exaggerate the importance of minor differences in large studies. A difference that is highly statistically significant can still be of no practical significance. (See criticism of hypothesis testing and controversy over the null hypothesis.)

One response is to give greater emphasis to the p-value rather than simply reporting whether a hypothesis was rejected at the given level of significance. The p-value, however, does not indicate the size of the effect. Another increasingly common approach is to report confidence intervals. Although these are produced from the same calculations as those of hypothesis tests or p-values, they describe both the size of the effect and the uncertainty surrounding it.

In statistics, survey methodology is the field that studies the sampling of individuals from a population with a view towards making statistical inferences about the population using the sample. Polls about public opinion, such as political beliefs, are reported in the news media in democracies. Other types of survey are used for scientific purposes. Surveys provide important information for all kinds of research fields, e.g., marketing research, psychology, the health professions and sociology.[1] A survey may focus on different topics such as preferences (e.g., for a presidential candidate), behavior (smoking and drinking habits), or factual information (e.g., income), depending on its purpose. Since survey research is always based on a sample of the population, the success of the research depends on how representative the sample is of the population of concern (see also sampling (statistics) and survey sampling).

Survey methodology seeks to identify principles about the design, collection, processing, and analysis of surveys in connection to the cost and quality of survey estimates. It focuses on improving quality within cost constraints, or alternatively, reducing costs for a fixed level of quality. Survey methodology is both a scientific field and a profession. Part of the task of a survey methodologist is making a large set of decisions about thousands of individual features of a survey in order to improve it.[2]
The most important methodological challenges of a survey methodologist include making decisions on how to:[2]
  • Identify and select potential sample members.
  • Contact sampled individuals and collect data from those who are hard to reach (or reluctant to respond).
  • Evaluate and test questions.
  • Select the mode for posing questions and collecting responses.
  • Train and supervise interviewers (if they are involved).
  • Check data files for accuracy and internal consistency.
  • Adjust survey estimates to correct for identified errors.


A misuse of statistics occurs when a statistical argument asserts a falsehood. In some cases, the misuse may be accidental. In others, it is purposeful and for the gain of the perpetrator. When the statistical reason involved is false or misapplied, this constitutes a statistical fallacy.
The false statistics trap can be quite damaging to the quest for knowledge. For example, in medical science, correcting a falsehood may take decades and cost lives.
Misuses can be easy to fall into. Professional scientists, even mathematicians and professional statisticians, can be fooled by even simple methods, despite being careful to check everything. Scientists have been known to fool themselves with statistics through lack of knowledge of probability theory and lack of standardization of their tests.

Discarding unfavorable data

All a company has to do to promote a neutral (useless) product is to find or conduct, for example, 40 studies with a confidence level of 95%. If the product is really useless, this would on average produce one study showing the product was beneficial, one study showing it was harmful and thirty-eight inconclusive studies (38 is 95% of 40). This tactic becomes more effective the more studies there are available. Organizations that do not publish every study they carry out, such as tobacco companies denying a link between smoking and cancer, anti-smoking advocacy groups and media outlets trying to prove a link between smoking and various ailments, or miracle pill vendors, are likely to use this tactic.
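The "one beneficial, one harmful, thirty-eight inconclusive" arithmetic is easy to check with a quick simulation. This is a hypothetical sketch (the batch size and seed are arbitrary): each "study" of a useless product has a 5% chance of looking significant purely by chance.

```python
import random

# Simulate many batches of 40 studies of a genuinely useless product, each
# tested at the 95% confidence level, and count how many come out
# "significant" purely by chance.
random.seed(1)

def run_batch(n_studies=40, alpha=0.05):
    """Count the null studies in one batch that look significant by chance."""
    return sum(1 for _ in range(n_studies) if random.random() < alpha)

batches = [run_batch() for _ in range(10_000)]
avg_significant = sum(batches) / len(batches)
print(f"average 'significant' studies per batch of 40: {avg_significant:.2f}")
# The average hovers around 2: roughly one "beneficial" and one "harmful".
```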

Another common technique is to perform a study that tests a large number of dependent (response) variables at the same time. For example, a study testing the effect of a medical treatment might use as dependent variables the probability of survival, the average number of days spent in the hospital, the patient's self-reported level of pain, etc. This also increases the likelihood that at least one of the variables will by chance show a correlation with the independent (explanatory) variable.
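The inflation from testing many outcomes at once can be made concrete. Assuming (for illustration) that the outcome variables are independent and each is tested at the 5% level, the chance of at least one spurious "hit" is 1 − 0.95^k:

```python
# Why testing many dependent (response) variables inflates false positives:
# with k independent outcome measures each tested at the 5% level, the chance
# that at least one shows a "significant" result by chance alone grows fast.
def chance_of_spurious_hit(k, alpha=0.05):
    """Probability that at least one of k independent tests is a false positive."""
    return 1 - (1 - alpha) ** k

for k in (1, 5, 10, 20):
    print(f"{k:2d} variables -> {chance_of_spurious_hit(k):.1%} chance of a false positive")
# With 20 outcome variables, a spurious hit is more likely than not.
```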

Loaded questions

The answers to surveys can often be manipulated by wording the question in such a way as to induce a prevalence towards a certain answer from the respondent. For example, in polling support for a war, the questions:
  • Do you support the attempt by the USA to bring freedom and democracy to other places in the world?
  • Do you support the unprovoked military action by the USA?
will likely result in data skewed in different directions, although they are both polling about the support for the war.

Another way to do this is to precede the question by information that supports the "desired" answer. For example, more people will likely answer "yes" to the question "Given the increasing burden of taxes on middle-class families, do you support cuts in income tax?" than to the question "Considering the rising federal budget deficit and the desperate need for more revenue, do you support cuts in income tax?"

Overgeneralization

Overgeneralization is a fallacy occurring when a statistic about a particular population is asserted to hold among members of a group for which the original population is not a representative sample.

For example, suppose 100% of apples are observed to be red in summer. The assertion "All apples are red" would be an instance of overgeneralization because the original statistic was true only of a specific subset of apples (those observed in summer), which is not expected to be representative of the population of apples as a whole.
A real-world example of the overgeneralization fallacy can be observed as an artifact of modern polling techniques, which prohibit calling cell phones for over-the-phone political polls. Young people are more likely than other demographic groups to have only a cell phone rather than also having a conventional "landline" phone; young people are also more likely to be liberal, and young people without a landline are even more likely to be liberal than their demographic as a whole. Such polls therefore effectively exclude many voters who are more likely to be liberal.[1]

Thus, a poll examining the voting preferences of young people using this technique could not claim to be representative of young peoples' true voting preferences as a whole without overgeneralizing, because the sample used is not representative of the population as a whole.
Overgeneralization often occurs when information is passed through nontechnical sources, in particular mass media.[2]

Biased samples

Misreporting or misunderstanding of estimated error

If a research team wants to know how 300 million people feel about a certain topic, it would be impractical to ask all of them. However, if the team picks a random sample of about 1000 people, they can be fairly certain that the results given by this group are representative of what the larger group would have said if they had all been asked.

This confidence can actually be quantified by the central limit theorem and other mathematical results. Confidence is expressed as a probability of the true result (for the larger group) being within a certain range of the estimate (the figure for the smaller group). This is the "plus or minus" figure often quoted for statistical surveys. The probability part of the confidence level is usually not mentioned; when it is omitted, a standard figure such as 95% is assumed.

The two numbers are related. If a survey has an estimated error of ±5% at 95% confidence, it also has an estimated error of ±6.6% at 99% confidence. ±x% at 95% confidence is always ±1.32x% at 99% confidence.
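The "1.32" factor is just the ratio of the normal quantiles behind the two confidence levels, which can be verified with the standard library (a sketch, using Python's built-in normal distribution):

```python
from statistics import NormalDist

# The ±x% -> ±1.32x% rule follows from the ratio of normal quantiles.
z95 = NormalDist().inv_cdf(0.975)   # two-sided 95% -> about 1.96
z99 = NormalDist().inv_cdf(0.995)   # two-sided 99% -> about 2.576
ratio = z99 / z95
print(f"z(99%) / z(95%) = {ratio:.3f}")          # about 1.314
print(f"±5% at 95% confidence -> ±{5 * ratio:.1f}% at 99% confidence")
```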

The smaller the estimated error, the larger the required sample, at a given confidence level.
At 95.4% confidence:
  • ±1% would require 10,000 people.
  • ±2% would require 2,500 people.
  • ±3% would require 1,111 people.
  • ±4% would require 625 people.
  • ±5% would require 400 people.
  • ±10% would require 100 people.
  • ±20% would require 25 people.
  • ±25% would require 16 people.
  • ±50% would require 4 people.
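These figures all come from one formula. At 95.4% confidence the z-score is 2, and for a worst-case proportion of 50% the margin of error is E = z·sqrt(p(1−p)/n) = sqrt(1/n), so n = 1/E². A minimal sketch:

```python
# Reproduce the sample-size figures above: for a proportion near 50%
# at 95.4% confidence (z = 2), the margin of error is
# E = z * sqrt(p * (1 - p) / n) = sqrt(1 / n), hence n = 1 / E**2.
def required_sample(margin):
    """People needed for a given margin of error (as a fraction) at z = 2."""
    return round(1 / margin ** 2)

for e in (0.01, 0.02, 0.03, 0.05, 0.10, 0.25):
    print(f"±{e:.0%} -> {required_sample(e):,} people")
```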

Most people assume, because the confidence figure is omitted, that there is a 100% certainty that the true result is within the estimated error. This is not mathematically correct.

Many people may not realize that the randomness of the sample is very important. In practice, many opinion polls are conducted by phone, which distorts the sample in several ways, including exclusion of people who do not have phones, favoring the inclusion of people who have more than one phone, favoring the inclusion of people who are willing to participate in a phone survey over those who refuse, etc. Non-random sampling makes the estimated error unreliable.

On the other hand, many people consider that statistics are inherently unreliable because not everybody is called, or because they themselves are never polled[citation needed]. Many people think that it is impossible to get data on the opinion of dozens of millions of people by polling just a few thousand. This is also inaccurate[citation needed]. A poll with perfect unbiased sampling and truthful answers has a mathematically determined margin of error, which depends only on the number of people polled.
However, often only one margin of error is reported for a survey. When results are reported for population subgroups, a larger margin of error will apply, but this may not be made clear. For example, a survey of 1000 people may contain 100 people from a certain ethnic or economic group. The results focusing on that group will be much less reliable than results for the full population. If the margin of error for the full sample was 4%, say, then the margin of error for such a subgroup could be around 13%.
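The subgroup figure follows from the fact that the margin of error scales with 1/sqrt(n). A sketch using the numbers above (1000 respondents at ±4%, a subgroup of 100):

```python
import math

# Margin of error scales as 1/sqrt(n), so a subgroup of 100 drawn from a
# survey of 1000 has a margin roughly sqrt(1000/100) ~ 3.16 times larger.
def subgroup_margin(full_margin, full_n, sub_n):
    """Approximate margin of error for a subgroup of a larger sample."""
    return full_margin * math.sqrt(full_n / sub_n)

print(f"subgroup margin: ±{subgroup_margin(4.0, 1000, 100):.1f}%")
# About ±12.6%, i.e. the "around 13%" quoted for the full survey's ±4%.
```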
There are also many other measurement problems in population surveys.
The problems mentioned above apply to all statistical experiments, not just population surveys.

False causality

When a statistical test shows a correlation between A and B, there are usually five possibilities:
  1. A causes B.
  2. B causes A.
  3. A and B both partly cause each other.
  4. A and B are both caused by a third factor, C.
  5. The observed correlation was due purely to chance.
The fifth possibility can be quantified by statistical tests that can calculate the probability that the correlation observed would be as large as it is just by chance if, in fact, there is no relationship between the variables. However, even if that possibility has a small probability, there are still the four others.
If the number of people buying ice cream at the beach is statistically related to the number of people who drown at the beach, then nobody would claim ice cream causes drowning because it's obvious that it isn't so. (In this case, both drowning and ice cream buying are clearly related by a third factor: the number of people at the beach).

This fallacy can be used, for example, to prove that exposure to a chemical causes cancer. Replace "number of people buying ice cream" with "number of people exposed to chemical X", and "number of people who drown" with "number of people who get cancer", and many people will believe you. In such a situation, there may be a statistical correlation even if there is no real effect. For example, if there is a perception that a chemical site is "dangerous" (even if it really isn't) property values in the area will decrease, which will entice more low-income families to move to that area. If low-income families are more likely to get cancer than high-income families (this can happen for many reasons, such as a poorer diet or less access to medical care) then rates of cancer will go up, even though the chemical itself is not dangerous. It is believed[3] that this is exactly what happened with some of the early studies showing a link between EMF (electromagnetic fields) from power lines and cancer.[4]

In well-designed studies, the effect of false causality can be eliminated by assigning some people into a "treatment group" and some people into a "control group" at random, and giving the treatment group the treatment and not giving the control group the treatment. In the above example, a researcher might expose one group of people to chemical X and leave a second group unexposed. If the first group had higher cancer rates, the researcher knows that there is no third factor that affected whether a person was exposed because he controlled who was exposed or not, and he assigned people to the exposed and non-exposed groups at random. However, in many applications, actually doing an experiment in this way is either prohibitively expensive, infeasible, unethical, illegal, or downright impossible. For example, it is highly unlikely that an IRB would accept an experiment that involved intentionally exposing people to a dangerous substance in order to test its toxicity. The obvious ethical implications of such types of experiments limit researchers' ability to empirically test causation.

Proof of the null hypothesis

In a statistical test, the null hypothesis (H0) is considered valid until enough data prove it wrong. Then H0 is rejected and the alternative hypothesis (HA) is accepted as correct. By chance this can happen even though H0 is true, with a probability denoted alpha, the significance level. This can be compared to the judicial process, where the accused is considered innocent (H0) until proven guilty (HA) beyond reasonable doubt (alpha).

But if data does not give us enough proof to reject H0, this does not automatically prove that H0 is correct. If, for example, a tobacco producer wishes to demonstrate that its products are safe, it can easily conduct a test with a small sample of smokers versus a small sample of non-smokers. It is unlikely that any of them will develop lung cancer (and even if they do, the difference between the groups has to be very big in order to reject H0). Therefore it is likely—even when smoking is dangerous—that our test will not reject H0. If H0 is accepted, it does not automatically follow that smoking is proven harmless. The test has insufficient power to reject H0, so the test is useless and the value of the "proof" of H0 is also null.
This can—using the judicial analogue above—be compared with the truly guilty defendant who is released just because the proof is not enough for a guilty verdict. This does not prove the defendant's innocence, but only that there is not proof enough for a guilty verdict. In other words, "absence of evidence" does not imply "evidence of absence".
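The tobacco example can be sketched numerically. All the risk figures below are invented purely for illustration: even when the true difference between groups is large, a tiny sample usually detects nothing at all.

```python
import random

# Hypothetical numbers: suppose smokers have a 10% risk of some disease
# versus 1% for non-smokers. With only 10 people per group, most
# experiments observe no difference whatsoever, so failing to reject H0
# proves nothing about safety.
random.seed(2)

def tiny_study(n=10, p_smoker=0.10, p_nonsmoker=0.01):
    """One underpowered study: counts of affected people in each group."""
    smokers = sum(random.random() < p_smoker for _ in range(n))
    nonsmokers = sum(random.random() < p_nonsmoker for _ in range(n))
    return smokers, nonsmokers

trials = 10_000
no_difference = sum(1 for _ in range(trials)
                    if (lambda s_ns: s_ns[0] == s_ns[1])(tiny_study()))
print(f"{no_difference / trials:.0%} of tiny studies show zero difference")
```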

Data dredging

Data dredging is an abuse of data mining. In data dredging, large compilations of data are examined in order to find a correlation, without any pre-defined choice of hypothesis to be tested. Since the confidence level required to establish a relationship between two parameters is usually chosen to be 95% (meaning that, if no relationship exists, chance alone would produce an apparently significant one only 5% of the time), there is thus a 5% chance of finding a correlation between any two sets of completely random variables. Given that data dredging efforts typically examine large datasets with many variables, and hence even larger numbers of pairs of variables, spurious but apparently statistically significant results are almost certain to be found by any such study.

Note that data dredging is a valid way of finding a possible hypothesis but that hypothesis must then be tested with data not used in the original dredging. The misuse comes in when that hypothesis is stated as fact without further validation.

Data manipulation

Informally called "fudging the data," this practice includes selective reporting (see also publication bias) and even simply making up false data.
Examples of selective reporting abound. The easiest and most common examples involve choosing a group of results that follow a pattern consistent with the preferred hypothesis while ignoring other results or "data runs" that contradict the hypothesis.
Psychic researchers have long disputed studies showing people with ESP ability. Critics accuse ESP proponents of only publishing experiments with positive results and shelving those that show negative results. A "positive result" is a test run (or data run) in which the subject guesses a hidden card, etc., at a much higher frequency than random chance would predict.
The deception involved in both cases is that the hypothesis is not confirmed by the totality of the experiments - only by a tiny, selected group of "successful" tests.
Scientists, in general, question the validity of study results that cannot be reproduced by other investigators. However, some scientists refuse to publish their data and methods.[5]

Other fallacies

The post facto fallacy assumes that an event whose likelihood can be measured in advance had the same likelihood after it has already occurred. Thus, if someone has already tossed 9 coins and each has come up heads, people tend to assume that the likelihood of a tenth toss also being heads is 1023 to 1 against (which it was before the first coin was tossed), when in fact the chance of the tenth head is 1 to 1, even odds. This error has led, in the UK, to the false imprisonment of women for murder: courts were given the prior statistical likelihood of three of a woman's children dying from Sudden Infant Death Syndrome as if it were the chance that her already dead children had died from the syndrome. Roy Meadow testified that the chances the children had died of Sudden Infant Death Syndrome were millions to one against, and convictions were handed down in spite of the statistical inevitability that a few women would suffer this tragedy. Meadow was subsequently struck off the U.K. Medical Register for giving "erroneous" and "misleading" evidence, although this was later reversed by the courts.
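The coin-toss numbers can be checked exactly. The 1023-to-1 odds apply only in advance; after nine heads, the conditional probability of a tenth head is still one half:

```python
from fractions import Fraction

# Exact arithmetic for the post facto fallacy.
p_ten_heads = Fraction(1, 2) ** 10      # in advance: 1/1024, i.e. 1023 to 1 against
p_tenth_given_nine = Fraction(1, 2)     # after nine heads: the coin has no memory
print(p_ten_heads, p_tenth_given_nine)
```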


Saturday, July 14, 2012

HELPING STRANGERS

CROSS-CULTURAL DIFFERENCES IN HELPING STRANGERS,
ROBERT V. LEVINE,
ARA NORENZAYAN,
KAREN PHILBRICK

http://en.wikipedia.org/wiki/Helping_behavior

Helping is influenced by the economic environment within a culture. In general, the frequency of helping behavior is inversely related to a country's economic status.

The major explanation for people failing to stop and help a victim is haste. People who were in a hurry often did not even notice the victim, although, once they arrived at their destination and had time to think about the consequences, they felt some guilt and anxiety.

Read more: http://www.experiment-resources.com/helping-behavior.html#ixzz1bWhUDhWT

Saturday, June 30, 2012

nothing exists outside the mind



Solipsism (/ˈsɒlɨpsɪzəm/) is the philosophical idea that only one's own mind is sure to exist. The term comes from the Latin solus (alone) and ipse (self). Solipsism as an epistemological position holds that knowledge of anything outside one's own mind is unsure. The external world and other minds cannot be known, and might not exist outside the mind. As a metaphysical position, solipsism goes further to the conclusion that the world and other minds do not exist. As such it is the only epistemological position that, by its own postulate, is both irrefutable and yet indefensible in the same manner. Although the number of individuals sincerely espousing solipsism has been small, it is not uncommon for one philosopher to accuse another's arguments of entailing solipsism as an unwanted consequence, in a kind of reductio ad absurdum. In the history of philosophy, solipsism has served as a skeptical hypothesis.

Wahrnehmung ist Falschnehmung ("perception is misperception")


The earth is as old as we are, no older. How could it be older? Nothing exists except through human consciousness. Before man there was nothing. After man, if he could come to an end, there would be nothing. Outside man there is nothing.

O'Brien (George Orwell, 1984)

Happiness


Happiness economics is the quantitative study of happiness, positive and negative affect, well-being, quality of life, life satisfaction and related concepts, typically combining economics with other fields such as psychology and sociology. It typically treats such happiness-related measures, rather than wealth, income or profit, as something to be maximized. The field has grown substantially since the late 20th century, for example by the development of methods, surveys and indices to measure happiness and related concepts.

binaural beat mixes

I-Doser is an application for the playback of proprietary audio content. The developer claims the separately purchasable content aims to simulate specific mental states through the use of binaural beats, and much of it is named after prohibited recreational drugs.[1] The I-Doser player has been downloaded more than a million times[2] and is based on the audio technology of a GPL-licensed binaural beat generator, SBaGen.[3] The player can be downloaded for free, but the audio content has to be purchased.

Research into the neurological technology behind I-Doser is sparse. Peer-reviewed studies exist suggesting that some specific binaural beat mixes can affect aspects of mental performance and mood,[4][5] act as analgesic supplements[6] or affect perceptions,[7] but there have been no formal studies of any effects of mixes particular to I-Doser. Researchers from Oregon Health and Science University interviewed about I-Doser have expressed skepticism over its scientific basis, citing a four person controlled study of binaural beats that demonstrated no evidence of brainwave entrainment.[8] Other universities have also stated skepticism.[9]



Dr. Jeffrey D. Thompson, D.C., B.F.A.

Lies

Dead giveaway

By Peter Collett
July 12 2003


So, you think you got away with that little fib? Didn't smile or fidget too much, kept eye contact? Well, think again. The truth is, you can't hide those lies.
It has been estimated that we lie to a third of the people we meet each day. Lying is especially common when people are trying to impress each other, and that's why it's so prevalent in dating and courtship. Robert Feldman, at the University of Massachusetts, found that 60 per cent of the people who took part in one of his studies lied at least once during a 10-minute meeting, and that most of them told two or three lies in that time.
Research on lying shows that there is no difference in the number of lies told by men and women, but that there are differences in the types they tell - men are more likely to produce lies designed to make themselves look impressive, while women are more likely to tell lies intended to make others feel good. Women are generally more inclined than men to express positive opinions, about both the things they like and the things they don't. Consequently, when women are faced with the possibility of upsetting someone - for example when given a present they don't want - they're more likely to try to protect the other person's feelings by telling a white lie. Lies lubricate interpersonal relations; without them, our social life would soon grind to a halt.
Detection Tells
Although lies form a large part of our exchanges with other people, we're actually not very good at telling whether someone is deceiving us or telling the truth. This isn't for lack of evidence, because 90 per cent of lies are accompanied by tells which, like a criminal's fingerprints, leave behind traces of deception.
People often pride themselves on their ability to detect if someone is lying to them, especially when they know that person well. How often have you heard a mother announce that her children could never lie to her because she "knows them too well", or a young man claim that his girlfriend could never pull the wool over his eyes because he can "see right through her"? In fact, the research on lie detection suggests that both the mother and the young man are probably mistaken, because people detect only about 56 per cent of the lies they're exposed to, which is slightly above what you'd expect by chance. It's also been discovered that as people get to know each other better, their ability to detect each other's lies doesn't improve - it sometimes gets worse.
This happens for various reasons. One is that as people get to know each other well, they become more confident that they can spot each other's lies. However, their accuracy doesn't necessarily increase - it's usually just their confidence that grows. Moreover, when people get to know each other well, they're more likely to allow their emotions to get in the way of their analytical skills. Finally, as each person gets to know what type of evidence of deceit the other person is looking for, they're able to modify their behaviour to reduce the chances of detection.
Eye Tells
Most people believe that gaze aversion is a sign of lying. They assume that because liars feel guilty, embarrassed and apprehensive, they find it difficult to look their victim in the eye. This is not what happens. First, patterns of gaze are quite unstable - while some liars avert their eyes, others actually increase the amount of time they spend looking at the other person.
As gaze is fairly easy to control, liars can use their eyes to project an image of honesty. Knowing that other people assume gaze aversion is a sign of lying, many liars do the exact opposite - they deliberately increase their gaze to give the impression that they're telling the truth.
Another supposed sign of lying is rapid blinking. It's true that when we become aroused or our mind is racing, there's a corresponding increase in our blinking rate. Our normal rate is about 20 blinks per minute, but it can increase to four or five times that figure when we feel under pressure. When liars are searching for an answer to an awkward question, their thought processes speed up. In this kind of situation, lying is frequently associated with blinking. But we need to remember that there are times when people have a high blinking rate, not because they're lying, but because they're under pressure. Also, there are times when liars show normal rates.
Body Tells
Fidgeting and awkward hand movements are also thought to be signs of deceit - the assumption being that when people are lying they become agitated and this gives rise to nervous movements of the hands. There is a class of gestures called "adaptors" which consists of actions like stroking one's hair, scratching one's head or rubbing the hands together. When people tell lies they sometimes feel guilty or worried about being found out, and these concerns can cause them to produce adaptors. This tends to happen when the stakes are high or when the liar isn't very good at deception. But most of the time the exact opposite happens. Again, because liars are worried about revealing themselves, they tend to inhibit their normal gestural habits. As a result their actions are likely to become more frozen, not more animated.
Movements of the hands, like those of the eyes, tend to be under conscious control, and that's why the hands aren't a reliable source of information about lying. Video research shows that when people are asked to tell a lie they tend to produce more signs of deception in the lower rather than the upper part of the body. Legs and feet are an underrated source of information about lying. It seems that liars focus their efforts at concealment on their hands, arms and face, because they know that's what other people will be watching. Because their feet feel remote, liars don't bother about them - but it's often tiny adjustments of the legs and feet that betray them.
Nose Tells
One gesture that reveals a lie is the "mouth-cover". When this happens, it's as if the liar is taking precautions to cover up the source of their deception, acting on the assumption that if other people can't see their mouth then they won't know where the lie has come from. Mouth-covering actions can range from full-blown versions where the hand completely covers the mouth, to gestures where the hand supports the chin and a finger surreptitiously touches the corner of the mouth.
There is, however, a substitute for touching the mouth, which is touching the nose. By touching their nose, the liar experiences the momentary comfort of covering his mouth, without any risk of drawing attention to what they are really doing. In this role, nose-touching functions as a substitute for mouth-covering. It's a stealth tell - it looks as if someone is scratching their nose, but their real intention is to cover the mouth.
There is also a school of thought that says nose-touching is a sign of deceit quite separate from anything to do with the mouth. Two proponents of this idea are Dr Alan Hirsch, of the Smell and Taste Treatment and Research Foundation in Chicago, and Dr Charles Wolf, of the University of Utah. They made a detailed analysis of Bill Clinton's grand jury testimony in August 1998, when the president denied having had sex with Monica Lewinsky. They discovered that while Clinton was telling the truth he hardly touched his nose at all, but that when he lied about the affair, he touched his nose once every four minutes on average.
Hirsch called this the "Pinocchio syndrome", after the character whose wooden nose becomes longer every time he tells a lie. Hirsch suggested that when people lie their nose becomes engorged with blood, and that this produces a sensation that is alleviated by touching or rubbing the nose.
Masking Tells
When someone knowingly tells a lie they have to hide two things - first the truth, and second any emotions that might arise out of their attempts at concealment. The emotions that liars feel are generally negative - guilt or fear of being found out - but liars can also experience the thrill of pulling the wool over other people's eyes, what Paul Ekman, a psychology professor at the University of California, has called "duping delight".
When people tell small, innocuous lies they usually don't feel any negative emotions. However, when they're telling big lies, and there's a lot at stake, they often experience very powerful negative emotions that need to be concealed if the lie is to remain hidden. A negative emotion can be concealed by turning away the head, by covering the face with the hands, or by masking it with a neutral or a positive emotion.
The strategies of turning away and covering the face don't always work because they tend to draw attention to what the liar is trying to conceal. Masking, on the other hand, enables liars to present an exterior that isn't necessarily connected with lying.
The most commonly used masks are the "straight face" and the smile. The straight face requires the least effort - in order to mask their negative emotions, all the liar needs to do is put their face into repose. The smile is potentially more effective as a mask because it suggests that the person is feeling happy and contented - in other words, experiencing emotions that one doesn't normally associate with lying.
Smiling Tells
If you ask people how to spot a liar, they often mention smiling. They'll tell you that when someone is lying they're more likely to use a smile to mask their true feelings. However, research on lying shows it's the other way round - people who are lying smile less than those who are telling the truth. It seems that liars occasionally adjust their behaviour so that it's the opposite of what everyone expects of someone telling a lie. This doesn't mean that liars have abandoned smiling - it simply shows that they smile less than people who are telling the truth. When dissemblers do smile they often give themselves away by producing a counterfeit smile. There are several identifying features of counterfeit smiles:
Duration. They are sustained for much longer than genuine smiles.
Assembly. They are "put together" more rapidly than genuine smiles. They are also dismantled more quickly.
Location. They tend to be confined to the lower half of the face.
Symmetry. Genuine smiles appear on both sides of the face, whereas counterfeits sometimes appear more strongly on one side of the face (usually the right side).
Talking Tells
Most people believe that liars give themselves away by what they do, rather than by what they say or how they say it. In fact, it's the other way round - the best indicators of lying are to be found in people's speech. Aldert Vrij, a psychology professor at the University of Portsmouth in Britain, suggests that when people try to catch liars they pay too much attention to their non-verbal behaviour and not enough to their speech. This, he points out, is reflected in the tendency to overestimate the chances of detecting deceit by watching someone's behaviour, and to underestimate the chances of catching liars by listening to what they say. Several features of speech provide clues to lying:
Circumlocution. Liars often beat about the bush. They tend to give long-winded explanations with lots of digressions, but when they're asked a question they're likely to give a short answer.
Outlining. Liars' explanations are painted with broad brushstrokes, with very little attention to detail. For example, a liar will tell you that they went for a pizza, but probably won't tell you where or what kind.
Smokescreens. Liars often produce answers designed to confuse - they sound as if they make sense, but they don't. A famous example is Clinton's response during the grand jury testimony arising from the Paula Jones harassment case, when he was asked about his relationship with Monica Lewinsky: "That depends on what the meaning of 'is' is."
Negatives. Political lies are frequently couched as denials - remember Clinton's famous: "I did not have sexual relations with that woman, Miss Lewinsky." And during the Watergate scandal, Richard Nixon said, "I am not a crook." He didn't say, "I am an honest man."
Word Choice. Liars use words like "I", "me" and "mine" less frequently than people who are telling the truth.
Disclaimers. Liars are more likely to use disclaimers such as "You won't believe this", "I know this sounds strange, but" and "Let me assure you".
Formality. When people are telling the truth in an informal situation they are more likely to use an elided form - for example, to say "don't" instead of "do not". Someone who is telling a lie in the same situation is more likely to say "do not" instead of "don't". That's because people become more tense and formal when they lie.
Tense. Without realising it, liars tend to increase the psychological distance between themselves and the event they're describing. As we have seen, one way they do this is through their choice of words. Another is by using the past rather than the present tense.
Speed. Telling a lie requires a lot of mental work because, in addition to constructing a credible line, the liar needs to keep the truth separated from the lie. This places demands on the capacities of the liar, which in turn can slow them down. That's why people pause before producing a lie, and why lies tend to be delivered at a slower pace than the truth.