POP PSYCHOLOGY


THE INTERNET'S BEST EVOLUTIONARY PSYCHOLO-GUY





LEARNING ABOUT PRIVILEGE MAKES LIBERALS LOOK MORE CONSERVATIVE

Posted on June 6, 2019 by Jesse Marczyk

 

> Not a good representation of poverty when people usually don’t use cash
> anymore

Why are poor people poor? Your answer to that question determines a lot about
your feelings and response towards them. If you think people are poor because
they’re good social investments who happen to be experiencing a patch of bad
luck outside of their control – in other words, that their poverty isn’t really
their fault – your interest in seeing that they receive assistance increases
(http://popsych.org/who-deserves-healthcare-and-unemployment-benefits/). On the
other hand, if people are perceived to be poor because of undesirable
personality traits – like laziness – and their poverty is their own fault, then
people are less interested in providing them with assistance
(http://popsych.org/socially-strategic-welfare/). This makes sense in light of the
prospect that people don’t help others simply because those other people need
help. A psychological mechanism that encouraged its bearer to aid others at a
personal cost wouldn’t do much to help its bearer succeed on the evolutionary
stage unless those personal costs were later recouped. You help them at time A
so that you get something in return at time B that outweighs the initial helping
costs. If you’re helping someone who needs help because they’re lazy, it’s less
likely they’re going to suddenly find motivation to help you later than if you
helped someone who’s just unlucky.



> God helps those who can help him later

The extent to which people differ in their desire to help the poor, then, likely
varies with the attributions they make for poverty: If people largely believe
poverty isn’t the fault of the poor, they will favor helping the poor more
broadly, while those who believe poverty is the fault of the poor will disfavor
helping them, in general. This divide should go a long way toward explaining why, in
the US, liberals tend to favor social programs for helping the poor more than
conservatives do. Indeed, that precise pattern popped up in a recent paper by
Cooley et al. (2019) when participants read the following description of a
made-up poor person:

> Kevin, a[n]…American living in New York City, would say his life has been
> defined by poverty. As a child, Kevin was raised by a single mom who struggled
> to balance several part-time jobs simply to pay the bills. Most winters, they
> had no heat; and, it was a daily question whether they would have enough to
> eat. In late 2016, Kevin began to receive welfare assistance. Since then, he
> has not applied for any jobs and instead has cycled between jail cells,
> shelters, emergency rooms and the streets. Although Kevin would like to be
> financially independent, he doesn’t feel he has the skills or ability to
> obtain a well-paying job.

The results showed that as political liberalism increased, participants tended
both to report more sympathy for Kevin and to make more external
attributions for the causes of his poverty. Liberals were more interested in
helping because they blamed Kevin less for his circumstances.

If you fancy yourself a liberal, take this time to pat yourself on the back for
caring about Kevin’s plight. Good for you. If you fancy yourself a conservative,
you can also take this time to pat yourself on the back for your realism about
why Kevin is poor.

Now if that was all there was to this study, there might not be too much to talk
about. However, the focus of this paper was more specific than general attitudes
about poverty and political affiliation. Instead, the authors also looked at
Kevin’s race: What happens when Kevin is described as White or Black in that
opening sentence? As it turns out…nothing. While both liberals and conservatives
were modestly more sympathetic towards a Black Kevin’s plight, these differences
weren’t significant. Race didn’t seem to enter the equation when people were
looking at this specific example of a poor person. That should be a good thing,
I would think; people were judging Kevin as Kevin, rather than as a proxy for
his entire race.

Again, if that’s all there was to this study, there might still not be much to
talk about. It’s the final twist of the experiment that brings it all home:
how do people respond to a white/black Kevin after reading a bit about white
privilege?



> See how everyone’s angry here? That’s called foreshadowing

The experiment (number 2 in the paper) went as follows: 650 participants
began by reading a story. This story was either about the importance of a daily
routine (the neutral control condition) or about white privilege (the
experimental condition). Specifically, the privilege story read:

> In America, there is a long history of White people having more power than
> other racial groups (e.g., Black people). Although many people think of racial
> inequality as decreasing, there are still privileges that are experienced by
> White Americans that are not true for other racial groups. For example, in her
> essay “White Privilege: Unpacking the Invisible Knapsack” Peggy McIntosh, PhD,
> lists different privileges that she experiences as a White person living in
> America.

Four specific examples were provided, including being able to be in the company
of people of your own race most of the time, seeing your race widely and
positively presented in the media, not being asked to speak on behalf of your
racial group, and not having your race work against you if you need legal or
medical help.

Once participants had read that story, they were presented with the Kevin
story from above and asked how much sympathy they felt for him and
how much they blamed him for his situation before finally completing some
demographic measures. This allowed the authors to probe what effect this brief
discussion of white privilege had on people’s responses.

As it turned out, conservatives didn’t seem to take much away from that
brief lesson on privilege: on a scale of 0 (strongly disagree) to 100 (strongly
agree), conservatives reported an equal amount of sympathy for Kevin whether he
was White or Black (M = 59 for White and M = 61 for Black). As these numbers
closely mirrored the values conservatives reported in the control condition, we
can conclude that the privilege talk made little difference to them.

The liberals, on the other hand, were listening. In the experimental condition,
they reported more sympathy for the Black Kevin (M = 76) than the White one (M =
60). So liberals and conservatives seemed to “agree” about how much sympathy
White Kevin deserved, while liberals cared more about Black Kevin. Does that
mean the privilege lesson made liberals care about Black Kevin more? Not at all.
Examining the control condition makes the most interesting finding clear: when
they were simply reading about routines, liberals cared as much about White
Kevin (M = 71) as Black Kevin (M = 74). Comparing the numbers from the control
and experimental groups, we see the following pattern of results emerge: when
not thinking about white privilege, liberals cared more about poor people than
conservatives did, and neither seemed to care about race. When white privilege
was added to the equation, the only difference that emerged was that liberals
started to care less about White Kevin and blamed him more for his problems,
without showing any increase in care for Black Kevin.
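
To make that pattern easier to scan, here are the sympathy means quoted above gathered in one place. This is just a summary of the numbers in the text; the conservative control-condition means aren’t quoted numerically in the post, so they’re left as None rather than guessed at:

```python
# Sympathy means (0-100 scale) reported in the text, arranged so the
# condition x ideology x target-race pattern is visible at a glance.
means = {
    "control (routine story)": {
        "liberal":      {"White Kevin": 71,   "Black Kevin": 74},
        "conservative": {"White Kevin": None, "Black Kevin": None},  # not quoted
    },
    "white-privilege story": {
        "liberal":      {"White Kevin": 60,   "Black Kevin": 76},
        "conservative": {"White Kevin": 59,   "Black Kevin": 61},
    },
}

for condition, by_ideology in means.items():
    for ideology, targets in by_ideology.items():
        print(f"{condition:24} {ideology:12} {targets}")
```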



> “At least I didn’t help that poor white guy, which makes me a good person”

In sum, it looked like briefly reading about white privilege made liberals more
conservative in their responses towards poor white people. It was a purely
negative effect, with no apparent benefits for poor black people. Conservatives,
on the other hand, remained consistent, suggesting the privilege talks weren’t
doing any good there either. While it is only speculative, it is not hard to
imagine how these effects might carry into other domains – like gender – or how
they might be made more extreme when the discussion of white privilege isn’t
limited to a short passage but instead begins to take up increasingly larger
portions of social discourse. If this results in less care for certain groups
without a corresponding increase in care for others, it should be a cause for
concern to anyone interested in seeing poverty addressed effectively. It might
also be a concern if your interest is in treating people as individuals, instead
of as proxies for an entire group of people.

References: Cooley, E., Brown-Iannuzzi, J., Lei, R., & Cipolli, W.
(2019). Complex Intersections of Race and Class: Among Social Liberals, Learning
About White Privilege Reduces Sympathy, Increases Blame, and Decreases External
Attributions for White People Struggling With Poverty. Journal of Experimental
Psychology: General. http://dx.doi.org/10.1037/xge0000605

 

Posted in Altruism, EvoPsych, Race


IS MAKEUP A VALID CUE OF SOCIOSEXUALITY?

Posted on October 6, 2018 by Jesse Marczyk

> Nothing like a good makeup exam

Being wrong is costly. If I think you’re aggressive when you’re not, I will
behave inappropriately around you and incur costs I need not face. If I think
you can help me when you can’t, I will hinder my own goals and give up
my search for valuable assistance. Nevertheless, people are wrong
constantly. Being wrong itself isn’t that unusual, as it takes the proper
cognitive faculties, time, and energy to be right. The world is just a messy
place, and there are opportunity costs to gathering, scrutinizing, and
processing information, as well as diminishing returns on that search. Being
wrong is costly, but so is being right, and those costs need to be balanced
against each other, given limited resources. What is unusual is when people are
systematically wrong about something; when they’re wrong in the same particular
direction. If, say, 90% of people believe something incorrectly in the same way,
that’s certainly a strange state of affairs that requires special kinds of
explanations.

As such, if you believe people are systematically wrong about something, there
are two things you should probably do: (1) earnestly assess whether your belief
about them being wrong is accurate – since it’s often more likely you’re wrong
than everyone else is – and then, if they actually are wrong, (2) try to furnish
the proper explanation for that state of affairs and test it.

Putting that in an example I’ve discussed before, some literature claims that
men over-perceive women’s sexual interest (here). In other words, the belief
here is that many men are systematically wrong in the same way; they’re making
the same error. One special explanation provided for this was that men
over-perceiving sexual interest would lead them to approach more women (they
otherwise wouldn’t) and ultimately get more mating opportunities as a result. So
men are wrong because being wrong brings more benefits than costs. There are
some complications with that explanation, however. First, why would we expect
men to not perceive women’s interests more accurately (she’s not interested in
me) but approach them anyway (the odds are low, but I might as well go for it)?
That would lead to the same end point (approaching lots of women) without the
inaccuracy that might have other consequences (like failing to pursue a woman
who’s actually interested because you mistakenly believe a different woman is
interested when she isn’t). The special explanation also falls apart when you
consider that when you ask women about other women’s sexual interest, you get
the same result as the men. So either men and women are over-perceiving women’s
sexual interest, or perhaps they aren’t wrong. Perhaps individual women are
under-reporting their own interest for some social reasons. Maybe the women’s
self-reports are inaccurate (consciously or not), rather than everyone else
being wrong about them. The explanation that one person is wrong, rather than
everyone else is, feels more plausible.

Speaking of women’s sexual interest and people being wrong, let’s talk about a
new paper touting the idea that everyone is wrong about women’s makeup usage.
Specifically, lots of people seem to be using makeup usage as a cue to a woman’s
short-term sexual interest, and the researchers believe they’re all wrong to do
so; that makeup is an invalid cue of sociosexuality.



> Aren’t they all…

This was highlighted in three studies, which I’ll cover quickly. In the first,
69 women were photographed with and without their day-to-day makeup. Raters –
182 of them – judged those pictures in terms of (1) how much makeup they felt
the women were wearing, (2) how attractive the faces were, and (3) how much they
felt the women pictured would be comfortable with and enjoy having casual sex
with different partners; a measure of sociosexuality. The results showed that
male (d = 0.64) and female (d = 0.88) raters judged women with makeup as more
attractive than the same women without it, and also judged the women wearing
makeup as more comfortable with casual sex. For those curious, this latter
difference was larger for female raters (d = 1.14) than male ones (d = 0.32).
Putting that into numbers, men rated women wearing makeup as about 0.2 points
more likely to enjoy casual sex on a scale from 1-9; for women, this difference
was closer to 0.5 points. Further, men’s perceptions of women’s interest in
casual sex seemed to be driven less by makeup per se than by a woman’s
perceived attractiveness (and since makeup made the women look more
attractive, they also looked more interested in casual sex). The primary finding
here, however, is that the perception was demonstrated: people (men and women)
use women’s makeup usage as a cue to their sociosexuality.
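
As a rough sketch of the arithmetic behind that conversion: Cohen’s d is the mean difference in pooled-standard-deviation units, so the raw difference is just d times the standard deviation. The SD below is a back-of-the-envelope assumption implied by the post’s numbers, not a value reported in the paper:

```python
def d_to_raw(d: float, pooled_sd: float) -> float:
    """Convert a standardized mean difference (Cohen's d) to raw scale units."""
    return d * pooled_sd

# Hypothetical pooled SD for the 1-9 casual-sex item, chosen to roughly
# reproduce the "about 0.2 points" figure quoted above.
assumed_sd = 0.6

print(round(d_to_raw(0.32, assumed_sd), 2))  # male raters: ~0.19 points
print(round(d_to_raw(1.14, assumed_sd), 2))  # female raters: ~0.68 points
```

(The implied SDs can’t be pinned down exactly from the rounded figures in the post; the point is only that a d of 0.32 corresponds to a small shift on a 1-9 scale.)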

Also, men were worse than women at figuring out when women weren’t wearing any
makeup, likely owing to a lack of experience with the topic. Here, being
wrong isn’t surprising.

The second study asked the women wearing the makeup themselves to answer
questions about their own sociosexuality (using several items, rather than a
single question). They were also asked about how much time they spent applying
makeup and how much they spent on it on each month. The primary result here was
a reported lack of correlation between women’s scores on the sociosexuality
questions and the time they spent applying makeup. In other words, people
thought makeup was correlated to sexual attitudes and behaviors, but it wasn’t.
People were wrong, but in predictable ways. This ought to require a special kind
of explanation, and we’ll get to that soon.

The final study examined the relationship between people’s perceptions of a
woman’s sociosexuality and her own self-reports of it. Both men and women again
seemed to get it wrong, with negative correlations showing up between perceived
and self-reported sociosexuality. Both went in a consistent direction, though
only the male correlations were significant (male raters about r = -0.33; female
raters r = -0.21). Once attractiveness was controlled for, however, the male
correlation was similarly non-significant and comparable to women’s ratings
(average r = -0.22).
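
For readers unfamiliar with what “controlled for” means here, below is a minimal sketch of a first-order partial correlation, the standard way of holding a third variable constant. The r values are placeholders for illustration, not the paper’s actual correlation matrix:

```python
from math import sqrt

def partial_r(r_xy: float, r_xz: float, r_yz: float) -> float:
    """Correlation of x and y after partialling out z (first-order partial)."""
    return (r_xy - r_xz * r_yz) / sqrt((1 - r_xz**2) * (1 - r_yz**2))

# x = perceived sociosexuality, y = self-reported sociosexuality,
# z = rated attractiveness. Placeholder values only:
print(round(partial_r(r_xy=-0.33, r_xz=0.50, r_yz=-0.40), 2))  # ~ -0.16
```

Once z accounts for part of the raw association, the remaining x-y correlation can shrink considerably, which is the pattern reported for the male raters.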

The general pattern of results, descriptively, is that men and women seem to
perceive women wearing makeup as being more interested in casual sex than women
not wearing makeup. However, the women themselves don’t self-report being more
interested in casual sex; if anything, they report being less interested in it
than people perceive. Isn’t it funny how so many people are consistently and
predictably wrong about this? Perhaps. Then again, I think there’s more to say
about the matter which isn’t explored in much detail within the paper.



> “This paper is an invalid cue of the truth”

The first criticism of this research that jumped out at me is that the
researchers only recruited women who used makeup regularly to be photographed,
rated, and surveyed. In that context, they report no relationship between makeup
use and sociosexuality (which we’ll get to in a minute, as that’s another
important matter). Restricting their sample in this way naturally reduces the
variance in the population, which might make it harder to find a real
relationship that actually exists. For instance, if I was curious whether height
is an important factor in basketball skill, I might find different answers to
this question if I surveyed the general population (which contains lots of tall
and short people) than if I only surveyed professional basketball players (who
all tend to be taller than average; often substantially so). To the authors’
credit, they do mention this point…in their discussion, as more of an
afterthought. This suggests to me the point was raised by a reviewer and was
only added to the paper after the fact, as awareness of this sampling issue
would usually encourage researchers to examine the question in advance, instead
of just noting at the end that they failed to do so. So, if a relationship exists
between makeup use and interest in casual sex, they might have missed it through
selective sampling.
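
To see how much damage this kind of range restriction can do, here is a quick simulation with made-up numbers (not the paper’s data): a genuine population-level correlation shrinks substantially once you sample only from the top of the predictor’s range, as in the basketball example above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

height = rng.normal(0, 1, n)
# Construct "skill" so the true population correlation is 0.5.
skill = 0.5 * height + np.sqrt(1 - 0.5**2) * rng.normal(0, 1, n)

full_r = np.corrcoef(height, skill)[0, 1]
pros = height > 1.5  # "professional players": only the tallest get sampled
restricted_r = np.corrcoef(height[pros], skill[pros])[0, 1]

print(f"full-population r: {full_r:.2f}")        # ~0.50
print(f"restricted-sample r: {restricted_r:.2f}")  # ~0.2: much weaker
```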

The second large criticism concerns the actual reported results, and just how
narrowly the key finding was missed. I find it noteworthy how the researchers
interpret the correlation between women’s self-reported time applying makeup
and their self-reported sociosexuality. In the statistical sense, the
correlation is about as close to the significance threshold as possible: r =
.25, p = 0.051. As the
cut-off for significance is 0.05 or lower, this is a relationship that could
(and likely would) be interpreted as evidence consistent with the possibility
that a link between makeup usage and sociosexuality does exist, if one was
looking for a connection; that makeup use is, potentially, a valid cue of sexual
interests and behaviors. Nevertheless, the authors interpret it as “not
significant” and title their paper accordingly (“Makeup is a FALSE signal of
sociosexuality”, emphasis mine). That’s not wrong, in the statistical sense. It
also feels like a rather bold description for data that is a hair’s breadth away
from reaching the opposite conclusion, and suggests to me the authors had a
rather specific hypothesis going into this. Again, to their credit, the authors
note there is a “trend” there, but that stands in stark contrast to their rather
dramatic title and repeated claims that makeup is an invalid cue. In fact, every
instance of them noting there’s a trend between makeup use and sociosexuality
seems to be followed invariably by a claim that the results suggest there is no
relationship.
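
For the curious, the standard significance test for a Pearson correlation converts r into a t statistic, and a sketch of that arithmetic shows just how knife-edge a p of .051 is. The sample size below is an assumption for illustration; the paper reports its own n:

```python
import math
from scipy import stats

def p_from_r(r: float, n: int) -> float:
    """Two-tailed p-value for a Pearson correlation r with n observations."""
    t = r * math.sqrt(n - 2) / math.sqrt(1 - r**2)
    return 2 * stats.t.sf(abs(t), df=n - 2)

print(round(p_from_r(0.25, 60), 3))  # ~0.054 with a hypothetical n of 60
print(round(p_from_r(0.25, 64), 3))  # a few more participants tips it under .05
```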

Further, there is a point never really discussed at all, which is that women
might under-report their own sociosexuality, as per the original research I
mentioned, perhaps because they’re wary of incurring social costs from being
viewed as promiscuous. In many domains, I would default to the assumption that
the self-reports are somewhat inaccurate. For example, when I surveyed women
about their self-perceived attractiveness (from 1-10) several years back, not a
single one rated herself below a 6 (out of 10), and the average was higher than
that. Either I had managed to recruit a sample of particularly beautiful women
(possible) or people are interested in you believing they’re better than they
actually are (more likely). After all, if you believe something inaccurately
about a person that’s flattering, while it may be a cost to you, it’s a benefit
to them. So what’s more likely: that everyone believes something that’s wrong
about others, or that some people misrepresent themselves in a flattering light?



> Doesn’t get much more flattering than that

As a final note on explaining these findings, it is worth exploring the
possibility that a woman’s physical attractiveness/makeup use actually is
correlated with relatively higher sociosexuality (despite the authors’ claims
this isn’t true). In other words, people aren’t making a perceptual mistake –
the general correlate holds true – but the current sample missed it for whatever
reason (even if just barely). Indeed, there is some evidence that more
attractive women score slightly higher on measures of sociosexuality (N = 226;
Fisher et al., 2016 – ironically, published in the same journal two years
prior). While short-term encounters do carry some adaptive costs for women, this
small correlation might arise from more physically-attractive women receiving
offers for short-term encounters that better offset those costs. At the very
least, it could be expected that, because attractive women carry more value in
the mating marketplace, these offers are, in principle, more numerous. Increasing
numbers of better options should equal greater comfort and interest.

If that is true – that attractiveness does correlate in some small way with
sociosexual orientation – then this could also help explain the (also fairly
small) correlation between makeup usage and perceived sociosexuality: people
view attractive women as more open to short-term encounters, makeup artificially
increases attractiveness, and so people judge women wearing makeup as more open
to short-term encounters than they are.

We can even go one layer deeper: women generally understand that makeup makes
them look more attractive. They also understand that the more attractive they
look, the more positive mating attention they’ll likely receive. Applying
makeup, then, can be an effort to attract mating attention, in much the same way
that I might wear a nice suit if I was going on a job interview. However,
neither the suit nor the makeup is a “smart bomb”, so to speak. I might wear a
suit to attract the attention of specific employers, but just because I’m
wearing a suit that doesn’t mean I want any job (and if Taco Bell wanted to hire
me, I might be choosy and say “No thanks”). Similarly, a woman wearing makeup
might be interested in attracting mating attention from specific sources – and
be perceived as being more sexually motivated, accordingly – without wishing to
send a global signal of sexual interest to all available parties. That latter
part just happens as a byproduct. Nevertheless, in this narrower sense, makeup
usage could rightly be perceived as a form of sexual signaling; perhaps one that
ends up getting perceived a bit more broadly than intended.

Or perhaps it’s not even perceived more broadly. The question asked of raters in
the study was whether a woman would be comfortable with and enjoy having casual
sex with different partners; the nature of those partners is left unspecified.
“Different” doesn’t mean “just anyone”. Women who are interested in
makeup might be slightly more interested in these pursuits, on average…but only
so long as the partners are suitably attractive.

References: Batres, C., Russell, R., Simpson, J., Campbell, L., Hansen, A., &
Cronk, L. (2018). Evidence that makeup is a false signal of
sociosexuality. Personality & Individual Differences, 122, 148-154

Fisher, C., Hahn, A., DeBruine, L., & Jones, B. (2016). Is women’s sociosexual
orientation related to their physical attractiveness? Personality & Individual
Differences, 101, 396-399

Posted in EvoPsych, Sex and Sexuality


KEEP MAKING THAT FACE AND IT’LL FREEZE THAT WAY

Posted on September 1, 2018 by Jesse Marczyk

> Just keep smiling and scare that depression away

Time to do one of my favorite things today: talk about psychology research that
failed to replicate. Before we get into that, though, I want to talk a bit about
our emotions to set the stage.

Let’s say we wanted to understand why people found something “funny.” To do so,
I would begin in a very general way: some part(s) of your mind functions to
detect cues in the environment that are translated into psychological
experiences like “humor.” For example, when some part of the brain detects a
double meaning in a sentence (“Did you hear about the fire at the circus? It was
intense”) the output of detecting that double meaning might be the psychological
experience of humor and the physiological display of a chuckle and a grin (and
maybe an eye-roll, depending on how you respond to puns). There’s clearly more
to humor than that, but just bear with me.

This leaves us with two outputs: the psychological experience of something being
funny and the physiological response to those funny inputs. The question of
interest here (simplifying a little) is which is causing which: are you smiling
because you found something funny, or do you find something funny because you’re
smiling?

Intuitively the answer feels obvious: you smile because you found something
funny. Indeed, this is what the answer needs to be, theoretically: if some part
of your brain didn’t detect the presence of humor, the physiological humor
response makes no sense. That said, the brain is not a singular organ, and it is
possible, at least in principle, that the part of your brain that outputs the
conscious experience of “that was funny” isn’t the same piece that outputs the
physiological response of laughing and smiling.



> The other part of the brain hasn’t figured out that hurt yet

In other words, there might be two separate parts of your brain that function to
detect humor independently. One functions before the other (at least sometimes),
and generates the physical response. The second might then use that
physiological output (I am smiling) as an input for determining the
psychological response (That was funny). In that way, you might indeed find
something funny because you were smiling.

This is what the Facial Feedback Hypothesis proposes, effectively: the part of
your brain generating these psychological responses (That was funny) uses a
specific input, which is the state of your face (Am I already smiling?). That’s
not the only input it uses, of course, but it should be one that is used. As
such, if you make people do something that causes their face to resemble a smile
(like holding a pen between their teeth only), they might subsequently find
jokes funnier. That was just the result reported by Strack, Martin, & Stepper
(1988), in fact.

But why should it do that? That’s the part I’m getting stuck on.

Now, as it turns out, your brain might not do that at all. As I mentioned, this
is a post about failures to replicate and, recently, the effect just failed to
replicate across 17 labs (approximately 1,900 participants) in a pre-registered
attempt. You can read more about the details here. You can also read the
original author’s response here (with all the standard suggestions of “we
shouldn’t rush to judgment about the effect not really replicating because…”,
which I’ll get to in a minute).

What I wanted to do first, however, is think about this effect on more of a
theoretical level, as the replication article doesn’t do so.



> Publish first; add theory later

One major issue with this facial feedback hypothesis is that similar
physiological responses can underpin very different psychological ones. My heart
races not only when I’m afraid, but also when I’m working out, when I’m excited,
or when I’m experiencing love. I smile when I’m happy and when something is
funny (even if the two things tend to co-occur). If some part of your brain is
looking to use the physiological response (heart rate, smile, etc) to determine
emotional state, then it’s facing an under-determination problem. A hypothetical
inner-monologue would go something like this: “Oh, I have noticed I am smiling.
Smiles tend to mean something is funny, so what is happening now must be funny.”
The only problem there is that if I were smiling because I was happy – let’s say
I just got a nice piece of cake – experiencing humor and laughing at the cake is
not the appropriate response.

Even worse, sometimes physiological responses go the opposite direction from our
emotions. Have you ever seen videos of people being proposed to or reuniting
with loved ones? In such situations, crying doesn’t appear uncommon at all.
Despite this, I don’t think some part of the brain would go, “Huh. I appear to
be crying right now. That must mean I am sad. Reuniting with loved ones sure is
depressing and I better behave as such.”

Now you might be saying that this under-determination isn’t much of an issue
because our brains don’t “rely” on the physiological feedback alone; it’s just
one of many sources of inputs being used. But then one might wonder whether the
physiological feedback is offering anything at all.

The second issue is one I mentioned initially: this hypothesis effectively
requires that at least two different cognitive mechanisms are responding to the
same event. One is generating the physiological response and the other the
psychological response. This is a requirement of the feedback hypothesis, and it
raises additional questions: why are two different mechanisms trying to
accomplish what is largely the same task? Why is the emotion-generating system
using the output of the physiological-response system rather than the same set
of inputs? This seems not only redundant, but prone to additional errors, given
the under-determination problem. I understand that evolution doesn’t result in
perfection when it comes to cognitive systems, but this one seems remarkably
clunky.



> Clearly the easiest way to determine emotions. Also, Mousetrap!

There’s also the matter of the original author’s response to the failures to
replicate, which only adds more theoretically troublesome questions. The first
criticism of the replications is that psychology students may differ from
non-psychology students in showing the effect, which might be due to psychology
students knowing more about this kind of experiment going into it. In this case,
awareness of this effect might make it go away. But why should it? If the
configuration of your face is useful information for determining your emotional
state, simple awareness of that fact shouldn’t change the information’s value.
If one realizes that the information isn’t useful and discards it, then one
might wonder when it’s ever useful. I don’t have a good answer for that.

Another criticism focused on the presence of a camera (which was not a part of
the initial study). The argument here is that the camera might have suppressed
the emotional responses that otherwise would have obtained. This shouldn’t be a
groundbreaking suggestion on my part, but smiling is a signal for others; not
you. You don’t need to smile to figure out if you’re happy; you smile to show
others you are. If that’s true, then claiming that this facial feedback effect
goes away in the presence of being observed by others is very strange indeed. Is
information about your facial structure suddenly not useful in that context? If
the effects go away when being observed, that might demonstrate that not only
are such feedback effects not needed, but they’re also potentially not
important. After all, if they were important, why ignore them?

In sum, the facial feedback hypothesis should require the following to be
generally true:

 * (1) One part of our brain should successfully detect and process humor,
   generating a behavioral output: a smile.
 * (2) A second part of our brain also tries to detect and process humor,
   independent of the first, but lacks access to the same input information
   (why?). As such, it uses the outputs of the initial system to produce
   subsequent psychological experiences (that then do what? The relevant
   behavior already seems to be generated so it’s unclear what this secondary
   output accomplishes. That is, if you’re already laughing, why do you need to
   then experience something as funny?)
 * (3) This secondary mechanism has the means to differentiate between similar
   physiological responses in determining its own output
   (fear/excitement/exercise all create overlapping kinds of physical responses,
   happiness sometimes makes us cry, etc. If it didn’t differentiate it would
   make many mistakes, but if it can already differentiate, what does the facial
   information add?).
 * (4) Finally, that this facial feedback information is more or less ignorable
   (consciously or not), as such effects may just vanish when people are being
   observed (which was most of our evolutionary history around things like
   humor) or if they’re aware of their existence. (This might suggest the value
   of the facial information is, in a practical sense, low. If so, why use it?)

As we can see, that seems rather overly convoluted and leaves us with more
questions than it answers. If nothing else, these questions present a good
justification for undertaking deeper theoretical analyses of the “whys” behind a
mechanism before jumping into studying it.

References: Strack, F., Martin, L. L., Stepper, S. (1988). Inhibiting and
facilitating conditions of the human smile: A nonobtrusive test of the facial
feedback hypothesis. Journal of Personality and Social Psychology, 54, 768–777

Wagenmakers, E. J., et al. (2016). Registered replication report: Strack,
Martin, & Stepper (1988). Perspectives on Psychological Science,
11, https://doi.org/10.1177/1745691616674458

Posted in Cognition, EvoPsych


MAYBE IT’S NOT THE MONEY; MAYBE IT’S WHAT MONEY REPRESENTS

Posted on August 13, 2018 by Jesse Marczyk

> “Thank you all for being here. Now pay me”

I’m a big believer in the value of education, which is why I’ve spent so much
time educating people (in forums other than here and about topics other than
psychology as of late, but I’m always scratching that same itch). As anyone who
has been through an education system can tell you, however, not all educators
provide the same amount of value. Some teachers and professors have inspired me
to reach for new heights while others have killed any interest in a subject I
might have had. Some taught me valuable and useful information while others
provided active misinformation. Naturally, if we have the option, we’d all
prefer the former type of teacher – the good ones. The same holds true for most
parents as well: given the option, they’d prefer their children had access to
the best teachers over the worst ones, all else being equal. This is all working
under the assumption that good teachers provide better opportunities for their
students in the future. I don’t think we’re breaking any new ground here with
these premises and I think they’re all sound. This drives students and parents
to seek out the best teachers they can find.

Quantifying someone’s quality as an educator is difficult, however. This leads
people to fall back on the things they can measure more easily as proxies for
educator quality, like student outcomes. After all, if a student cannot perform
tasks related to what they were just taught, that’s a reasonable indication that
the teacher might not be great at their job. If only matters were that simple
we’d have better teachers. They aren’t, though, since such a measure conflates
student quality with teaching quality. Put the best teacher in a room of
students with an IQ below 80 and you’ll see worse outcomes in terms of student
performance than a poor teacher instructing a class with an IQ above 120.
Teachers can help you reach for the stars; they just can’t bring the stars to
you.

Nevertheless, people do use student outcomes as a proxy for education quality
and, as it turns out, students at private schools tend to outperform those at
public ones. With limited information available, many people might come to
believe that private schools give their children a better education and invest
large amounts of resources to ensure their children go there. Perhaps we could
improve student performance if we could just send more children to private
schools. It’s an interesting suggestion.



> “No poor people allowed…until now”

Let’s get the most important question out there first: why would we expect that
a private education is better than a public education? The reason this question
matters is because the primary difference between these two sources of education
is simply the source of funding: private education is funded privately; public
education publicly. One might wonder what the source of the funding has to do
with the quality of education received, and rightly so. As far as I can tell,
that answer should be that funding source per se is largely irrelevant. If
you’re buying a new phone, the quality of phone you receive shouldn’t be
expected to change on the basis of whether you’re using your money or the
government’s money to make the purchase. The same should hold true of education.

As such, if you’re wondering whether private or public education is better,
you’re not really looking at the right variables. Whatever factors are important
for a good education – class sizes, instructor quality, instruction method, and
so on – should be the same for both domains. So perhaps, then, private
educations are better because more money allows people to purchase better
teachers with better supplies and better methods. As the old saying goes, “you
get what you pay for.” Presumably, this would result in children at private
schools achieving more in terms of learning and outperforming their
public-schooled peers. It might also mean that if public schools just received
more money to purchase more materials, space, or better teachers, you’d see
student performance begin to increase.

That said, this logic usually only holds true to a point. There are diminishing
returns on the amount of quality you receive per extra dollar spent. A $5 shirt
might be of lower quality than a $30 shirt, but is that shirt six times better?
Is that $120 designer shirt four times better still? At some point, spending
more doesn’t necessarily get you much in the way of a better product.



> Hey, have you tried paying even more than that?

This brings us nicely to the present paper by Pianta & Ansari (2018), who
examined a sample of approximately 1,100 children’s education-related
achievements over time (from birth to age 15). While the paper isn’t
experimental in nature, the authors sought to determine to what extent
children’s enrollment in private schools affected their performance, as records
on their school attendance were available (among other measures). Whether these
children attended any private school (yes/no) as well as how much private school
they attended were used to predict their ninth-grade performance on a number of
standard metrics. These included cognitive, literacy, and math skills, as well
as working memory abilities. Just to be thorough, they also asked these children
how competent they felt in a couple academic domains. The authors also assessed
children’s behavioral problems – internal and external – and social skills to
see if private school had an impact on those as well. Finally, a number of
family variables were collected, including factors like birth weight, maternal
employment and vocabulary, and race. In other words, factors unrelated to the
public vs private schooling itself.

Turning to the results, when the authors were just trying to predict cognitive
and academic performance from the amount of private school attended, there was a
noticeable difference. Children who attended any private school tended to
outperform those who only attended public school on most of the measured
variables. The authors then conducted the same analysis, adding in some of those
pesky family variables – like family income – which ended up reducing just about
all of those relationships to non-significance, and this was true regardless of
how long the children had attended private institutions. In other words,
children who attended private school tended to do better than those who attended
public school, but this might have very little to do with the schools per se.
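
A toy simulation can make this “controlling for family variables” move concrete. Everything below is made up for illustration (neither the coefficients nor the data are from the paper): the outcome depends only on a family-level variable, private-school attendance merely tracks that variable, and the apparent school effect evaporates once the family variable enters the model.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1_100  # roughly the sample size mentioned above

family = rng.normal(0, 1, n)  # hypothetical income/ability composite
private = ((family + rng.normal(0, 1, n)) > 0.8).astype(float)  # richer families choose private more
outcome = 0.6 * family + rng.normal(0, 1, n)  # schooling itself adds nothing here

raw = sm.OLS(outcome, sm.add_constant(private)).fit()
adj = sm.OLS(outcome, sm.add_constant(np.column_stack([private, family]))).fit()

print(raw.params[1], raw.pvalues[1])  # sizable, "significant" private-school effect
print(adj.params[1], adj.pvalues[1])  # shrinks toward zero once family is controlled
```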

While that finding might be interesting to some for reasons related to their
finances, it interests me for a different reason. Specifically, at no point in
the paper (or the comments/reactions to it) do the authors mention that maybe
the difference in performance has to do with some kind of biologically-inherited
potential. The ability to learn, like all things biological, is partially
inherited. Smart parents tend to have smart children, just like tall parents
tend to have tall children. Instead, the focus of this paper (and the commentary)
seems to revolve predominantly around controlling for the monetary factors.



> Let’s just print more money until everyone’s a genius

Maybe richer parents are able to provide things that poorer parents cannot, and
those things lead to better academic performance. Perhaps that’s true, but it
does seem to gloss over a rather important fact: wealth is not distributed
randomly. Those who are able to achieve higher incomes tend to do so because
they possess certain skills that those who fail to achieve high income lack.
These could be related to intelligence (factors like good working memories and
high IQ) or personality (higher in agreeableness, conscientiousness, or other
important factors). This is a long-winded way of saying that people who can
successfully complete complicated jobs and show up consistently probably
out-earn those who mess up everything they touch, frequently miss work, or
become distracted by other goals regularly. Each group also tends to have
children who inherit these tendencies.

We might expect, then, that parents who have lots of money to spend on an
expensive private education are higher-performers, on average; that’s why they
have so much extra cash and value spending it on what they think is a good
education. They’re also the same kind of parents who are likely to have children
who are higher performers, because the children genetically resemble them. This
would certainly explain the present set of findings.

When people have different biological performance ceilings, the best teachers
might help students reach those ceilings without changing where those ceilings sit.
Past a certain point, then, educator quality may fail to have a noticeable
effect. Let’s put that in a sports example: a great coach may make his players
as good as they can be at basketball and as a team working together, but he
can’t coach them into being taller. No amount of money can buy that ability in a
coach. Conversely, some people are likely to succeed even despite a poor
education simply because they’re capable enough on their own that they don’t
need much additional guidance. A poor teacher to them is simply white noise in
the background they can ignore as they achieve all on their own.



> “Can you please shut up so I can get back to being great?”

All of this is not to say that educators don’t vary in quality, but it could be
the case that the distribution of that quality is at least partially (perhaps
even largely or entirely) independent of money at the moment. Maybe teachers are
being hired on the basis of things that have little to do with their ability to
provide quality education. In higher education this is most certainly the case,
where publications and the ability to bring in grant money look appealing.

There is also the lurking matter of how peer quality influences the education of
other students. A healthy portion of school life for any child involves managing
the social world they attend school in. Children transferring into one school
from another – private or public – find themselves faced with the prospect of
navigating a new social hierarchy, and that goal tends to distract from
education. Similarly, children who find themselves in a school where their peers
don’t value education may not put learning at the top of their to-do list, as it
affords them little social mobility (at least in the short term). It’s also
possible that even poor-performing children will find little motivation to
improve when you surround them with high-performing children if the gap between
them is too wide. Since they can’t improve enough to see social gains from it,
they may disengage from education and pursue other goals.

It’s not like the only thing that can change between schools – public or private
– is educator quality or the amount of money they have for books. Many other
moving parts are at work, so simply shuffling more children into private schools
shouldn’t be expected to just improve outcomes.

References: Pianta, R. & Ansari, A. (2018). Does attendance in private schools
predict student outcomes at age 15? Evidence from a longitudinal
study. Educational Researcher, DOI: 10.3102/0013189X18785632

Posted in Learning


GETTING OFF YOUR PHONE: BENEFITS?

Posted on June 11, 2018 by Jesse Marczyk

> The videos will be almost as good as being there in person

If you’ve been out to any sort of live event lately – be it a concert or other
similar gathering; something interesting – you’ll often find yourself looking
out over a sea of camera phones (perhaps through a camera yourself) in the
audience. This has often given me a sense of general unease at times, namely for
two reasons: first, I’ve taken such pictures before in the past and, generally
speaking, they come out like garbage. Turns out it’s not the easiest thing in
the world to get clear audio in a video at a loud concert, or even a good
picture if you’re not right next to the stage. But, more importantly, I’ve found
such activities to detract from the experience; either because you’re spending
time on your phone instead of just watching what you’re there to see, or because
it signals an interest in showing other people what you’re doing rather than
just doing it and enjoying yourself. Some might say all those people taking
pictures aren’t quite living for the moment, so to speak.

In fact, it has been suggested (Soares & Storm, 2018) that the act of taking a
picture can actually make your memory for the event worse at times. Why might
this be? There are two candidate explanations that come to mind: first, and
perhaps most intuitively, screwing around on your phone is a distraction. When
you’re busy trying to work the camera and get the right shot, you’re just not
paying attention to what you’re photographing as much. It’s a boring
explanation, but perfectly plausible, just like how texting makes people worse
drivers; their attention is simply elsewhere.

The other explanation is a bit more involved, but also plausible. The basics go
like this: memory is a biologically-costly thing. You need to devote resources
to attending to information, creating memories, maintaining them, and calling
them to mind when appropriate. If we remembered everything we ever saw, for
instance, we would likely be devoting lots of resources to ultimately irrelevant
information (no one really cares how many windows each building you pass on your
way home from work has, so why remember it?), and finding the relevant memory
amidst a sea of irrelevant ones would take more time. Those who store memories
efficiently might thus be favored by selection pressures as they can more
quickly recall important information with less investment. What does that have
to do with taking pictures? If you happen to snap a picture, you now have a
resource you could later consult for details. Rather than store this information
in your head, you can just store it in the picture and consult the picture when
needed. In this sense, the act of taking a picture may serve as a proximate
cue to the brain that information needs to be attended to less deeply and
committed less firmly to memory.



> Too bad it won’t help everyone else forget about your selfies

Worth noting is that these explanations aren’t mutually exclusive: it could both
be true that taking a picture is a cue you don’t need to remember information as
well and that taking pictures is distracting. Nevertheless, both could explain
the same phenomenon, and if you want to test to see if they’re true, you need a
way of differentiating them; a context in which the two make opposing
predictions about what would happen. As a spoiler warning, the research I wanted
to cover today tries to do that, but ultimately fails at the task. Nevertheless,
the information is still interesting, and appreciating why the research failed
at its goal is useful for future designs, some of which I will list at the end.

Let’s begin with what the researchers did: they followed a classic research
paradigm in this realm and had participants take part in a memory task. They
were shown a series of images and then given a test about them to see how much
they remembered. The key differentiating variable here was that some of the time
participants would watch without taking pictures, take a picture of each target
before studying it, or take a picture and delete it before studying the target.
The thinking here was that – if the efficiency explanation was true –
participants who took pictures they knew they wouldn’t be able to
consult later – such as when the pictures were Snapchatted or deleted – would
instead commit more of the information to memory. If you can’t rely on the camera to
have the pictures, it’s an unreliable source of memory offloading (the official
term), and so we shouldn’t offload. By contrast, if the mere act of taking the
picture was distracting and interfered with memory in some way because of that,
whether the picture was deleted or not shouldn’t matter. The simple act of
taking the picture should be what causes the memory deficits, and similar
deficits should be observed regardless of whether the picture was saved or
deleted.

Without going too deeply into the specifics, this is basically what the
researchers found: when participants had merely taken a picture – regardless of
whether it was deleted or stored – the memory deficits were similar. People
remembered these images better when they weren’t taking pictures. Does this
suggest that taking pictures simply poses an attention problem for forming
memories, rather than an offloading one?



> Maybe the trash can is still a reliable offloading device

Not quite, and here’s why: imagine an experiment where you were measuring how
much participants salivated. You think that the mere act of cooking will get
people to salivate, and so construct two conditions: one in which hungry people
cook and then get to eat the food after, and another in which hungry people cook
the food and then throw it away before they get to eat (and they know in advance
they will be throwing it away). What you’ll find in both cases is that people
will salivate when cooking because the sights and smells of the food are
proximate cues of getting to eat. Some part of their brains is responding to
those cues that signal food availability, even if those cues do not ultimately
correspond to their ability to eat it in the future. The part of the brain that
consciously knows it won’t be getting food isn’t the same part responding to
those proximate cues. While one part of you understands you’ll be throwing the
food away, another part disagrees and thinks, “these cues mean food is coming,”
and you start salivating anyway because of it.

This is basically the same problem the present research ran into. Taking a
picture may be a proximate cue that information is stored somewhere else and so
you don’t need to remember it as well, even if that part of the brain that is
instructed to delete the picture believes otherwise. We don’t have one mind, but
rather a series of smaller minds that may all be working with different
assumptions and sets of information. Like a lot of research, then, the design
here focuses too heavily on what people are supposed to consciously understand,
rather than on what cues the non-conscious parts of the brain are using to
generate behavior.

Indeed, the authors seem to acknowledge as much in their discussion, writing the
following:

> “Although the present results are inconsistent with an “explicit” form of
> offloading, they cannot rule out the possibility that through learned
> experience, people develop a sort of implicit transactive memory system with
> cameras such that they automatically process information in a way that assumes
> photographed information is going to be offloaded and available later (even if
> they consciously know this to be untrue). Indeed, if this sort of automatic
> offloading does occur then it could be a mechanism by which photo-taking
> causes attentional disengagement”

All things considered, that’s a good passage, but one might wonder why that
passage was saved for the end of their paper, in the discussion section. Imagine
instead that this passage appeared in the introduction:

> “While it is possible that operating a camera to take a picture disrupts
> participants’ attention and results in a momentary encoding deficit, it is also
> completely possible that the mere act of taking a picture is a proximate cue
> used by the brain to determine how thoroughly (largely irrelevant) information
> needs to be encoded. Thus, our experiment doesn’t actually differentiate
> between these alternative hypotheses, but here’s what we’re doing anyway…”

Does your interest in the results of the paper go up or down at that point?
Because that would effectively be the same thing the discussion section said. As
such, it seems probable that the discussion passage may well represent an
addition made to the paper after the fact, per a reviewer request. In other
words, the researchers probably didn’t think the idea through as fully as they
might have liked. With that in mind, here are a few other experimental conditions
they could have run which would have been better at the task of separating the
hypotheses:

 * Have participants do something with a phone that isn’t taking a picture to
   distract themselves. If this effect isn’t picture specific, but people simply
   remember less when they’ve been messing around on a phone (like typing out a
   word, then studying the target), then the attention hypothesis would look
   better, especially if the impairments to memory are effectively identical.
 * Have an experimenter take the pictures instead of the participant. That way
   participants would not be distracted by using a phone at all, but still have
   a cue that the information might be retrievable elsewhere. However, the
   experimenter could also be viewed as a source of information themselves, so
   there could be another condition where an experimenter is simply present
   doing something that isn’t taking a picture. If an experimenter taking a
   picture results in worse memory as well, then it might be something about the
   knowledge of a picture in general causing the effect.
 * Better yet, if messing around with the phone is only temporarily disrupting
   encoding, then having participants take a picture of the target briefly and
   then wait a period (say, a minute) before viewing the target for the 15
   seconds proper should help differentiate the two hypotheses. If the mere act
   of taking a picture in the past (whether deleted or not) causes participants
   to encode information less thoroughly because of proximate cues for efficient
   offloading, then this minor time delay shouldn’t alleviate those memory
   deficits. By contrast, if messing with the phone is just distracting people
   momentarily, the time delay should help counteract the effect.

These are all productive avenues that could be explored in the future for
creating conditions where these hypotheses make different predictions,
especially the first and third ones. Again, both could be true, and that could
show up in the data, but these designs give the opportunity for that to be
observed.

And, until the research is conducted, do yourself a favor and enjoy your
concerts instead of viewing them through a small phone screen. (The caveat here
is that it’s unclear whether such results would generalize, as in real life
people decide what to take pictures of, rather than taking pictures of things
they probably don’t really care about).

References: Soares, J. & Storm, B. (2018). Forget in a flash: A further investigation of the photo-taking-impairment effect. Journal of Applied Research in Memory & Cognition, 7, 154-160.

Posted in EvoPsych, Morality


THE DISTRUST OF ATHEISTS

Posted on May 17, 2018 by Jesse Marczyk

> Atheists are good friends because they keep it real

There’s an interesting finding rolling around about what kinds of people
Americans would vote for as a president. When asked:

> “If your party nominated a generally well-qualified person for president who
> happened to be [blank], would you vote for that person?”

Answers varied a bit depending on the blank: 96% of Americans would vote for a
Black president (while only 4% would not); 95% would vote for a woman.
Characteristics like that don’t really dissuade people, at least in the
abstract. Other groups don’t fare as well: only 68% of people said they would
vote for a gay/lesbian candidate, and 58% a Muslim. But bottoming out the list?
Atheists. A mere 54% of people said they would vote for an atheist. This is also
a finding that changes a bit – but not all that much – between political
affiliations. At the low point, 48% of Republicans would vote for an atheist,
while at its peak, 58% of Democrats would. An appreciable difference, but not
night and day (larger differences exist for Mormon, gay/lesbian and Muslim
candidates, coming in at 18%, 26%, and 22%, respectively).

At the outset – and this is a point that will become important later – it is
worth noting that the answers to these questions might not tell you how people
would feel about any particular atheist, woman, Muslim, etc. They are not asking
whether people would vote for a specific atheist; they are asking about voting
for an atheist in the abstract sense of the word, so they are relying on
stereotype information. It is also worth noting that people have become much
more tolerant over time: in 1958, only 18% said they would vote for an atheist,
so getting up to over half (and up to 70% in the younger generation) is good
progress. Of course, only 38% said they would vote for a black person during
that same year which, as we just saw, has changed dramatically to near 100% by
2012. Atheists haven’t made similar gains, in terms of degree.

This is a very interesting finding that begs for a proper explanation. What is
it about atheists that puts people off so much? While I can’t provide a
comprehensive or definitive answer at the moment, there is some research I
wanted to discuss today that helps shed some light on the issue.



> Spoilers…

The basic premise of this research is effectively that – to some (perhaps large)
degree – religion per se isn’t what people are necessarily concerned about when
they’re providing their answers to questions like our voting one. Instead, what
concerns people are other, more-relevant factors that religiosity just so
happens to correlate with. So people are really concerned with trait X in a
candidate, but are using religiosity as a means of indirectly assessing the
presence of trait X. In case that all sounds a bit too abstract, let’s make it
concrete and think about the trait Moon, Krems, and Cohen (2018) examined:
trust.

When considering who you’d like to support politically or interact with
socially, trust is an important factor. If you know you can trust someone, this
increases the types of cooperation you can safely engage in with them. When you
cannot trust someone, for instance, interactions with them need to be relatively
immediate for the sake of safety: I give you the money now and I get my product
now. If they aren’t trustworthy, you should be less inclined to give them money
now for the promise of your product in a day, week, month, year, or beyond, as
they might just take your money and run. By contrast, someone who is trustworthy
can offer cooperation over the longer term. The same logic applies to a leader.
If you cannot trust a leader to work in your interests, why follow them and
offer your support?

As it turns out, religious people are perceived to be more trustworthy than the
nonreligious. Why might this be the case? One ostensibly obvious explanation
that might jump out at you is that religious people tend to believe in deities
that punish people for misbehavior. If someone believes they will be punished
for breaking a promise, they should be less likely to break that promise, all
else being equal. This is one explanation for the trust finding, then, but
there’s an issue: it’s quite easy to just say you believe in a punishing deity
when you actually do not. Since that signal is so cheap to produce, it wouldn’t
be trustworthy.

This is where religion in particular might help, as membership in a religious
group often involves some degree of costly investment: visits to houses of
worship, following rituals that are a real pain to complete, and any other
similar behavior. Those who are unwilling to endure those immediate costs for
group membership demonstrate that they’re just talk. Their commitment doesn’t run deep enough for them to be willing to suffer for the group. When behavior is no
longer cheap, you can believe what people are telling you. Now this might make
religious people look more trustworthy because it demonstrates they’re more
groupish and – by extension – more cooperative, but this groupishness is a
double-edged sword: those who are inclined towards their group are usually less
inclined towards others. This might mean that religious people are more
trustworthy to their in-group, but not necessarily their outgroup.



> “Who’s up next to demonstrate their trustworthiness?”

There are other explanations, though. The one the present paper favors is the
possibility that religious people tend to follow slower life history strategies.
This means possessing traits like sexual restrictiveness (they’re relatively
monogamous, or at least less promiscuous), greater investment in family, and a generally more future-oriented outlook than a present-focused one. This would be
what makes them look more cooperative than the non-religious. Fast life history
strategies are effectively the opposite: they view life as short and
unpredictable and so take benefits today instead of saving for tomorrow, and
invest more in mating effort than parental effort. Looking at religious
individuals as slow-life strategists fits well with previous research suggesting
that religious attitudes correlate better with sexual morality than they do with cooperative morality, and that religions might act as support for long-term,
monogamous, high-fertility mating strategies.

As with many stereotypes, those about religious individuals possessing these slow-life-history traits to a greater degree seem to be fairly accurate. So,
when people are asked to judge an individual and are given no more information
about them than their religion, they may tend to default to using those
stereotypes to assess other traits of interest, like trust. This should also
predict that when people know more about a particular individual’s life history
strategy – be it fast or slow – religion per se should cease to be used as a
predictor. After all, why bother to use religion to assess someone’s life
history strategy when you can just assess that strategy directly? Religion stops
adding anything at that point, and so information about it should be largely
discarded.

As it turns out, this is basically what the research uncovered. In the first
experiment people (N = 336) were asked whether they perceived targets (dating
profiles of religious or non-religious individuals) as possessing traits like
aggression, impulsivity, education, whether they thought they came from a rough
neighborhood, and whether they trusted the person. As expected, people perceived
the religious targets as less aggressive and impulsive, more educated, and more committed in sexual relationships – and, accordingly, trusted them more. These
perceptions held even for the non-religious raters on average, who appeared to
trust religious people more than those who shared their lack of belief.
Experiment three basically replicated these same results, but also found that
the effects were partially independent of the specific religion in question.
That is, whether the target being judged was Christian or Muslim, they were
still both trusted more than the non-religious targets (even if Christians were
nominally trusted more than Muslims, likely due to the majority religion of the
country in which the research took place).



> Mileage may vary on the basis of local religious majorities

Experiment two is where the really interesting finding emerged. The procedure was generally the same as before, but now the dating profiles contained better individuating information about the person’s life history strategy. In this case, the targets described themselves either as looking for “someone special, settling down, and starting a family,” or as someone who “doesn’t see themselves settling down anytime soon, as they enjoy playing the field” (paraphrased
slightly). When rating these profiles with better information about the person
(beyond simply their religious behavior/belief), the effect of commitment
strategy on trust was much larger (ηp2 = .197) than the effect of religion per
se (ηp2 = .008).
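
For readers unfamiliar with the effect size: partial eta squared expresses the proportion of variance in the outcome attributable to a given factor once the other factors are set aside. The paper reports only the values, but the conventional definition is

$$ \eta_p^2 = \frac{SS_{\text{effect}}}{SS_{\text{effect}} + SS_{\text{error}}} $$

so commitment strategy accounted for roughly 20% of the relevant variance in trust, while religion per se accounted for less than 1%.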

The authors also tried to understand which variables predicted this relationship
between reproductive strategy and trust. Their first model used “belief in god”
as a mediator and did indeed find a small but significant indirect path: reproductive strategy predicted belief in god, which in turn predicted trust. However, when other life history traits were included as mediator
variables (like impulsivity, opportunistic behavior, education, and hopeful
ecology – which means what kind of neighborhood one comes from, effectively),
the belief in god mediator was no longer significant while three of the life
history variables were.
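
To make the mediation logic concrete, here is a minimal sketch of the kind of sequential regressions involved, using simulated placeholder data. The variable names and the simple sequential-regression approach are mine for illustration, not the authors’ exact model:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 336  # sample size from the first experiment

# Simulated placeholder measures; real data would come from the ratings.
strategy = rng.normal(size=n)                   # slow (+) vs. fast (-) strategy
belief = 0.3 * strategy + rng.normal(size=n)    # candidate mediator: belief in god
trust = 0.5 * strategy + 0.1 * belief + rng.normal(size=n)

# Path a: does strategy predict the mediator?
path_a = sm.OLS(belief, sm.add_constant(strategy)).fit()

# Paths b and c': does the mediator predict trust, controlling for strategy?
X = sm.add_constant(np.column_stack([strategy, belief]))
path_b = sm.OLS(trust, X).fit()

print(path_a.params, path_b.params)  # indirect effect is roughly a * b
```

The paper’s key result, in these terms, is that the belief-in-god path shrinks to non-significance once the life history traits are allowed to compete as mediators.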

In short, this would suggest that belief in god itself is not the thing doing
much of the pulling when it comes to understanding why people trust religious
people more. Instead, people are using religion as something of a proxy for
someone’s likely reproductive strategy and, accordingly, life history traits. As
such, when people have information directly bearing on the traits they’re
interested in assessing, they largely stop using their stereotypes about
religion in general and instead rely on information about the person (which is
completely consistent with previous research on how people use stereotype
information: when no other information is available, stereotypes are used, but
as more individuating information is available, people rely on that more and
their stereotypes less).

References: Moon, J., Krems, J., & Cohen, A. (2018). Religious people are trusted because they are viewed as slow life-history strategists. Psychological Science, DOI: 10.1177/0956797617753606.

Posted in Philosophy, Sex and Sexuality


THE BEAUTIFUL PEOPLE

Posted on April 21, 2018 by Jesse Marczyk

> Less money than it looks like if that’s all 10′s

There’s a perception that exists involving how out of touch rich people can be,
summed up well in this popular clip from the show Arrested Development: “It’s
one banana, Michael, how much could it cost? Ten dollars?” The idea is that
those with piles of money – perhaps especially those who have been born into it
– have a distorted sense for the way the world works, as there are parts of it
they’ve never had to experience. A similar hypothesis guides the research I
wanted to discuss today, which sought to examine people’s beliefs in a just
world. I’ve written about this belief-in-a-just-world hypothesis before;
the reviews haven’t been positive.

The present research (Westfall, Millar, & Lovitt, 2018) took the following
perspectives: first, believing in a just world (roughly that people get what
they deserve and deserve what they get) is a cognitive bias that some people
hold to because it makes them feel good. Notwithstanding the fact that “feeling
good” isn’t a plausible function, for whatever reason the authors don’t seem to
suggest that believing the world to be unfair is a cognitive bias as well, which
is worth keeping in the back of your mind. Their next point is that those who
believe in a just world are less likely to have experienced injustice
themselves. The more personal injustice one experiences (injustices that affect you personally in a negative way), the more likely one is to reject the belief in
a just world because, again, rejecting that belief when faced with contradictory
evidence should maintain self-esteem. Placed in a simple example, if something
bad happened to you and you believe the world is a just place, that would mean
you deserved that bad thing because you’re a bad person. So, rather than think
you’re a bad person, you reject the idea that the world is fair. It seems that the biasing factor there would be the message of, “I’m awesome and deserve good things,” as that could explain both believing the world is fair if things are
going well and unfair if they aren’t, rather than the just-world belief being
the bias, but I don’t want to dwell on that point too much yet.

This is where the thrust of the paper begins to take shape: attractive people
are thought to have things easier in life, not unlike being rich. Because being
physically attractive means one will be exposed to fewer personally-negative
injustices (hot people are more likely to find dates, be treated well in social
situations, and so on), they should be more likely to believe the world is a
just place. In simple terms, physical attractiveness = better life = more belief
in a just world. As the authors put it:

> Consistent with this reasoning, people who are societally privileged, such as
> wealthy, white, and men, tend to be more likely to endorse the just-world
> hypothesis than those considered underprivileged

The authors also throw some lines into their introduction about how physical
attractiveness is “largely beyond one’s personal control,” and how “…many
long-held beliefs about relationships, such as an emphasis on personality or
values, are little more than folklore,” in the face of people valuing physical
attractiveness. Now these don’t have any relevance to their paper’s theory and
aren’t exactly correct, but should also be kept in the back of your mind to
understand the perspective they are writing from.



> What a waste of time: physical attractiveness is largely beyond his control

In any case, the authors sought to test this connection between greater
attractiveness (and societal privilege) to greater belief in a just world across
two studies. The first of these involved asking about 200 participants (69 male)
about their (a) belief in a just world, (b) perceptions of how attractive they
thought they were, (c) self-esteem, (d) financial status, and (e) satisfaction
with life. About as simple as things come, but I like simple. In this case, the
correlation between how attractive one thought they were and belief in a just
world was rather modest (r = .23), but present. Self-esteem was a better
predictor of just-world beliefs (r = .34), as was life satisfaction (r = .34). A
much larger correlation understandably emerged between life satisfaction and
perceptions of one’s own attractiveness (r = .67). Thinking oneself attractive did more to make one happier with life than to lead one to believe the world is just.
Money did much the same: financial status correlated better with life
satisfaction (r = .33) than it did just world beliefs (r = .17). Also worth
noting is that men and women didn’t differ in their just world beliefs (Ms of
3.2 and 3.14 on the scale, respectively). 

Study 2 did much the same as study one with basically the same sample, but it
also included ratings of a participant’s attractiveness supplied by others. This
way you aren’t just asking people how attractive they are; you are also asking
people less likely to have a vested interest in the answer to the question (for
those curious, ratings of self-attractiveness only correlated with other-ratings
at r = .21). Now, self-perception of physical attractiveness correlated with
belief in a just world (r = .17) less well than independent ratings of
attractiveness did (r = .28). Somewhat strangely, being rated as prettier by
others wasn’t correlated with self-esteem (r = .07) or life satisfaction (r =
.08) – which you might expect it would if being attractive leads others to treat
you better – though self-ratings of attractiveness were correlated with these
things (rs = .27 and .53, respectively). As before, men and women also failed to
differ with respect to their just world beliefs.

From these findings, the authors conclude that being attractive and rich makes
one more likely to believe in a just world under the premise that they
experience less injustice. But what about that result where men and women don’t
differ with respect to their belief in a just world? Doesn’t that similarly
suggest that men and women don’t face different amounts of injustice? While this
is one of the last notes the authors make in their paper, they do seem to
conclude that – at least around college age – men might not be particularly
privileged over women. A rather unusual passage to find, admittedly, but a
welcome one. I guess arguments about discrimination and privilege apply less to at least college-aged men and women.

While reading this paper, I couldn’t shake the sense that the authors have a
rather particular perspective about the nature of fairness and the fairness of
the world. Their passages about how belief in a just world is a bias – without any comparable comments about how thinking the world is unjust is also a bias – coupled with comments about how attractiveness is largely outside of one’s own control, and this…

> Finally, the modest yet statistically significant relationship between current
> financial status and just-world beliefs strengthens the case that these
> beliefs are largely based on viewing the world from a position of privilege.

…in the face of correlations ranging from about .2 to .3 likely says
something about the biases of the authors. Explaining about 10% or less of the
variance in belief in a just world from ratings of attractiveness or financial
status does not scream that ‘these beliefs are largely based’ on such things to
me. In fact, it seems to suggest beliefs in a just world are largely based on
other things. 
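
That arithmetic is worth making explicit, since variance explained is just the squared correlation:

$$ r = .28 \implies r^2 \approx .08, \qquad r = .17 \implies r^2 \approx .03 $$

that is, roughly 3–8% of the variance in just-world beliefs, leaving more than 90% attributable to something other than attractiveness or financial status.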



> “The room is largely occupied by the ceiling fan”

While there is an interesting debate to have over the concept of fairness in
this article, I actually wanted to use this research to discuss a different
point about stereotypes. As I have written before, people’s beliefs about the
world should tend towards accuracy. That is not to say they will
always be accurate, mind you, but rather that we shouldn’t expect there to be
specific biases built into the system in many cases. People might be wrong about
the world to various degrees, but not because the cognitive system generating
those perceptions evolved to be wrong (that is, take accurate information about
the world and distort it); they should just be wrong because of imperfect
information or environmental noise. The reason for this is that there are costs
to being wrong and acting on imperfect information. If I believe there is a
monster that lives under my bed, I’m going to behave differently than the person
who doesn’t believe in such things. If I’m acting under an incorrect belief, my
odds of doing something adaptive go down, all else being equal.

That said, there are some cases where we might expect bias in beliefs: the
context of persuasion. If I can convince you to hold an incorrect belief, the
costs to me can be substantially reduced or outweighed entirely by the benefits.
For instance, if I convince you that my company is doing very well and only
going to be doing better in the future, I might attract your investment,
regardless of whether the belief you have in me is true. Or, if I had authored
the current paper, I might be trying to convince you that attractive/privileged
people in the world are biased while the less privileged are grounded realists.

The question arises, then, as to what the current results represent: are the
beautiful people more likely to perceive the world as fair and the ugly ones
more likely to perceive it as unjust because of random mistakes, persuasion, or
something else? Taking persuasion first, if those who aren’t doing as well in
life as they might hope because of their looks (or behavior, or something else)
are able to convince others they have been treated unjustly and are actually
valuable social assets worthy of assistance, they might be able to receive more
support than if others were convinced their lot in life was deserved.
Similarly, the attractive folk might see the world as more fair to justify their
current status to others and avoid having it threatened by those who might seek
to take those benefits for their own. This represents a case of bias: presenting
a case to others that serves your own interest, irrespective of the truth.

While that’s an interesting idea – and I think there could be an element of that
to it in these results – there’s another option I wanted to explore as well: it is
possible that neither side is actually biased. They might both be acting off
information that is accurate as far as they know, but simply be working under
different sets of it.



> “As far as I can tell, it seems flat”

This is where we return to stereotypes. If person A has had consistently
negative interactions with people from group X over their life, I suspect person
A would have some negative stereotypes about them. If person B has had
consistently positive interactions with people from the same group X over their
life, I further suspect person B would have some positive stereotypes about
them. While those beliefs shape each person’s expectations of the behavior of
unknown members of group X and those beliefs/expectations contrast with each
other, both are accurate as far as each person is concerned. Person A and B are
both simply using the best information they have and their cognitive systems are
injecting no bias – no manipulation of this information – when attempting to
develop as accurate a picture of the world as possible.

Placed into the context of this particular finding, you might expect that
unattractive people are treated differently than attractive ones, the latter
offering higher value in the mating market at a minimum (along with other
benefits that come with greater developmental stability). Because of this, we
might have a naturally-occurring context where people are exposed to two
different versions of the same world, both develop different beliefs about it,
but neither necessarily doing so because they have any bias. The world
doesn’t feel unfair to the attractive person, so they don’t perceive it as such.
Similarly, the world doesn’t feel fair to the unattractive person who feels
passed over because of their looks. When you ask these people about how fair the
world is, you will likely receive contradictory reports that are both accurate
as far as the person doing the reporting is aware. They’re not biased; they just
receive systematically different sets of information.

Imagine taking that same idea and studying stereotypes on a more local
level. The stereotype accuracy research I’ve read has largely looked at how people’s beliefs about a group compare to that group
more broadly; along the lines of asking people, “How violent are men, relative
to women,” and then comparing those responses to data collected from all men and
women to see how well they match up. While such responses largely tend towards
accuracy, I wonder if the degree of accuracy could be improved appreciably by
considering what responses any given participant should provide, given the
information they have access to. If someone grew up in an area where men are
particularly violent, relative to the wider society, we should expect they have
different stereotypes about male violence, as those perceptions are accurate as
far as they know. Though such research is more tedious and less feasible than
using broader measures, I can’t help but wonder what results it might yield. 
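
If someone did run that study, the core analysis would be simple enough. Here’s a minimal sketch with entirely invented numbers, asking whether each participant’s report tracks their local environment better than the national average:

```python
# (reported male:female violence ratio, local ratio, national ratio) per
# participant. All numbers here are hypothetical.
participants = [
    (4.0, 4.2, 2.5),   # grew up where male violence is unusually high
    (2.4, 2.3, 2.5),   # grew up somewhere near the national average
]

for reported, local, national in participants:
    error_national = abs(reported - national)
    error_local = abs(reported - local)
    print(f"error vs. national: {error_national:.1f}; vs. local: {error_local:.1f}")
```

If the local errors are systematically smaller, that would support the idea that “inaccurate” stereotypes are often accurate reports of a different information environment.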

References: Westfall, R., Millar, M., & Lovitt, A. (2018). The influence of physical attractiveness on belief in a just world. Psychological Reports, 0, 1-14.

Posted in Cognition, Sex Differences


MAKING A GREAT LEADER

Posted on March 27, 2018 by Jesse Marczyk

> Selfies used to be a bit more hardcore

If you were asked to think about what makes a great leader, there are a number
of traits you might call to mind, though what traits those happen to be might
depend on what leader you call to mind: Hitler, Gandhi, Bush, Martin Luther King
Jr, Mao, Clinton, or Lincoln were all leaders, but seemingly very different
people. What kind of thing could possibly tie all these different people and
personalities together under the same conceptual umbrella? While their
characters may have all differed, there is one thing all these people shared in
common and it’s what makes anyone anywhere a leader: they all had followers.

Humans are a social species and, as such, our social alliances have long been
key to our ability to survive and reproduce over our evolutionary history
(largely based around some variant of the point that two people are better at
beating up one person than a single individual is; an idea that works with
cooperation as well). While having people around who were willing to do what you
wanted has clearly been important, this perspective on what makes a leader –
possessing followers – turns the question of what makes a great leader on its
head: rather than asking about what characteristics make one a great leader, you
might instead ask what characteristics make one an attractive social target for
followers. After all, while it might be good to have social support, you need to
understand why people are willing to support others in the first place to fully
understand the matter. If it was all cost to being a follower (supporting a
leader at your own expense), then no one would be a follower. There must be
benefits that flow to followers to make following appealing. Nailing down what
those benefits are and why they are appealing should better help us understand
how to become a leader, or how to fall from a position of leadership.

With this perspective in mind, our colorful cast of historical leaders suddenly
becomes more understandable: they vary in character, personality, intelligence,
and political views, but they must have all offered their
followers something valuable; it’s just that whatever that something(s) was, it
need not be the same something. Defense from rivals, economic benefits,
friendship, the withholding of punishment: all of these are valuable resources
that followers might receive from an alliance with a leader, even from the
position of a subordinate. That something may also vary from time to time: the
leader who got his start offering economic benefits might later transition into
one who also provides defense from rivals; the leader who is followed out of
fear of the costs they can inflict on you may later become a leader who offers
you economic benefits. And so on.



> “Come for the violence; stay for the money”

The corollary point is that features which fail to make one appealing to
followers are unlikely to be the ones that define great leaders. For example –
and of relevance to the current research on offer – gender per se is unlikely to
define great leaders because being a man or a woman does not necessarily offer
much to many followers. Traits associated with them might – like how those who
are physically strong can help you fight against rivals better than one who is
not, all else being equal – but not the gender itself. To the extent that one
gender tends to end up in positions of leadership it is likely because they tend
to possess higher levels of those desirable traits (or at least reside
predominantly on the upper end of the population distribution of them).
Possessing these favorable traits that allow leaders to do useful things is only
one part of the equation, however: they must also appear willing to use those
traits to provide benefits to their followers. If a leader possesses considerable
social resources, they do you little good if said leader couldn’t be any less
interested in granting you access to them.

This analysis also provides another point of context for understanding the
leader/follower dynamic: it ought to be context specific, at least to some
extent. Followers who are looking for financial security might look for
different leaders than those who are seeking protection from outside aggression;
those facing personal social difficulties might defer to different leaders
still. The match between the talents offered by a leader and the needs of the
followers should help determine how appealing some leaders are. Even traits that
might seem universally positive on their face – like a large social network –
might not be positives to the extent they affect a potential follower’s
perception of their likelihood of receiving benefits. For example, leaders with
relatively full social rosters might appear less appealing to some followers if
that follower is seeking a lot of a leader’s time; since too much of it is
already spoken for, the follower might look elsewhere for a more personal
leader. This can create ecological leadership niches that can be filled by
different people at different times for different contexts.

With all that in mind, there are at least some generalizations we can make about
what followers might find appealing in a leader in an, “all else being equal…”
sense: those with more social support will be selected as leaders more often, as
such resources are more capable of resolving disputes in your favor; those with
greater physical strength or intelligence might be better leaders for similar
reasons. Conversely, one might follow such leaders because of the costs failing
to follow would incur, but the logic holds all the same. As such, once these and
other important factors are accounted for, you should expect irrelevant factors
– like sex – to fall out of the equation. Even if many leaders tend to be men,
it’s not their maleness per se that makes them appealing leaders, but rather
these valued and useful traits.



> Very male, but maybe not CEO material

This is a hypothesis effectively tested in a recent paper by von Rueden et al
(in press). The authors examined the distribution of leadership in a small-scale
foraging/farming society in the Amazon, the Tsimane. Within this culture – as in others – men tend to exercise the greater degree of political leadership, relative to women, as measured by domains including speaking more during social meetings, coordinating group efforts, and resolving disputes. The leadership status of members within this group was assessed by ratings of other group members. All adults within the community (male n = 80; female n = 72) were photographed, and these photos were then given to 6 of the men and women in sets of 19. The raters were asked to place the photos in order in terms of whose voice tended to carry the most weight during debates, and then in
terms of who managed the most community projects. These ratings were then summed
up (from 1 to 19, depending on their position in the rankings, with 19 being the
highest in terms of leadership) to figure out who tended to hold the largest
positions of leadership.
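
As I read the scoring procedure, it amounts to summing each person’s rank position across the raters who saw their photo. A rough sketch, where the names and rankings are invented for illustration:

```python
from collections import defaultdict

# Each rater orders a set of 19 photos; rank 19 = carries the most weight,
# rank 1 = carries the least. Rankings below are hypothetical.
ratings = [
    {"A": 19, "B": 12, "C": 3},   # rater 1
    {"A": 17, "B": 14, "C": 5},   # rater 2
]

leadership_scores = defaultdict(int)
for ranking in ratings:
    for person, rank in ranking.items():
        leadership_scores[person] += rank  # summed ranks = leadership score

print(dict(leadership_scores))  # {'A': 36, 'B': 26, 'C': 8}
```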

As mentioned, men tended to reside in positions of greater leadership both in
terms of debates and management (approximate mean male scores = 37; mean female
scores = 22), and both men and women agreed on these ratings. A similar pattern
was observed in terms of who tended to mediate conflicts within the community: 6
females were named in resolving such conflicts, compared with 17 males. Further,
the males who were named as conflict mediators tended to be higher in leadership
scores, relative to non-mediating males, while this pattern didn’t hold for the
females.

So why were men in positions of leadership in greater proportions than women?
A regression analysis was carried out using sex, height, weight, upper body
strength, education, and number of cooperative partners predicting leadership
scores. In this equation, sex (and height) no longer predicted leadership score,
while all the other factors were significant predictors. In other words, it
wasn’t that men were preferred as leaders per se, but rather that people with
more upper body strength, education, and cooperative partners were favored,
whether male or female. These traits were still favored in leaders despite
leaders not being particularly likely to use force or violence in their
position. Instead, it seems that traits like physical strength were favored
because they could potentially be leveraged, if push came to shove.



> “A vote for Jeff is a vote for building your community. Literally”

As one might expect, what makes followers want to follow a leader wasn’t their
sex, but rather what skills the leader could bring to bear in resolving issues
and settling disputes. While the current research is far from a comprehensive
examination of all the factors that might tap leadership at different times and
contexts, it represents a sound approach to understanding the problem of why
followers select particular leaders. Thinking about what benefits followers
tended to reap from leaders over evolutionary history can help inform our search
for – and understanding of – the proximate mechanisms through which leaders end
up attracting them.

References: von Rueden, C., Alami, S., Kaplan, H., & Gurven, M. (in press). Sex differences in political leadership in an egalitarian society. Evolution & Human Behavior, doi:10.1016/j.evolhumbehav.2018.03.005.

Posted in EvoPsych, Game Theory, Sex Differences


DOESN’T BULLYING MAKE YOU CRAZY?

Posted on February 26, 2018 by Jesse Marczyk

> “I just do it for the old fashioned love of killing”

Having had many pet cats, I understand what effective predators they can be. The
number of dead mice and birds they have returned over the years is certainly
substantial, and the number they didn’t bring back is probably much higher. If
you happen to be a mouse living in an area with lots of cats, your life is
probably pretty stressful. You’re going to be facing a substantial adaptive
challenge when it comes to avoiding detection by these predators and escaping
them if you fail at that. As such, you might expect mice developed a number of
anti-predator strategies (especially since cats aren’t the only thing they’re
trying to not get killed by): they might freeze when they detect a cat to avoid
being spotted; they might develop a more chronic state of psychological anxiety,
as being prepared to fight or run at a moment’s notice is important when your
life is often on the line. They might also develop auditory or visual
hallucinations that provide them with an incorrect view of the world
because…well, I actually can’t think of a good reason for that last one.
Hallucinations don’t serve as an adaptive response that helps the mice avoid
detection, flee, or otherwise protect themselves against those who would seek to
harm them. If anything, hallucinations seem to have the opposite effect,
directing resources away from doing something useful as the mice would be
responding to non-existent threats.

But when we’re talking about humans and not mice, some people seem to have a
different sense for the issue: specifically, that we ought to expect a form of social predation – bullying – to cause people to develop psychosis. At
least that was the hypothesis behind some recent research published by Dantchev,
Zammit, and Wolke (2017). This study examined a longitudinal data set of parents
and children (N = 3596) at two primary times during their life: at 12 years old,
children were given a survey asking about sibling bullying, defined as,
“…saying nasty and hurtful things, or completely ignores [them] from their group
of friends, hits, kicks, pushes or shoves [them] around, tells lies or makes up
false rumors about [them].” They were asked how often they experienced bullying
by a sibling and how many times a week they bullied a sibling in the past 6
months (ranging from “Never”, “Once or Twice”, “Two or Three times a month”,
“About once a week,” or, “Several times a week”). Then, at the age of about 18,
these same children were assessed for psychosis-like symptoms, including whether
they experienced visual/auditory hallucinations, delusions (like being spied
on), or felt they had experienced thought interference by others.  

With these two measures in hand (whether children were bullies/bullied/both, and
whether they suffered some forms of psychosis), the authors sought to determine
whether the sibling bullying at time 1 predicted the psychosis at time 2,
controlling for a few other measures I won’t get into here. The following
results fell out of the analysis: children bullied by their siblings and who
bullied their siblings tended to have lower IQ scores, more conduct disorders
early on, and experienced more peer bullying as well. The mothers of these
children were also more likely to experience depression during pregnancy and
domestic violence was more likely to have been present in the households.
Bullying, it would seem, was influenced by the quality of the children and their
households (a point we’ll return to later).



> “This is for making mom depressed prenatally”

In terms of the psychosis measures, 55 of the children in the sample met the
criteria for having a disorder (1.5%). Of those children who bullied their siblings, 11 met these criteria (3%), as did 6 of those who were purely bullied (2.5%) and 11 of those who were both bully and bullied (3%). Children who were regularly bullied (about once a week or more), then, were about twice as likely to report psychosis as those who were bullied less often. In brief,
both being bullied by and bullying other siblings seemed to make hallucinations
more common. Dantchev, Zammit, and Wolke (2017) took this as evidence suggesting
a causal relationship between the two: more bullying causes more psychosis.
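
The “twice as likely” figure is just a ratio of the reported rates; a quick check against the numbers above:

```python
overall_rate = 55 / 3596      # ≈ 0.015: 1.5% of the full sample met the criteria
bully_involved_rate = 0.03    # 3%, as reported for the bully and bully/bullied groups

print(bully_involved_rate / overall_rate)  # ≈ 2.0: roughly double the base rate
```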

There’s a lot to say about this finding, the first thing being this: the vast
majority of regularly-bullied children didn’t develop psychosis; almost none of
them did, in fact. This tells us quite clearly that psychosis per se is by
no means a usual response to bullying. This is an important point because, as I
mentioned initially, some psychological strategies might evolve to help
individuals deal with outside threats. Anxiety works because it readies
attentional and bodily resources to deal with those challenges effectively. It
seems plausible such a response could work well in humans facing aggression from
their peers or family. We might thus expect some kinds of anxiety disorders to
be more common among those bullied regularly; depression too, since that could
well serve to signal to others that one is in need of social support and help
recruit it. So long as one can draw a reasonable, adaptive line between
psychological discomfort and doing something useful, we might predict a
connection between bullying and mental health issues.

But what are we to make of that correlation between being bullied and the
development of hallucinations? Psychosis would not seem to help an individual
respond in a useful way to the challenges they are facing, as evidenced by
nearly all of the bullied children not developing this response. If such a
response were useful, we should generally expect much more of it. That point
alone seems to put the metaphorical nail in the coffin of two of the three
explanations the authors put forth for their finding: that social defeat
and negative perceptions of one’s self and the world are causal factors in
developing psychosis. These explanations are – on their face – as silly as they
are incomplete. There is no plausible adaptive line the authors attempt to draw
from thinking negatively about one’s self or the world to the development of
hallucinations, much less how those hallucinations are supposed to help. I would
also add that these explanations are discussed only briefly at the end of the paper,
suggesting to me not enough time or thought went into trying to understand the
reasons these predictions were made before the research was undertaken. That’s a
shame, as a better sense for why one would expect to see a result would affect
the way research is designed for the better. 



> “Well, we’re done…so what’s it supposed to be?”

Let’s think in more detail about why we’re seeing what we’re seeing regarding
bullying and psychosis. There are a number of explanations one might float, but
the most plausible to me goes something like this: these mental health issues
are not being caused by the bullying but are, in a sense, actually eliciting the
bullying. In other words, causation runs in the opposite direction the authors
think it does.

To fully understand this explanation, let’s begin with the basics: kin are
usually expected to be predisposed to behave altruistically towards each other
because they share genes in common. This means investment in your relatives is
less costly than it would be otherwise, as helping them succeed is, in a very
real sense, helping yourself succeed. This is how you get adaptations like
breastfeeding and brotherly love. However, that cost/benefit ratio does not
always lean in the direction of helping. If you have a relative that is
particularly unlikely to be successful in the reproductive realm, investment in
them can be a poor choice despite their relatedness to you. Even though they
share genes with you, you share more genes with yourself (all of them, in fact),
so helping yourself do a little better can sometimes be the optimal reproductive
strategy over helping them do much better (since they aren’t likely to do
anything even with your help). In that regard, relatives suffering from mental
health issues are likely worse investments than those not suffering from them,
all else being equal. The probability of investment paying off is simply lower.

Now that might end up predicting that people should ignore their siblings
suffering from such issues; to get to bullying we need something else, and in
this case we certainly have it: competition for the same pool of limited
resources, namely parental investment. Brothers and sisters compete for the same
resources from their parents – time, protection, provisioning, and so on – and
resources invested in one child are not capable of being invested in another
much of the time. Since parents don’t have unlimited amounts of these resources,
you get competition between siblings for them. This sometimes results in
aggressive and vicious competition. As we already saw in the study results,
children of lower quality (lower IQ scores and more conduct disorders) coming
from homes with fewer resources (likely indexed by more maternal depression and
domestic violence) tend to bully and be bullied more. Competition for resources
is more acute here and your brother or sister can be your largest source of it.



> They’re much happier now that the third one is out of the way

To put this into an extreme example of non-human sibling “bullying”, there are
some birds that lay two or three eggs in the same nest a few days apart. What
usually happens in these scenarios is that when the older sibling hatches in
advance of the younger it gains a size advantage, allowing it to peck the
younger one to death or roll it out of the nest to starve in order to monopolize
the parental investment for itself. (For those curious why the mother doesn’t
just lay a single egg, that likely has something to do with having a backup
offspring in case something goes wrong with the first one). As resources become
more scarce and sibling quality goes down, competition to monopolize more of
those resources should increase as well. That should hold for birds as well as
humans.

A similar logic extends into the wider social world outside of the family: those
suffering from psychosis (or any other disorders, really) are less valuable
social assets to others than those not suffering from them, all else being
equal. As such, sufferers receive less social support in the form of friendships
or other relationships. Without such social support, this also makes one an
easier target for social predators looking to exploit the easiest targets
available. What this translates into is children who are less able to defend
themselves being bullied by others more often. In the context of the present
study, it was also documented that peer bullying tends to increase with
psychosis, which would be entirely unsurprising; just not because bullying is
causing children to become psychotic.

This brings us to the final causal hypothesis: sometimes bullying is so severe
that it results in brain damage which in turn causes later psychosis. This would
involve what I imagine would either be a noticeable degree of physical head
trauma or similarly noticeable changes brought on by a body’s response to stress
that causes brain damage over time. Neither hypothesis strikes me as
particularly likely in terms of explaining much of what we’re seeing here, given
the scope of sibling bullying is probably not often large enough to
pose that much of a physical threat to the brain. I suspect the lion’s share of
the connection between bullying and psychosis is simply that psychotic
individuals are more likely to be bullied, rather than because bullying is doing
the causing. 

References: Dantchev, S., Zammit, S., & Wolke, D. (2017). Sibling bullying in middle childhood and psychotic disorder at 18 years: a prospective cohort study. Psychological Medicine, https://doi.org/10.1017/S0033291717003841.

Posted in EvoPsych, Violence


MY FATHER WAS A GAMBLING MAN

Posted on February 15, 2018 by Jesse Marczyk

> And if you think I stole that title from a popular song, you’re very wrong

Hawaii recently introduced some bills aimed at prohibiting the sale of games
with for-purchase loot boxes to anyone under 21. For those not already in the
know concerning the world of gaming, loot boxes are effectively semi-random grab
bags of items within video games. These loot boxes are usually received by
players as a reward for achieving something within a game (such as leveling up), as a purchase made with currency – be that in-game currency or real-world money – or both. Specifically, then, the bills in question are aimed at games
that sell loot boxes for real money, attempting to keep them out of the hands of
people under 21.

Just like tobacco companies aren’t permitted to advertise to minors out of fear
that children will come to find smoking an interesting prospect, the fear here
is that children who play games with loot boxes might develop a taste for
gambling they otherwise wouldn’t have. At least that’s the most common explicit
reason for this proposal. The gaming community seems to be somewhat torn about
the issue: some gamers welcome the idea of government regulation of loot boxes
while others are skeptical of government involvement in games. In the interest
of full disclosure for potential bias – as a long-time gamer and professional
loner – I consider myself to be a part of the latter camp.

My hope today is to explore this debate in greater detail. There are lots of
questions I’m going to discuss, including (a) whether loot boxes are gambling,
(b) why gamers might oppose this legislation, (c) why gamers might support it,
(d) what other concerns might be driving the acceptance of regulation within
this domain, and (e) whether these kinds of random mechanics actually make for better games.



> Let’s begin our investigation in gaming’s seedy underbelly

To set the stage, a loot box is just what it sounds like: a randomized package of in-game items (loot), earned by playing the game or purchased. In my opinion, loot boxes are gambling-adjacent types of things, but not bona fide gambling. The prototypical example of gambling is along the lines of a slot
machine. You put money into it and have no idea what you’re going to get out.
You could get nothing (most of the time), a small prize (some of the time), or
a large prize (almost never). Loot boxes share some of those features – the
paying money for randomized outcomes – but they don’t share others: first, with
loot boxes there isn’t a “winning” and “losing” outcome in the same way there is
with a slot machine. If you purchase a loot box, you should have some general
sense as to what you’re buying; say, 5 items with varying rarities. It’s not
like you sometimes open a loot box and there are no items, other times there are
5, and other times there are 20 (though more on that in a moment). The number of
items you receive is usually set even if the contents are random. More to the
point, the items you “receive” you often don’t even own; not in the true sense.
If the game servers get shut down or you violate terms of service, for instance,
your account with the items gets deleted, they disappear from existence, and you don’t get to sue anyone for stealing from you. There is also no formal
cashing out of many of these games. In that sense, there is less of a gamble in
loot boxes than what we traditionally consider gambling.

Importantly, the value of these items is debatable. Usually players really want
to open some items and don’t care about others. In that sense, it’s quite
possible to open a loot box and get nothing of value, as far as you’re
concerned, while hitting jackpots in others. However, if that valuation is
almost entirely subjective in nature, then it’s hard to say that not getting
what you want is losing while getting what you do is winning, as that’s going to
vary from person to person. What you are buying with loot boxes isn’t a chance
at a specific item you want; it is a set number of random items from a pool of
options. To put that into an incomplete but simple example, if you put money
into a gumball machine and get a gumball, that’s not really a gamble and you
didn’t really lose. It doesn’t become gambling, nor do you lose, if the gumballs
are different colors/flavors and you wanted a blue one but got a green one.

One potential exception to this equal-value argument is when the items
opened aren’t bound to the opener; that is, they can be traded or sold to other
players. You don’t like your gumball flavor? Well, now you can trade your friend
your gumball for theirs, or even buy their gumball from them. When this
possibility exists, secondary markets pop up for the digital items where some
can be sold for lots of real money while others are effectively worthless. Now,
as far as the developers are concerned, all the items can have the same value,
which makes it look less like gambling; it’s the secondary market that makes it
look more like gambling, but the game developers aren’t in control of that.



> Kind of like these old things

An almost-perfect metaphor for this can be found in the sale of Baseball cards
(which I bought when I was younger, though I don’t remember what the appeal
was): packs containing a set number of cards – let’s say 10 – are purchased for
a set price – say $5 – but the contents of those packs are randomized. The value
of any single card, from the perspective of the company making them, is 1/10 the
cost of the pack. However, some people value specific cards more than others; a
rookie card of a great player is more desired than the card for a veteran who
never achieved anything. In such cases, a secondary market crops up among those
who collect the cards, and those collectors are willing to pay a premium for the
desired items. One card might sell for $50 (worth 10-times the price of a pack),
while another might be unable to find a buyer at all, effectively worth $0.
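
To put numbers on that, here’s a minimal expected-value sketch with invented secondary-market prices; only the $5 pack price and 10-card pack size come from the example above:

```python
# Hypothetical secondary-market values for cards, weighted by how often each
# tier appears in a pack.
card_pool = [
    (0.01, 50.0),   # 1% of cards: the star rookie card
    (0.19, 2.0),    # 19%: moderately desired cards
    (0.80, 0.0),    # 80%: effectively unsellable
]

ev_per_card = sum(p * value for p, value in card_pool)
ev_per_pack = 10 * ev_per_card
print(round(ev_per_pack, 2))  # 8.8 with these weights; shift them and it falls below $5
```

The point of the sketch is that, from the manufacturer’s side, every pack is identical; the “winning” and “losing” only exist once a secondary market assigns wildly unequal values to the contents.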

This analogy, of course, raises other questions about the potential legality of
existing physical items, like sports cards, or those belonging to any trading
card game (like Magic: The Gathering, Pokemon, or Yugioh). If digital loot boxes
are considered a form of gambling and might have effects worth protecting
children from, then their physical counterparts likely pose the same risks. If
anything, the physical versions look more like gambling because at least some
digital items cannot be traded or sold between players, while all physical items
pose that risk of developing real value on a secondary market. Imagine putting
money into a slot machine, hitting the jackpot, and then getting nothing out of
it. That’s what many virtual items amount to.

Banning the sale of loot boxes in gaming from people under the age of 21 likely
also entails banning the sale of card packs to them. While the words
“slippery slope” are usually used together with the word “fallacy,” there does
seem to be a very legitimate slope here worth appreciating. The parallels
between loot boxes and physical packs of cards are almost perfect (and, where
they differ, card packs look more like gambling; not less). Strangely, I’ve seen
very few voices in the gaming community suggesting that the sale of packs of
cards should be banned from minors; some do (mostly for consistency’s sake; as far as I’ve seen, they almost never raise the issue independently of the digital loot box debate), but most don’t seem concerned with the matter. The bill being
introduced in Hawaii doesn’t seem to mention baseball or trading cards anywhere
either (unless I missed it), which would be a strange omission. I’ll return to
this point later when we get to talking about the motives behind the approval of
government regulation in the digital realm coming from gamers.



> The first step towards addiction to that sweet cardboard crack

But, while we’re on the topic of slippery slopes, let’s also consider another
popular game mechanic that might also be worth examination: randomized item
drops from in-game enemies. These aren’t items you purchase with money (at least
not in game), but rather ones you purchase with time and effort. Let’s consider
one of the more well-known games to use this: WoW (World of Warcraft). In WoW,
when you kill enemies with your character, you may receive valued items from
their corpse as you loot the bodies. The items are not found in a uniform
fashion: some are very common and others quite rare. I’ve watched a streamer kill
the same boss dozens of times over the course of several weeks hoping to finally
get a particular item to drop. There are many moments of disappointment and
discouragement, complete with feelings of wasted time, after many attempts are
met with no reward. But when the item finally does drop? There is a moment of
elation and celebration, complete with a chatroom full of cheering viewers. If
you could only see the emotional reaction of the people to getting their reward
and not their surroundings, my guess is that you’d have a hard time
differentiating a gamer getting a rare drop they wanted from someone opening the
desired item out of a loot box for which they paid money.
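
That grind also has straightforward math behind it. Assuming, say, a 2% drop chance per kill (an invented figure), the odds of having seen the item after n kills follow the usual at-least-one-success formula:

```python
# Probability of at least one success in n independent attempts with rate p.
p = 0.02  # hypothetical per-kill drop rate
for n in (10, 50, 100):
    print(n, round(1 - (1 - p) ** n, 2))  # 10: 0.18, 50: 0.64, 100: 0.87
```

Dozens of dry attempts followed by a rare, unpredictable payoff is precisely the reinforcement schedule slot machines run on, whatever currency is being spent.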

What I’m not saying is that I feel random loot drops in World of Warcraft are
gambling; what I am saying is that if one is concerned about the effects loot
boxes might have on people when it comes to gambling, they share enough in
common with randomized loot drops that the latter are worth examining seriously
as well. Perhaps it is the case that the item a player is after has a
fundamentally different psychological effect on them if chances at obtaining it
are purchased with real money, in-game currency, or play time. Then again,
perhaps there is no meaningful difference; it’s not hard to find stories of
gamers who spent more time than is reasonable trying to obtain rare in-game
items to the point that it could easily be labeled an addiction. Whether buying
items with money or time have different effects is a matter that would need to
be settled empirically. But what if they were fundamentally similar in terms of
their effects on the players? If you’re going to ban loot boxes sold with cash
under the fear of the impact they have on children’s propensity to gamble or
develop a problem, you might also end up with a good justification for banning
randomized loot drops in games like World of Warcraft as well, since both
resemble pulling the lever of a slot machine in enough meaningful ways.

Despite that, I’ve seen very few people in the pro-regulation camp raise concerns about the effects World of Warcraft loot tables are having on children. Maybe it’s because they haven’t thought about it yet, but that seems doubtful, as the matter has been brought up and hasn’t been met with any concern. Maybe it’s because they view paying real money for items as more damaging than paying with time. Either way, it seems that even after thinking about it, those who favor regulation of loot boxes largely don’t care as much about card games, and even less about randomized loot tables. This suggests there are other variables beyond the presence of gambling-like mechanics underlying their views.



> “Alright; children can buy some lottery tickets, but only the cheap ones”

But let’s talk a little more about the fear of harming children in general. Not that long ago, another aspect of video games came under examination: specifically, the violence often depicted within them. Indeed, research into the topic continues today. The fear sounded plausible to many: if violence is depicted within these games – especially in the context of achieving something positive, like winning by killing the opposing team’s characters – those who play them might become desensitized to violence or come to think it acceptable. In turn, they would behave more violently themselves and be less interested in alleviating violence directed against others. This fear was especially pronounced when it came to children, who were still developing psychologically and potentially more influenced by depictions of violence.

Now, as it turns out, those fears appear to be largely unfounded. Violence has not been increasing as younger children have played increasingly violent video games more frequently. The apparent risk factor for increased aggressive behavior (at least temporarily; not chronically) was losing at the game or finding it frustrating to play (such as when the controls feel difficult to use). The violent content per se didn’t seem to be doing much of the causing when it came to later violence. While players who are habitually more aggressive might prefer somewhat different games than those who are not, that doesn’t mean the games are causing them to be violent.

This gives us something of a precedent for questioning the face validity of claims that loot boxes are liable to make gambling seem more appealing over the long term. It is possible that the concern over loot boxes represents more of a moral panic on the part of legislatures than a real issue having a harmful impact. Children who are OK with ripping an opponent’s head off in a video game are unlikely to be OK with killing someone for real, and violence in video games doesn’t seem to make real killing more appealing. It might similarly be the case that opening loot boxes makes people no more likely to want to gamble in other domains. Again, this is an empirical matter that requires good evidence to prove the connection (and I emphasize the word good because plenty of low-quality evidence has been used to support the claim that violence in video games causes violence in real life).



> Video games inspire cosplay, not violence

If it’s not clear at this point, I believe the reasons some portion of the gaming community supports this type of regulation have little to nothing to do with concerns about children gambling. For the most part, children do not have access to credit cards, and so cannot themselves buy lots of loot boxes, nor do they have access to lots of cash they can funnel into online gift cards. As such, I suspect that very few children do serious harm to themselves or their financial future buying loot boxes. The ostensible concern for children is more of a plausible-sounding justification than one actually doing most of the metaphorical cart-pulling. Instead, I believe the concern over loot boxes (at least among gamers) is driven by two more mundane concerns.

The first of these is simply the perceived cost of a “full” game. There has long been growing discontent in the gaming community over DLC (downloadable content), where new pieces of content are added to a game after release for a fee. While that might seem like the simple purchase of an expansion pack (which is not a big deal), the discontent arises when a developer is perceived to have made a “full” game already, but then cut sections out of it purposefully to sell later as “additional” content. To put that into an example, you could have a fighting game released with 8 characters. The game becomes wildly popular, so the developers later put together 4 new characters and sell them because demand is that high. Alternatively, you could have a developer that created 12 characters up front but only made 8 available at launch, knowingly saving the other 4 to sell later when they could just as easily have been included in the original. In that case, intent matters.

Loot boxes do something similar psychologically at times. When people go to the store and pay $60 for a game, then take it home to find out the game wants them to pay $10 or more (sometimes a lot more) to unlock parts of the game that already exist on the disk, that feels very dishonest. You thought you were purchasing a full game, but you didn’t exactly get one; what you got was more of an incomplete version. As games become increasingly likely to use these loot boxes (as they seem to be profitable), the true cost of games (having access to all the content) will go up.



> Just kidding! It’s actually 20-times more expensive

Here is where the distinction between cosmetic and functional (pay-to-win) loot boxes arises. For those not in the know, the loot boxes games sell vary in their content. In some games, the items are nothing more than additional colorful outfits for your characters that have no effect on game play. In others, you can buy items that actually increase your odds of winning (items that make your character do more damage or automatically improve their aim). Many people who dislike loot boxes seem more OK (or even perfectly happy) with them so long as the items are only cosmetic. So long as they can win the game as effectively spending $0 as they could spending $1,000, they feel they own the full version. When it feels like the game you bought gives an advantage to players who spent more money on it, it again feels like the copy you bought isn’t the same version as theirs; that it’s not as complete an experience.

Another distinction arises here in that I’ve noticed gamers seem more OK with loot boxes in games that are free-to-play. These are games that cost nothing to download, but much of their content is locked up front. To unlock content, you usually invest time or money. In such cases, the feeling of being lied to about the cost of the game doesn’t really exist. Even if such free games are ultimately more expensive than traditional ones if you want to unlock everything (often much more expensive if you want to do so quickly), the actual cost of the game was $0. You were not lied to about that much, and anything you spent afterwards was completely voluntary. Here the loot boxes look more like a part of the game than an add-on to it. This isn’t to say that some people don’t dislike loot boxes even in free-to-play games; just that they mind them less.



> “Comparatively, it’s not that bad”

The second, related concern is that developers might be making design decisions that ultimately make games worse in order to sell more loot boxes. To put that in perspective, there are some win/win scenarios, as when a developer tries to sell loot boxes by making a game so good that people enjoy spending money on additional content to show how much they like it. Effectively, people are OK with paying for quality. Here, the developer gets more money and the players get a great game. But what happens when there is a conflict – when a decision will either (a) make the game play experience better but sell fewer loot boxes, or (b) make the game play experience worse but sell more loot boxes? However frequently these decisions need to be made, they assuredly are made at some points.

To use a recent example, many of the rare items in the game Destiny 2 were found within an in-game store called Eververse. Rather than unlocking rare items through months of completing game content over and over (as in Destiny 1), many of these rare, cosmetic items were found only within Eververse. In theory you could unlock them with time, but only at very slow rates (rates the developers were found to have intentionally slowed further when a player put too much time into the game). In practice, the only way to unlock these rare items was by spending money. So, rather than putting interesting and desirable content into the game as a reward for being good at it or committed to it, that content was largely walled off behind a store. This was a major problem for people’s motivation to continue playing the game, but it traded off against people’s willingness to spend money on it. These conflicts created a worse experience for a great many players. It also yielded the term “spend-game content” as a replacement for “end-game content.” More loot boxes in games potentially means more decisions like that, where reasons to play the game are replaced with reasons to spend money.
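For readers unfamiliar with how an intentionally slowed unlock rate can work, here is a toy sketch of a diminishing-returns throttle. Every number and name in it is my own invention for illustration; this is not the actual formula used in Destiny 2:

```python
def effective_xp(base_xp: float, hours_played_today: float) -> float:
    """Toy throttle: the longer the session, the less each activity pays
    out, nudging players toward the store instead. Numbers are made up."""
    scale = max(0.25, 1.0 - 0.15 * hours_played_today)  # floor at 25%
    return base_xp * scale

for hours in (0, 1, 2, 3, 4, 5):
    print(f"{hours}h in: a 100 XP activity pays {effective_xp(100, hours):.0f}")
# 0h in: 100 | 1h: 85 | 2h: 70 | 3h: 55 | 4h: 40 | 5h: 25
```

Under a curve like this, the players most committed to earning items through play are precisely the ones whose progress slows the most.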

Another such system was discussed in regard to a potential patent by Electronic Arts (EA), though as far as I’m aware it has not made its way into a real game yet. This system revolved around online, multiplayer games with items available for purchase. The system would be designed such that players who spent money on some particular item would be intentionally matched against players of lower skill. As the lower-skill players would be easier for the buyer to beat with their new items, the purchaser would feel their decision to buy was worth it. The lower-skill player, in turn, might be impressed by how well the player with the purchased item performed and feel they would become better at the game if they bought it too. While this might encourage players to buy in-game items, it would yield an ultimately less competitive and less interesting matchmaking system. While such systems are indeed bad for the game play experience, it is at least worth noting that such a system would work whether the items being sold came from loot boxes or were purchased directly.
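To make the described scheme concrete, here is a minimal sketch of how purchase-aware matchmaking might look. The names, the rating-gap threshold, and the structure are all my own invention based only on the description above; this is not code from EA’s patent:

```python
from dataclasses import dataclass

@dataclass
class Player:
    name: str
    rating: float            # hypothetical matchmaking rating
    recent_purchase: bool    # bought a paid item recently?

def pick_opponent(player: Player, pool: list[Player]) -> Player:
    """Sketch of purchase-aware matchmaking: recent buyers get steered
    toward noticeably weaker opponents so the purchase 'feels' effective."""
    if player.recent_purchase:
        # Bias: only consider opponents at least 100 rating below the buyer.
        candidates = [p for p in pool if p.rating <= player.rating - 100]
    else:
        candidates = list(pool)
    if not candidates:       # fall back to ordinary matchmaking if needed
        candidates = list(pool)
    # Otherwise behave like a normal matchmaker: closest rating wins.
    return min(candidates, key=lambda p: abs(p.rating - player.rating))

# Example: a 1500-rated buyer gets matched near 1400 instead of near 1500.
pool = [Player(f"p{i}", float(r), False)
        for i, r in enumerate(range(1200, 1800, 50))]
buyer = Player("buyer", 1500.0, recent_purchase=True)
print(pick_opponent(buyer, pool).rating)  # 1400.0
```

Note that nothing in the sketch cares how the item was acquired, which is the point made above: the incentive problem is the same whether items come from loot boxes or direct purchases.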



> “Buy the golden king now to get matched against total scrubs!”

If I’m right, and the reasons gamers favor regulation center on the cost and design direction of games, why not just say that instead of talking about children and gambling? Because, frankly, it’s not very persuasive. It’s too selfish a concern to rally much social support. It would be silly for me to say, “I want to see loot boxes regulated out of games because I don’t want to spend money on them and think they make for worse gaming experiences for me.” People would just tell me to either not buy loot boxes or not buy games with loot boxes. Since both suggestions are reasonable and I can do them already, the need for regulation isn’t there.

Now, if I decide to vote with my wallet and not buy games with loot boxes, that won’t have any impact on the industry; my personal impact is too small. So long as enough other people buy those games, they will continue to be produced, and my enjoyment of games will be decreased because of the aforementioned cost and design issues. What I need to do, then, is convince enough people to follow my lead and not buy these games either. Only when enough gamers stop buying them would developers have an incentive to abandon that model. One reason to talk about children, then, is that you don’t trust the market will swing in your favor. Rather than allow the market to decide freely, you can say that children are incapable of making good choices and are being actively harmed. This rallies more support to tip the scales of the market in your favor by forcing government intervention. If you don’t trust that enough people will vote with their wallets the way you do, make it illegal for younger gamers to vote any other way.

A real concern about children, then, might not be that they will come to view gambling as normal, but that they will come to view loot boxes (or other forms of added content, like dishonest DLC) as normal. They will accept that games often have loot boxes and will not be deterred from buying titles that include them. That means more consumers, now and in the future, who are willing to tolerate or purchase loot boxes/DLC. That means fewer games without them which, in turn, means fewer options available to those voting with their wallets by not buying them. Children and gambling are brought up not because they are the gamer’s primary concern, but because they’re useful for a strategic end.

Of course, there are real issues when it comes to children and these microtransactions: they don’t tend to make great decisions, and they sometimes get access to their parents’ credit card information and go on insane spending sprees in their games. This type of family fraud has been the subject of previous legal disputes, but it is important to note that it is not a loot box issue per se. Children will just as happily waste their parents’ money on known quantities of in-game resources as on loot boxes. It’s also more a matter of parental responsibility and purchase verification than the heart of the matter at hand. Even if children do occasionally make lots of unauthorized purchases, I don’t think major game companies are counting on that as an intended source of vital revenue.



> They start ballin’ out so young these days

For what it’s worth, I think loot boxes do run certain risks for the industry, as outlined above. They can make games costlier than they need to be, and they can result in design decisions I find unpleasant. In many regards I’m not a fan of them. I just happen to think that (a) they aren’t gambling, and (b) they don’t require government intervention on the grounds that they harm children by persuading them that gambling is fun and leading to more of it in the future. I think any kind of microtransaction – whether random or not – can result in the same kinds of harms, addiction, and reckless spending. However, when it comes to human psychology, I think loot boxes are designed more as a tool to fit our psychology than as one that shapes it, not unlike how water takes the shape of the container it is in and not the other way around. As such, it is possible that some facets of loot boxes and other random item generation mechanics make players engage with a game in ways that yield more positive experiences, in addition to the costs they carry. If these gambling-like mechanics weren’t, in some sense, fun, people would simply avoid games that have them.

For instance, having content that one is aiming to unlock can provide a very important motivation to continue playing a game, which is a big deal if you want your game to last and stay interesting for a long time. My most recent example of this is, again, Destiny 2. Though I didn’t play the first Destiny, I have a friend who did and told me about it. In that game, items dropped randomly, and they dropped with random perks. This meant you could get several versions of the same item, all of them different. It gave you a reason to be excited about getting the same item for the 100th time. This wasn’t the case in Destiny 2. In that game, when you got a gun, you got the gun; there was no other version of it to try for. So what happened when Destiny 2 removed the random rolls from items? The motivation for hardcore players to keep playing long-term largely dropped off a cliff. At least that’s what happened to me. The moment I got the last piece of gear I had been chasing, a sense of “why am I playing?” washed over me almost instantly and I shut the game off. I haven’t touched it since. The same thing happened to me in Overwatch when I unlocked the last skin I was interested in at the time. Had all that content been available from the start, the turning-off point likely would have come much sooner.
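A quick bit of arithmetic shows why random perk rolls keep the same item interesting. The perk counts below are hypothetical numbers of my own, not Destiny’s actual layouts:

```python
from math import prod

# Hypothetical perk layout: three perk columns on a gun, each rolled
# independently on every drop. Counts are made up for illustration.
perk_options = [6, 6, 4]              # options per column
variants = prod(perk_options)         # distinct versions of the "same" gun
p_exact = 1 / variants                # odds any one drop is your ideal roll

print(f"Distinct rolls: {variants}")                       # 144
print(f"Chance a given drop is the roll you want: {p_exact:.2%}")  # 0.69%
```

Under a system like that, the 100th copy of a gun can still be news; remove the rolls and it’s just the 100th copy.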

As another example, imagine a game like World of Warcraft, where a boss has a random chance to drop an amazing item. Say this chance is 1 in 500. Now imagine an alternate reality where this practice is banned because it’s deemed too much like gambling (I’m not saying it will be; just imagine it was). Now the item is obtained the following way: whenever the boss is killed, it is guaranteed to drop a token. After you collect 500 of those tokens, you can hand them in and get the item as a reward. Do you think players would have a better time under the gambling-like system, where each boss kill represents the metaphorical pull of a slot machine lever, or under the consistent one? I don’t know the answer to that question offhand, but what I do know is that collecting 500 tokens sure sounds boring, and that’s coming from a person who values consistency and saving, and doesn’t enjoy traditional gambling. No one is going to make a compilation video of people reacting to finally collecting their 500th token, because all you’d have is another moment just like the last 499 where the same thing happened. People would – and do – make compilation videos of streamers finally getting valuable or rare items, as such moments are more entertaining for viewers and players alike.
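For what it’s worth, the two systems cost the same number of kills on average; they differ in spread. A quick simulation (using the same 1-in-500 figure as the example above; the player count is arbitrary) makes the trade-off visible:

```python
import random

DROP_CHANCE = 1 / 500    # the random-drop system from the example above
TOKENS_NEEDED = 500      # the deterministic token system
PLAYERS = 10_000         # simulated players (arbitrary sample size)

def kills_until_drop(p: float) -> int:
    """Boss kills until the item drops (a geometric distribution)."""
    kills = 0
    while True:
        kills += 1
        if random.random() < p:
            return kills

results = sorted(kills_until_drop(DROP_CHANCE) for _ in range(PLAYERS))

print(f"Token system:  exactly {TOKENS_NEEDED} kills for everyone")
print(f"Random drops:  mean ~{sum(results) / PLAYERS:.0f} kills")      # ~500
print(f"  the lucky half are done by ~{results[PLAYERS // 2]} kills")  # ~347
print(f"  the unlucky 10% need over ~{results[int(PLAYERS * 0.9)]}")   # ~1150
```

Half the players under the random system finish sooner than anyone under the token system ever could, while an unlucky minority grinds for more than twice the guaranteed price. Whether that variance is a feature or a bug is exactly the empirical question raised above.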
