
CHAPTER 6

Selection Bias

WE ARE BOTH SKIERS, AND we have both spent our share of time in the Wasatch mountains
outside of Salt Lake City, Utah, enjoying some of the best snow on the planet. The perfect
Utah powder even factored into the choice that one of us made about what college to attend. A
number of ski resorts are situated in the Wasatch mountains, and each resort has its own
personality. Snowbird rises out of Little Cottonwood Canyon, splayed out in steel and glass
and concrete, its gondola soaring over sheer cliffs to a cirque from which terrifyingly steep
chutes drop away. Farther up the canyon, Alta has equally challenging terrain, even better
snow, and feels lost in time. Its lodges are simple wooden structures and its lifts are bare-
bones; it is one of three remaining resorts in the country that won’t allow snowboarders. Start
talking to a fellow skier at a party or on an airplane or in a city bar, and the odds are that she
will mention one or both of these resorts as among the best in North America.

In nearby Big Cottonwood Canyon, the resorts Brighton and Solitude have a very different
feel. They are beautiful and well suited for the family and great fun to ski. But few would
describe them as epic, and they’re hardly destination resorts. Yet if you take a day off from
Alta or Snowbird to ski at Solitude, something interesting happens. Riding the lifts with other
skiers, the conversation inevitably turns to the relative merits of the local resorts. But at
Solitude, unlike Alta or Snowbird—or anywhere else, really—people often mention Solitude as
the best place to ski in the world. They cite the great snow, the mellow family atmosphere, the
moderate runs, the absence of lift lines, the beauty of the surrounding mountains, and
numerous other factors.

When Carl skied Solitude for the first time, at fifteen, he was so taken by this tendency
that he mentioned it to his father as they inhaled burgers at the base lodge before taking the
bus back to the city.

“I think maybe I underestimated Solitude,” Carl told him. “I had a pretty good day here.
There is some good tree skiing, and if you like groomed cruising…

“And I bet I didn’t even find the best runs,” Carl continued. “There must be some amazing
lines here. Probably two-thirds of the people I talked to today like this place even more than
Alta or Snowbird! That’s huge praise.”

Carl’s father chuckled. “Why do you think they’re skiing at Solitude?” he asked.

This was Carl’s first exposure to the logic of selection effects. Of course when you ask
people at Solitude where they like to ski, they will answer “Solitude.” If they didn’t like to ski
at Solitude, they would be at Alta or Snowbird or Brighton instead. The superlatives he’d
heard in praise of Solitude that day were not randomly sampled from among the community
of US skiers. Skiers at Solitude are not a representative sample of US skiers; they are skiers

who could just as easily be at Alta or Snowbird and choose not to be. Obvious as that may be
in this example, this basic principle is a major source of confusion and misunderstanding in
the analysis of data.

In the third chapter of this book, we introduced the notion of statistical tests or data
science algorithms as black boxes that can serve to conceal bullshit of various types. We
argued that one can usually see this bullshit for what it is without having to delve into the fine
details of how the black box itself works. In this chapter, the black boxes we will be
considering are statistical analyses, and we will consider some of the common problems that
can arise with the data that is fed into these black boxes.

Often we want to learn about the individuals in some group. We might want to know the
incomes of families in Tucson, the strength of bolts from a particular factory in Detroit, or the
health status of American high school teachers. As nice as it would be to be able to look at
every single member of the group, doing so would be expensive if not outright infeasible. In
statistical analysis, we deal with this problem by investigating small samples of a larger group
and using that information to make broader inferences. If we want to know how many eggs
are laid by nesting bluebirds, we don’t have to look in every bluebird nest in the country. We
can look at a few dozen nests and make a pretty good estimate from what we find. If we want
to know how people are going to vote on an upcoming ballot measure, we don’t need to ask
every registered voter what they are thinking; we can survey a sample of voters and use that
information to predict the outcome of the election.

The problem with this approach is that what you see depends on where you look. To draw
valid conclusions, we have to be careful to ensure that the group we look at is a random
sample of the population. People who shop at organic markets are more likely to have liberal
political leanings; gun show attendees are more likely to be conservative. If we conduct a
survey of voters at an organic grocery store—or at a gun show—we are likely to get a
misleading impression of sentiments citywide.

We also need to think about whether the results we get are influenced by the act of
sampling itself. Individuals being interviewed for a psychology study may give different
answers depending on whether the interviewer is a man or a woman, for example. We run
into this effect if we try to use the voluminous data from the Internet to understand aspects of
social life. Facebook’s autocomplete feature provides a quick, if informal, way to get a sense of
what people are talking about on the social media platform. How healthy is the institution of
marriage in 2019? Let’s try a Facebook search query:

This paints a happy—if saccharine—picture. But on Facebook people generally try to
portray their lives in the best possible light. The people who post about their husbands on
Facebook may not be a random sample of married people; they may be the ones with happy
marriages. And what people write on Facebook may not be a reliable indicator of their
happiness. If we type the same query into Google and let Google’s autocomplete feature tell us
about contemporary matrimony, we see something very different:

Yikes! At least “the best,” “amazing,” and “dope” (as opposed to “a dope”) make the top
ten. It seems that people turn to Google when looking for help, and turn to Facebook when
boasting about their lives. What we find depends on where we look.

We should stress that a sample does not need to be completely random in order to be
useful. It just needs to be random with respect to whatever we are asking about. Suppose we
take an election poll based on only those voters whose names appear in the first ten pages of
the phone book. This is a highly nonrandom sample of people. But unless having a name that
begins with the letter A somehow correlates with political preference, our sample is random
with respect to the question we are asking: How are you going to vote in the upcoming
election?*1

Then there is the issue of how broadly we can expect a study’s findings to apply. When can
we extrapolate what we find from one population to other populations? One aim of social
psychology is to uncover universals of human cognition, yet a vast majority of studies in social
psychology are conducted on what Joe Henrich and colleagues have dubbed WEIRD
populations: Western, Educated, Industrialized, Rich, and Democratic. Of these studies, most
are conducted on the cheapest, most convenient population available: college students who
have to serve as study subjects for course credit.

How far can we generalize based on the results of such studies? If we find that American
college students are more likely to engage in violent behavior after listening to certain kinds of
music, we need to be cautious about extrapolating this result to American retirees or German
college students, let alone people in developing countries or members of traditional societies.

You might think that basic findings about something like visual perception should apply
across demographic groups and cultures. Yet they do not. Members of different societies vary
widely in their susceptibility to the famous Müller-Lyer illusion, in which the direction of
arrowheads influences the apparent length of a line. The illusion has by far the strongest
effect on American undergraduates.*2 Other groups see little or no difference in the apparent
line length.

Again, where you look determines what you see.

WHAT YOU SEE DEPENDS ON WHERE YOU LOOK

If you study one group and assume that your results apply to other groups, this is
extrapolation. If you think you are studying one group, but do not manage to obtain a
representative sample of that group, this is a different problem. It is a problem so important
in statistics that it has a special name: selection bias. Selection bias arises when the
individuals that you sample for your study differ systematically from the population of
individuals eligible for your study.

For example, suppose we want to know how often students miss class sessions at the
University of Washington. We survey the students in our Calling Bullshit class on a sunny
Friday afternoon in May. The students’ responses indicate that they miss, on average, two
classes per semester. This seems implausibly low, given that our course is filled to capacity
and yet only about two-thirds of the seats are occupied on any given day. So are our students
lying to us? Not necessarily. The students who are answering our question are not a random
sample of eligible individuals—all students in our class—with respect to the question we are
asking. If they weren’t particularly diligent in their attendance, the students in our sample
wouldn’t have been sitting there in the classroom while everyone else was outside soaking up
the Friday afternoon sunshine.

Selection bias can create misleading impressions. Think about the ads you see for auto
insurance. “New GEICO customers report average annual savings over $500” on car
insurance. This sounds pretty impressive, and it would be easy to imagine that this means
that you will save $500 per year by switching to GEICO.

But then if you look around, many other insurance agencies are running similar ads.
Allstate advertisements proclaim that “drivers who switched to Allstate saved an average of
$498 per year.” Progressive claims that customers who switched to them saved over $500.
Farmers claims that their insured who purchase multiple policies save an average of $502.
Other insurance companies claim savings figures upward of $300. How can this be? How can
all of the different agencies claim that switching to them saves a substantial amount of
money? If some companies are cheaper than the competition, surely others must be more
expensive.

The problem with thinking that you can save money by switching to GEICO (or Allstate, or
Progressive, or Farmers) is that the people who switched to GEICO are nowhere near a
random sample of customers in the market for automobile insurance. Think about it: What
would it take to get you to go through the hassle of switching to GEICO (or any other agency)?
You would have to save a substantial sum of money. People don’t switch insurers in order to
pay more!

Different insurance companies use different algorithms to determine your rates. Some
weight your driving record more heavily; some put more emphasis on the number of miles
you drive; some look at whether you store your car in a garage at night; others offer lower
rates to students with good grades; some take into account the size of your engine; others
offer a discount if you have antilock brakes and traction control. So when a driver shops
around for insurance, she is looking for an insurer whose algorithms would lower her rates
considerably. If she is already with the cheapest insurer for her personal situation, or if the
other insurers are only a little cheaper, she is unlikely to switch. The only people who switch
are those who will save big by doing so. And this is how all of the insurers can claim that those
who switch to their policies save a substantial sum of money.

This is a classic example of selection bias. The people who switch to GEICO are not a
random sample of insurance customers, but rather those who have the most to gain by
switching. The ad copy could equivalently read, “Some people will see their insurance
premiums go up if they switch to GEICO. Other people will see their premiums stay about the
same. Yet others will see their premiums drop. Of these, a few people will see their premiums
drop a lot. Of those who see a substantial drop, the average savings is $500.” While accurate,
you’re unlikely to hear a talking reptile say something like this in a Super Bowl commercial.*3
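
To see how every company can make this claim at once, consider a toy simulation. All of the numbers in the sketch below (the $1,000 average premium, the four hypothetical insurers, the $300 threshold that makes switching worth the hassle) are invented for illustration; the point is only that averaging over the people who chose to switch inflates the reported savings.

```python
# Toy simulation of insurance switching (all numbers invented for illustration).
# Every driver gets a quote from each insurer; quotes differ because the
# companies weight risk factors differently. A driver only switches when the
# best competing quote beats the current premium by a wide margin.
import random

random.seed(0)
INSURERS = ["A", "B", "C", "D"]          # hypothetical companies
HASSLE_THRESHOLD = 300                   # assumed minimum saving worth the paperwork

savings_by_insurer = {name: [] for name in INSURERS}
for _ in range(100_000):
    quotes = {name: random.gauss(1000, 200) for name in INSURERS}
    current = random.choice(INSURERS)            # the driver's current insurer
    cheapest = min(quotes, key=quotes.get)       # the best deal available to this driver
    saving = quotes[current] - quotes[cheapest]
    if cheapest != current and saving > HASSLE_THRESHOLD:
        savings_by_insurer[cheapest].append(saving)

# Every company can truthfully report large average savings among its switchers,
# because the switchers are exactly the drivers who had the most to gain.
for name, savings in savings_by_insurer.items():
    if savings:
        print(f"Insurer {name}: average saving among switchers ${sum(savings) / len(savings):.0f}")
```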

In all of these cases, the insurers presumably know that selection bias is responsible for
the favorable numbers they are able to report. Smart consumers realize there is something
misleading about the marketing, even if it isn’t quite clear what that might be. But sometimes
the insurance companies themselves can be caught unaware. An executive at a major
insurance firm told us about one instance of selection bias that temporarily puzzled his team.
Back in the 1990s, his employer was one of the first major agencies to sell insurance policies
online. This seemed like a valuable market to enter early, but the firm’s analytics team turned
up a disturbing result about selling insurance to Internet-savvy customers. They discovered
that individuals with email addresses were far more likely to have filed insurance claims than
individuals without.

If the difference had been minor, it might have been tempting to assume it was a real
pattern. One could even come up with any number of plausible post hoc explanations, e.g.,
that Internet users are more likely to be young males who drive more miles, more recklessly.
But in this case, the difference in claim rates was huge. Our friend applied one of the most
important rules for spotting bullshit: If something seems too good or too bad to be true, it
probably is. He went back to the analytics team that found this pattern, told them it couldn’t
possibly be correct, and asked them to recheck their analysis. A week later, they reported back
with a careful reanalysis that replicated the original result. Our friend still didn’t believe it and
sent them back to look yet again for an explanation.

This time they returned a bit sheepishly. The math was correct, they explained, but there
was a problem with the data. The company did not solicit email addresses when initially
selling a policy. The only time they asked for an email address was when someone was in the
process of filing a claim. As a result, anyone who had an email address in the company’s
database had necessarily also filed a claim. People who used email were not more likely to file
claims—but people who had filed claims were vastly more likely to have email addresses on
file.
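
A toy reconstruction of the data quirk makes the pattern easy to see. The 10 percent claim rate and 50 percent email-use rate below are invented; the only structural assumption is the one the analytics team uncovered, namely that an email address is recorded only when a claim is filed.

```python
# Toy reconstruction of the insurer's data quirk (rates are invented).
# Email use is independent of filing a claim, but the company only records
# an email address during the claims process, so "has an email on file"
# is effectively another way of saying "has filed a claim."
import random

random.seed(3)

records = []
for _ in range(100_000):
    filed_claim = random.random() < 0.10          # assume 10% of customers file a claim
    uses_email = random.random() < 0.50           # assume half of customers use email
    email_on_file = filed_claim and uses_email    # captured only while filing a claim
    records.append((filed_claim, email_on_file))

with_email = [filed for filed, on_file in records if on_file]
without_email = [filed for filed, on_file in records if not on_file]

print(sum(with_email) / len(with_email))          # 1.0: every customer with an email on file filed a claim
print(sum(without_email) / len(without_email))    # roughly 0.05, well below the true 10% rate
```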

Selection effects appear everywhere, once you start looking for them. A psychiatrist friend
of ours marveled at the asymmetry in how psychiatric disorders are manifested. “One in four
Americans will suffer from excessive anxiety at some point,” he explained, “but in my entire
career I have only seen one patient who suffered from too little anxiety.”

Of course! No one walks into their shrink’s office and says “Doctor, you’ve got to help me.
I lie awake night after night not worrying.” Most likely there are as many people with too little
anxiety as there are with too much. It’s just that they don’t go in for treatment. Instead they
end up in prison, or on Wall Street.

THE HIDDEN CAUSE OF MURPHY’S LAW

In Portugal, about 60 percent of families with children have only one child, but about 60
percent of children have siblings. This sounds impossible, but it’s not. The picture below
illustrates how this works. Out of twenty Portuguese families, we would expect to see about
twelve with a single child, seven with two children, and one with three children. Thus most
families are single-child, but because multi-child families each have multiple children, most
children live in multi-child families.
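
If you want to check the arithmetic, here is a minimal sketch in Python using the hypothetical twenty families described above (the counts come from the text, not from census data):

```python
# The hypothetical twenty families from the text: twelve with one child,
# seven with two children, one with three children.
family_sizes = [1] * 12 + [2] * 7 + [3] * 1

single_child_families = sum(1 for n in family_sizes if n == 1)
total_children = sum(family_sizes)
children_with_siblings = sum(n for n in family_sizes if n > 1)

print(f"Families with only one child: {single_child_families / len(family_sizes):.0%}")  # 60%
print(f"Children who have siblings:   {children_with_siblings / total_children:.0%}")    # 59%
```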

What does this have to do with bullshit? Universities boast about having small class sizes,
but students often find these statistics hard to believe: “An average class size of 18? That’s
bullshit! In three years I’ve only had two classes with fewer than 50 students!”

Both the students and the university are right. How? This difference in perception arises
because, just as multi-child families have a disproportionately large number of children, large
classes serve a disproportionately large number of students. Suppose that in one semester, the
biology department offers 20 classes with 20 students in each, and 4 classes with 200
students in each. Look at it from an administrator’s perspective. Only 1 class in 6 is a large
class. The mean class size is [(20 × 20) + (4 × 200)] / 24 = 50. So far so good.

But now notice that 800 students are taking 200-student classes and only 400 are taking
20-student classes. Five classes in six are small, but only one student in three is taking one of
those classes. So if you ask a group of random students how big their classes are, the average
of their responses will be approximately [(800 × 200) + (400 × 20)] / 1,200 = 140. We will
call this the experienced mean class size,*4 because it reflects the class sizes that students
actually experience.
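
Here is a minimal sketch of that calculation in Python, using the hypothetical biology department described above; the two averages differ only in whether each class or each student gets one vote.

```python
# The hypothetical biology department from the text: 20 classes of 20 students
# and 4 classes of 200 students.
class_sizes = [20] * 20 + [200] * 4

# The administrator's view: average over classes.
mean_class_size = sum(class_sizes) / len(class_sizes)
print(mean_class_size)  # 50.0

# The students' view: average over students, so each class is weighted
# by its own enrollment.
experienced_mean = sum(size * size for size in class_sizes) / sum(class_sizes)
print(experienced_mean)  # 140.0
```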

Because larger classes contain more students, the average student is enrolled in a class
that is larger than the average class. Institutions can exploit this distinction to promote their
own agendas. A university recruiting pamphlet might proclaim, “The average biology class
size is 50 students.” The student government, lobbying for reduced class sizes, might report

that “the average biology student is taking a class of 140.” Neither is false, but they tell very
different stories.

This principle explains why faculty are likely to have a different notion of class sizes than
students do. If you wanted to know how big classes are, you might think you could ask either
students or teachers and get the same answer. So long as everyone tells the truth, it shouldn’t
matter. But it does—a lot. Large or small, each class has one instructor. So if you sample
instructors at random, you are likely to observe a large class or a small class in proportion to
the frequency of such classes on campus. In our example above, there are more teachers
teaching small classes. But large classes have many students and small classes have few, so if
you sample students at random, students are more likely to be in large classes.*5

Recall from chapter 5 Goodhart’s law: “When a measure becomes a target, it ceases to be a
good measure.” Class sizes provide an example. Every autumn, college and university
administrators wait anxiously to learn their position in the U.S. News & World Report
university rankings. A higher ranking improves the reputation of a school, which in turn
draws applications from top students, increases alumni donations, and ultimately boosts
revenue and reputation alike. It turns out that class size is a major ingredient in this ranking
process, with a strong premium placed on small classes.

Schools receive the most credit in this index for their proportions of undergraduate
classes with fewer than 20 students. Classes with 20 to 29 students score second
highest, 30 to 39 students third highest and 40 to 49 students fourth highest. Classes
that are 50 or more students receive no credit.

By summing up points across classes, the U.S. News & World Report ranking is rewarding
schools that maximize the number of small classes they offer, rather than minimizing the
experienced mean class size. Since the student experience is what matters, this may be a
mistake. Consider the numerical example we provided earlier. The biology department in that
example has 24 instructors and 1,200 students enrolled. Classes could be restructured so that
each one has 50 students. In this case the experienced mean class size would plummet from
140 to 50, but the department would go from a good score to a bad score according to the U.S.
News & World Report criterion.*6 To eliminate this perverse incentive for universities to pack
most of their students into large classes, we suggest that the U.S. News ranking use
experienced class size, rather than class size, in their calculations.
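
Continuing the sketch above, the restructured department (an illustration, not a proposal by any particular university) gives the same answer from both perspectives:

```python
# The restructured department: the same 24 instructors and 1,200 students,
# arranged into 24 equal classes of 50.
equal_sizes = [50] * 24
print(sum(equal_sizes) / len(equal_sizes))                  # mean class size: 50.0
print(sum(s * s for s in equal_sizes) / sum(equal_sizes))   # experienced mean class size: 50.0
```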

The same mathematical principles explain the curious fact that most likely, the majority of
your friends have more friends than you do. This is not true merely because you are the kind
of person who reads a book about bullshit for fun; it’s true of anyone, and it’s known as the
friendship paradox. Understanding the friendship paradox is a bit more difficult than
understanding the class size issue that we just treated, but a basic grasp of the problem should
not be beyond reach. The sociologist Scott Feld, who first outlined this paradoxical result,
explains it as follows. Suppose people have ten friends, on average. (We say ten rather than
five hundred because Feld’s paper was written in 1991, back when “friends” were people you
actually had met in person—and often even liked.) Now suppose in your circle there is an
introvert with five friends and a socialite with fifteen friends. Taken together they average ten
friends each. But the socialite is friends with fifteen people and thus makes fifteen people feel
insecure about how few friends they have, whereas the introvert is friends with only five and
thus makes only five people feel better about themselves.

It’s well and good to make intuitive arguments, but is the friendship paradox actually true
in real life? Feld looked at a diagram of friendships among 146 adolescent girls, painstakingly
collected three decades prior. He found that while many of these girls had fewer friends than their
friends, relatively few had more friends than their friends.

But this is just one sample of one group in one small town. We would like to address this
question on a far broader scale, and in the social media age, researchers can do so. One team
looked at 69 billion friendships among 720 million users on Facebook. They found, indeed, that
most users have fewer friends than their friends. In fact, this is the case for 93 percent of
Facebook users! Mind twisting, right? These researchers found that the Facebook users have,
on average, 190 friends, but their friends have, on average, about 635 friends.

Subsequent studies have distinguished between a weak form and a strong form of the
friendship paradox. The weak form pertains to the mean (average) number of friends that
your friends have. The weak form is maybe not so surprising: Suppose you follow Rihanna
and 499 other people on Twitter. Rihanna has over ninety million followers, so the 500
people you follow will average at the very least 90,000,000 / 500 = 180,000 followers—far
more than you have. The strong form is more striking. It states that most people have fewer
friends than their median friend has. In other words, order your friends by the number of
friends they have. Pick the friend that is right in the middle. That friend likely has more
friends than you do. This phenomenon can’t be attributed to a single ultrapopular friend. The
same research team found that the strong form holds on Facebook as well: 84 percent of
Facebook users have fewer friends than the median friend count of their friends. Unless you
are Kim Kardashian or someone of similar ilk, you are likely to be in the same situation.
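
The paradox is also easy to reproduce in simulation. The sketch below builds a small synthetic friendship network (a toy preferential-attachment graph, not the Facebook data the researchers used) and checks both the weak and the strong form; on a skewed network like this, most people should come out behind on both counts.

```python
# Simulating the friendship paradox on a synthetic network. This is a toy
# preferential-attachment graph (popular people attract new friends faster),
# not real social media data; the sizes are arbitrary.
import random
from statistics import mean, median

random.seed(1)

# Each new person befriends 3 existing people, chosen with probability
# proportional to how many friends those people already have.
friends = {0: set(), 1: set(), 2: set()}
targets = [0, 1, 2]   # sampling pool: each person appears once per friendship
for newcomer in range(3, 2000):
    chosen = set()
    while len(chosen) < 3:
        chosen.add(random.choice(targets))
    friends[newcomer] = set(chosen)
    for person in chosen:
        friends[person].add(newcomer)
    targets.extend(chosen)
    targets.extend([newcomer] * 3)

weak = strong = 0   # weak form: fewer than friends' mean; strong form: fewer than friends' median
for person, their_friends in friends.items():
    counts = [len(friends[f]) for f in their_friends]
    if len(their_friends) < mean(counts):
        weak += 1
    if len(their_friends) < median(counts):
        strong += 1

n = len(friends)
print(f"Fewer friends than their friends' mean:   {weak / n:.0%}")
print(f"Fewer friends than their friends' median: {strong / n:.0%}")
```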

You may find it disconcerting to realize that the same logic applies to your past sexual
history. Chances are, the majority of your partners have slept with more people than you
have.

Okay. Forget we mentioned that. Back to statistics. Selection effects like this one are
sometimes known as observation selection effects because they are driven by an association
between the very presence of the observer and the variable that the observer reports. In the
class size example, if we ask students about the size of their classes, there is an association
between the presence of the student observer and the class size. If instead we ask teachers
about the sizes of their classes, there are no observation selection effects because each class
has only one teacher—and therefore there is no association between the presence of a teacher
in the classroom and the size of the class.

Observation selection effects explain some of what we typically attribute to bad luck. If
you commute by bus on a regular basis, you have probably noticed that you often have to wait
a surprisingly long time for the next bus to arrive. But what is considered a “long wait”? To
answer this question, we need to compare your wait to the average waiting time. Suppose that
buses leave a bus stop at regular ten-minute intervals. If you arrive at an arbitrary time, how
long do you expect to wait, on average? The answer: five minutes. Since you might arrive
anywhere in the ten-minute window, a nine-minute wait is just as likely as a one-minute wait,
an eight-minute wait is just as likely as a two-minute wait, and so on. Each pair averages out
to five minutes. In general, when the buses run some number of minutes apart, your average
waiting time will be half of that interval.

What happens if the city operates the same number of buses, so that buses leave every ten
minutes on average—but traffic forces the buses to run somewhat irregularly? Sometimes the
time between buses is quite short; other times it may extend for fifteen minutes or more. Now
how long do you expect to wait? Five minutes might seem like a good guess again. After all,
the same number of buses are running and the average time between buses is still ten
minutes.

But the actual average time that you will wait is longer. If you were equally likely to arrive
during any interval between buses, the differences in between-bus gaps would average out
and your average waiting time would be five minutes, as before. But you are not equally likely
to arrive during any interval. You are more likely to arrive during one of the long intervals
than during one of the short intervals. As a result, you end up waiting longer than five
minutes, on average.

In the picture above, buses run every 10 minutes, on average, but they are clumped
together so that some intervals are 16 minutes long and others are only 4 minutes long. You
have an 80 percent chance of arriving during one of the long intervals, in which case you will
wait 8 minutes on average. Only 20 percent of the time will you arrive during one of the short
intervals and wait 2 minutes on average. Overall, your average wait time will be (0.8 × 8) +
(0.2 × 2) = 6.8 minutes, substantially longer than the 5 minutes you would wait on average if
the buses were evenly spaced.
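
Both schedules are easy to simulate. The sketch below assumes riders arrive at uniformly random times over a long run of buses, and compares an evenly spaced timetable with the alternating 16- and 4-minute gaps from the example above.

```python
# Simulating the bus-stop example. Assumptions: riders arrive at uniformly
# random times; one schedule has buses exactly 10 minutes apart, the other
# alternates 16- and 4-minute gaps (still 10 minutes apart on average).
import bisect
import random

random.seed(2)

def average_wait(gaps, n_buses=10_000, n_riders=100_000):
    """Average wait for riders arriving at random, given a repeating gap pattern (minutes)."""
    departures = [0.0]
    for i in range(n_buses):
        departures.append(departures[-1] + gaps[i % len(gaps)])
    total = 0.0
    for _ in range(n_riders):
        arrival = random.uniform(0, departures[-1])
        next_bus = bisect.bisect_left(departures, arrival)   # first departure at or after arrival
        total += departures[next_bus] - arrival
    return total / n_riders

print(average_wait([10]))      # about 5.0 minutes: evenly spaced buses
print(average_wait([16, 4]))   # about 6.8 minutes: same average gap, irregular spacing
```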

So while it seems as though you tend to get unlucky with waiting times and wait longer
than expected, it’s not bad luck at all. It is just an observation selection effect. You are more
likely to be present during a long between-bus interval, so you end up waiting and waiting.
