The West Kowloon Cultural District Poll Fraud
The background to the public opinion poll on the West Kowloon Cultural District was previously covered in this post: Pictures At An Exhibition. My assertion was that the poll was a meaningless exercise that pretended to satisfy a sense of democracy on a question that should not have been left to public opinion. Anyway, the poll results have just been released.
(The Standard) Poll fraud suspected in West Kowloon consultation. By Teddy Ng. October 7, 2005.
One of the three shortlisted bidders was alleged to have tried to manipulate a government-funded public consultation exercise completed earlier this year seeking feedback on the three proposals for the West Kowloon Cultural District project.
The consultation exercise began in mid-December 2004 and ended in June, during which the public was admitted free to exhibitions featuring the three proposals at several venues. Visitors were given a comment card on which to express their opinions. The cards were placed in collection boxes at the exhibition venues or sent in via the Internet, fax or by post. The consultation exercise was commissioned by the Public Policy Research Institute of the Hong Kong Polytechnic University.
The research institute received a total of 33,416 comment cards. Some 72.7 percent of the cards received responded to the question "Which proposal should be taken forward to the next phase?" Of these, 54.7 percent preferred proposal Z, 33.8 percent went for proposal X and 18.2 percent chose proposal Y.
However, the research institute found that 4,176 cards responding to that question had been flagged, meaning they were submitted in identical envelopes or had similar mailing labels, and that their answers to the questions were very similar. More than 90 percent of the flagged cards voted for proposal Z. Excluding the flagged cases, the percentages favoring Z and X were closer. The percentage favoring Z dropped to 47.8 percent, and the percentage favoring X increased to 39.9 percent.
... the government did not disclose the names of the X, Y and Z bidders. Nor did any of the three respond to the accusation. Deputy Secretary for Housing, Planning and Lands Au King-chi said it was difficult to judge whether there was an attempt to defraud.
As to who X, Y and Z are, Ming Pao offered some half-guesses.
Now we turn to the more fun part: the technical aspects. The report in The Standard was carefully written, in the sense that it contains some subtle phrasing that a professional statistician would recognize.
For example, the report said the comment cards were "submitted in identical envelopes or had similar mailing labels." The use of the conjunction 'or' means that at least two types of instruments were at issue.
By comparison, Ming Pao was sloppy in reporting the same fact as "4000多張信封一模一樣" (translation: more than four thousand identical envelopes), even though Ming Pao offered graphics of two of the envelopes used. Apple Daily reported "在2005年2月21的一周開始，發現收到4176份以相同信封寄出的郵寄意見卡" (translation: beginning in the week of February 21, 2005, 4,176 opinion cards sent in identical envelopes arrived in the mail).
The next thing to note in the report was that "their answers to the questions were very similar. More than 90 percent of the flagged cards voted for proposal Z." There is a subtlety with respect to the use of the descriptor 'very similar.' For one thing, it does not mean that the 4,176 questionnaires were identical. After all, they did not even all agree on proposal Z.
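To make the 'very similar but not identical' distinction concrete, here is a minimal sketch of how near-duplicate answer patterns could be flagged in principle. The data, card contents and similarity threshold are all invented for illustration; the institute's actual flagging combined envelope and mailing-label evidence with answer patterns.

```python
# Sketch: flag near-duplicate questionnaires by answer-vector similarity.
# All data and the threshold are hypothetical.
from itertools import combinations

def similarity(a, b):
    """Fraction of questions answered identically on two cards."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Each card is a tuple of coded answers (question 1, 2, 3, ...).
cards = [
    ("Z", "agree", "yes", "canopy"),
    ("Z", "agree", "yes", "canopy"),      # identical to card 0
    ("Z", "agree", "no", "canopy"),       # similar but not identical
    ("X", "disagree", "no", "no_canopy"), # clearly independent
]

THRESHOLD = 0.75  # arbitrary cutoff for "very similar"
flagged = set()
for i, j in combinations(range(len(cards)), 2):
    if similarity(cards[i], cards[j]) >= THRESHOLD:
        flagged.update({i, j})

print(sorted(flagged))  # the first three cards cluster together
```

Note that card 2 gets flagged even though it disagrees with cards 0 and 1 on one question, which is exactly the situation the report's wording allows for: a cluster of cards can be 'very similar' without being identical, and without all of them voting for Z.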
If this is to be called a fraud, then there are two models of operation.
First, persons acting on behalf of Z (either as official policy or upon their own initiative) printed copies of the questionnaire from the Internet and distributed them among friends, families, co-workers and clients together with pre-printed envelopes/mailing labels. As such, they were coming from different individuals. However, this is not necessarily fraudulent, because those respondents have every right to articulate their opinions. Some of the respondents did not provide personal data, but that was not obligatory as some people may have privacy concerns. These respondents were influenced with respect to the desirable answers for the key question (such as the best proposal), but they are free to choose their answers on the other questions. Thus, the responses may be similar but they are not identical to each other.
Second, persons acting on behalf of Z (either as official policy or upon their own initiative) printed copies of the questionnaire from the Internet, filled in the questionnaires themselves and mailed them in. This is a well-known survey research issue and is statistically detectable. I will give some instances from my personal experience.
In certain Latin American countries, I ran a consumer survey. The mode of interviewing was door-to-door, face-to-face personal interviewing, since neither the mail nor the telephone system had sufficient coverage of the population. Since interviewers were paid on a per-interview basis, they were tempted to cheat. Why take a two-hour bus ride to knock on a door, only to find no one home? Why not just stay home and fill out all the questionnaires?
When the survey questionnaires came back, I had an automatic analysis system in place to compare each interviewer's work against the norm. Now, the comparison is not necessarily about incidence levels, because those can be legitimately different. For example, if an interviewer worked in an Amish community, then all the responses could come back with no television viewing. That interviewer's work would be radically different from the norm, but understandably so.
It turns out that the more revealing comparisons are about the internal variance within an interviewer's work compared to the norm of all other interviewers. The reason is simple: most people are not very good at lying or cheating.
- For a readership study, a cheater may get the idea that People and Reader's Digest are popular and check off more responses for them. Unfortunately for the cheater, they don't know the relationship between the People and Reader's Digest readerships. As a result, the cheater may show statistically significant differences from the norm with respect to the duplicated audiences. Another simple check is this: you ask people about their readership of a couple hundred magazines. What is the average number of magazines actually read? This is where most cheaters get caught, as they may show an average of 15 plus or minus 5, whereas the norm is really 8, ranging from zero to more than 100. Nothing in the cheater's life experience allows them to deduce the expected distributions.
- For an opinion study, there may be a question like "My friends consult me on financial investments" with the fixed responses "Strongly Agree; Agree; Neither Agree Nor Disagree; Disagree; Strongly Disagree." The cheater will have no idea how to distribute the answers among the response categories (equally? more in the middle?). When compared to the norm, there will be a consistent and statistically significant pattern.
- For a product usage study, the cheater won't be familiar with the brand positions and volumetrics. How many brands of shampoo do people use? How often?
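The screening logic described above can be sketched as follows. The interviewer batches, numbers and cutoffs here are all invented for illustration; a real system would screen many questions and use proper significance tests. The point of the sketch is that interviewer C is caught not by an unusual mean but by an implausibly small internal variance, which matches the observation that cheaters cannot fake the spread of real data.

```python
# Sketch: screen each interviewer's work against the norm of all others.
# Data and thresholds are illustrative, not from any real survey.
from statistics import mean, stdev

# Number of magazines read per respondent, one list per interviewer batch.
batches = {
    "A": [5, 9, 3, 12, 0, 7, 22, 4],
    "B": [8, 2, 15, 6, 0, 11, 3, 9],
    "C": [14, 16, 15, 13, 17, 15, 14, 16],  # suspiciously tight around 15
}

def suspicious(name, z_cut=2.0, spread_cut=0.5):
    """Flag a batch whose mean is far from the norm OR whose internal
    variance is implausibly small compared to everyone else's."""
    own = batches[name]
    others = [x for k, v in batches.items() if k != name for x in v]
    mu, sigma = mean(others), stdev(others)
    z = (mean(own) - mu) / sigma        # shift of this batch's mean
    spread_ratio = stdev(own) / sigma   # internal variance vs the norm
    return abs(z) > z_cut or spread_ratio < spread_cut

for name in batches:
    print(name, suspicious(name))
```

Interviewers A and B have noisy, wide-ranging answers and pass; interviewer C's answers hover around a 'plausible' value of 15 with almost no spread, and the variance test flags the batch even though the mean alone would not look extreme.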
In any case, these comment cards are highly problematic. There are 33,416 comment cards from a population of 6.9 million, of whom 3.2 million are registered voters. If this were a random sample, it would be more than adequate. But this was known to be a self-selected sample, whether or not fraud was involved. So the comment cards cannot be taken to represent anything except themselves (and that is a relatively small number compared to the total population).
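To see why a genuine random sample of that size would be 'more than adequate,' here is the standard worst-case margin-of-error calculation at the 95 percent confidence level. The calculation is of course meaningless for a self-selected sample, which is the whole point.

```python
# Sketch: margin of error for a simple random sample of n = 33,416.
# This formula applies ONLY to random samples, not self-selected ones.
import math

n = 33416
p = 0.5  # worst-case proportion maximizes the margin of error
moe = 1.96 * math.sqrt(p * (1 - p) / n)

print(f"+/- {100 * moe:.2f} percentage points")
```

A random sample of 33,416 would pin down a proportion to within about half a percentage point, which is why a statistician would be delighted with such a sample size. Self-selection, however, introduces a bias that no sample size can cure.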
If we cannot trust the comment cards that were collected, then there is another source of information.
The Public Policy Research Institute of the Hong Kong Polytechnic University conducted three telephone polls comprising 4,553 respondents for the Hong Kong government. The telephone surveys revealed that more than 80 percent of the respondents had no opinion on which proposal the government should take forward to the next phase.
More than 60 percent of the respondents said a cultural district should be established along the waterfront area of West Kowloon, and more than 75 percent said the project should sustain cultural and arts development of Hong Kong.
More than half also supported the use of the canopy as the landmark for the project.
Here are some more details in Apple Daily:
- 8% supported proposal Z
- 3.5% supported proposal X
- 2.5% supported proposal Y
- 5% objected to all three proposals
- 81% did not respond to this question
What does this prove? That proposal Z is the most popular choice? No, not at 8%. More importantly, this proves that 81% of the respondents were smart enough to recognize that they may not be qualified or informed to decide how to spend HK$30 billion, if at all.