Associate Professor Mohd Azizuddin Mohd Sani of Universiti Utara Malaysia offered a critique of our latest INVOKE Centre for Policy Initiatives (I-CPI) survey, as reported in Malaysiakini on 26 July 2017. We would like to address his concerns about our response rate and the representativeness of our surveys.
Associate Professor Mohd Azizuddin cited the fact that many of those we contacted did not answer our survey questions as a key reason why the survey findings do not reflect the actual level of support for BN, PH and PAS.

However, a low response rate does not necessarily make a survey inaccurate.
I-CPI would like to take this opportunity to share the nitty-gritty of conducting a poll, so that the public understands why we decided to publish three important pieces of information that have not been disclosed by any other poll conducted so far.
For every survey finding made available to the public, I-CPI publishes the number of voters actually contacted, the number who answered, and the number who stayed on the line to answer all the questions. For the latest survey, which Associate Professor Mohd Azizuddin disputed, our proprietary computerised polling system called 2.5 million voters at random, of whom 160,761 actually picked up the call. Of those, 17,107 voters completed the survey, and their responses formed the data for the finding shared with the public.
This gives a response rate (the proportion of calls that were picked up, rather than going straight to voicemail or remaining unanswered) of 6.54%, and a completion rate (the proportion of those who picked up who went on to answer all the questions) of 10.64%.
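For readers who wish to check the arithmetic, both rates can be recomputed directly from the published counts. Note that the 2.5 million figure above is rounded, so the recomputed response rate differs slightly from the exact 6.54%:

```python
# Back-of-the-envelope check of the published rates. The 2.5 million
# figure is rounded in the statement, so the response rate computed
# here comes out slightly below the exact published 6.54%.
calls_made = 2_500_000   # rounded figure from the statement
picked_up = 160_761
completed = 17_107

response_rate = picked_up / calls_made * 100
completion_rate = completed / picked_up * 100

print(f"response rate:   {response_rate:.2f}%")    # ~6.43% vs published 6.54%
print(f"completion rate: {completion_rate:.2f}%")  # 10.64%, matching the text

# Working backwards from the exact 6.54%, the unrounded number of
# calls placed was roughly 2.46 million:
implied_calls = picked_up / 0.0654
print(f"implied calls:   {implied_calls:,.0f}")
```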
The reason we publish this population information (number of voters called, number who responded and number who completed) is so that the public can benchmark our findings against internationally recognised surveys by comparing response rates. We hope other pollsters will emulate this practice in the future, to allow for comparability and reliability.
Therefore, for a poll that cites findings from 1,500 voters, the pollster would typically have to call between 100,000 and 150,000 randomly selected voters, given average response and completion rates (the completion rate is typically lower when the questions relate to voting preference). If a poll claims to have obtained complete responses from 1,500 voters out of only 5,000 called, the extremely high response and completion rates would flag irregularities, precisely because a survey's response rate can be benchmarked internationally.
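The arithmetic behind that estimate can be sketched as follows. The specific rates used here are illustrative assumptions, not figures from any particular survey; note how the call volume rises sharply once the completion rate drops for politically sensitive questionnaires:

```python
import math

def calls_needed(target_completes, response_rate, completion_rate):
    """Calls to place so that target_completes voters finish the survey."""
    return math.ceil(target_completes / (response_rate * completion_rate))

# Illustrative "average" rates for a non-political survey (assumed):
# 6.5% of calls picked up, 20% of those completing every question.
print(calls_needed(1_500, 0.065, 0.20))      # ~115,000 calls

# With the rates observed in this political survey, the figure is
# roughly twice as high:
print(calls_needed(1_500, 0.0654, 0.1064))   # ~216,000 calls
```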
We can confirm that our response rate for this survey is consistent with our previous surveys and with response rates around the world. For example, according to "Assessing the Representativeness of Public Opinion Surveys" by the Pew Research Center, a 9% response rate is typical for the United States. In the United Kingdom, it is 6.66%, as suggested by Martin Boon of ICM International to BBC News. For our latest survey, it was 6.54%, which is within the band of response rates elsewhere in the world (if anything, one would expect a much lower response rate among Malaysian voters compared to American and British voters).
(For more reading on this, please refer to http://www.pewresearch.org/2017/05/15/what-low-response-rates-mean-for-telephone-surveys/)
The low response rate disputed by Associate Professor Mohd Azizuddin could give rise to a few concerns about the accuracy and representativeness of the survey findings.
One typical concern, highlighted regularly by members of the public, is the perception that a low response rate results in a small sample size. There has long been confusion or misunderstanding about how large a sample must be to accurately represent national voting sentiment. We often receive feedback that a survey finding from 17,000 voters is useless because the sample constitutes only 0.05% of the Malaysian population and is therefore too negligible to be representative.
Our final sample size for the survey that Associate Professor Mohd Azizuddin referred to was 17,107 respondents. This gives us a margin of error arising from random sampling of +/- 2%. By any standard, a sample size of 17,107 respondents is actually very large, and is often seen as unnecessary. In theory, if the sample is chosen randomly with as little bias as possible, a sample size of 1,000 can be as representative as a sample size of 100,000. Polls in the United States and Europe typically have sample sizes of between 1,000 and 2,000 respondents, and these are internationally accepted.
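This can be verified with the textbook margin-of-error formula under simple random sampling (95% confidence, worst case p = 0.5). The formula shows the diminishing returns of larger samples, and also that our published +/- 2% is more conservative than the pure simple-random-sampling figure for 17,107 respondents:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error under simple random sampling, worst case p=0.5."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (1_000, 2_000, 17_107):
    print(f"n = {n:>6}: +/- {margin_of_error(n) * 100:.2f} points")
# n =   1,000 gives roughly +/- 3.1 points, n = 2,000 roughly +/- 2.2,
# and n = 17,107 well under +/- 1 point.
```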
The other concern Associate Professor Mohd Azizuddin might have (given our overall response rate) is that the response rate for specific demographic groups may be even lower, effectively excluding those groups from the survey. From six months of non-stop polling, we have been able to track how particular demographic sub-groups (for example, non-Bumiputera women above 50 years old from the Peninsular East Coast) respond to a call from us. Some groups are more likely to answer surveys, some less so. Our proprietary computerised polling system automatically decides to call more or fewer numbers from each sub-group to compensate for these different response rates. This ensures that the breakdown of responses at the end of the process is demographically representative of Malaysian voters.
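Our dialing system itself is proprietary, but the general idea can be sketched as follows. The sub-group names, target shares and historical rates below are hypothetical, for illustration only: sub-groups that historically answer less often simply get more numbers dialed, so the completed sample lands on the target demographic mix:

```python
# Hypothetical illustration of response-rate-adjusted dialing.
# Sub-groups, target shares and completion-per-call rates are invented.
target_completes = 17_000

# name: (target share of final sample, historical completes per call)
subgroups = {
    "malay_women_under_30":        (0.12, 0.010),
    "chinese_men_40_to_50":        (0.08, 0.004),
    "non_bumi_women_over_50_east": (0.05, 0.002),
}

def calls_to_place(target_completes, subgroups):
    """Dial more numbers in sub-groups that historically answer less often."""
    plan = {}
    for name, (share, rate) in subgroups.items():
        needed = target_completes * share   # completed interviews wanted
        plan[name] = round(needed / rate)   # calls required to obtain them
    return plan

for name, calls in calls_to_place(target_completes, subgroups).items():
    print(f"{name}: {calls:,} calls")
```

The lowest-responding group here ends up allocated the most dialed numbers, which is the compensation mechanism described above.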
However, Associate Professor Mohd Azizuddin’s main contention (quite apart from demographics – age, ethnicity and gender) was his belief that the political attitudes of our respondents were biased because the questions leaned heavily against BN policies. We posed statements about issues such as 1MDB and GST to respondents, asking whether they agreed or disagreed. Associate Professor Mohd Azizuddin argued that since all the statements were starkly unfavourable to Barisan Nasional, it was more likely that many BN supporters hung up before finishing the survey.
The reality is that many of the people contacted hung up even before the first question. In fact, the “Agree/Disagree” statements about GST, 1MDB and other issues were not posed until the third question. The low response rate was primarily due to a very large percentage of our calls going to voicemail or going unanswered. This is normal, and our response rate so far is also comparable to telemarketing response rates in Malaysia.
[Chart: Tabulation of response rate as the survey progresses to the last question]
In fact, the reverse of the Associate Professor’s conclusion is probably true. Our experience over the past six months indicates that, if anything, the respondents to our surveys are biased towards supporting BN. A good indication of this is the question in our latest survey asking how respondents voted in GE13. The actual national popular vote split was 43%:40% in favour of Pakatan Rakyat, with 15% not voting. In contrast, our survey respondent profile is clearly skewed in favour of Barisan Nasional (please see the profile of the 17,107 respondents in terms of their GE13 voting history below). Indeed, it was largely for this reason that we designed the statements to be more “PH-friendly”, in part to compensate for this skew.
[Chart: GE13 voting history of the 17,107 respondents who took part in the latest survey]
We hope these explanations address the Associate Professor’s concerns (and those of any other parties who doubted our survey findings). As a matter of principle, and in the spirit of responsible polling, we disclose our response rates to ensure the credibility of our results. It goes without saying that Malaysians (many of whom would have hung up on us) would find a high response rate very surprising. It would be helpful if, in future, other pollsters adopted the same transparency; this would enhance the public’s trust in the poll numbers published from time to time.
Each of the 17,107 completed interviews is recorded and auditable. We welcome further queries, and invite any party that wishes to audit samples of the recorded interviews.
ON BEHALF OF INVOKE