Journal of Intercultural Communication, ISSN 1404-1634, issue 50, July 2019

The Mismeasure of Culture

Self-Report Questionnaires and Positivist Analysis in Intercultural Communication Research

Stewart Nield

Xi'an Jiaotong-Liverpool University

Abstract

The use of self-report questionnaires has been one of the most prevalent methods of data collection in Intercultural Communication research since as early as the 1950s. Yet, the method has received widespread criticism from a range of academic disciplines. For the field of Intercultural Communication, there is the added concern that self-report data and its epistemological companion, quantitative analysis, succeed only in reducing complex intercultural issues to an overly simplistic numerical summary. Despite these criticisms, the number and influence of self-report questionnaires in Intercultural Communication research continues to grow. This paper therefore seeks to remind researchers of the many problems that arise when asking respondents to self-report on complex intercultural issues. By way of an example, Ang et al’s (2007) Cultural Intelligence Scale (CQS) will be analysed through a critically interpretive approach.

Keywords: intercultural research, post-positivism, self-report questionnaires, cultural intelligence, cultural intelligence scale


1. Introduction

For over half a century, the use of self-report questionnaires has been one of the most common methods of data collection in academic research. Bryman (2004:731) argues that leadership research is “replete with countless studies that employ questionnaires”. Likewise, Paulhus & Vazire (2007:224) state that this method remains “the most commonly used mode of assessment – by far” in the diverse field of personality research, while Baldwin (2009:3) states that questionnaires are “essential” for behavioural and medical research. Other areas of study that frequently rely on self-report questionnaires include research into mental disabilities and autism, behavioural science and consumer attitudes, as well as the field that will be the focus of this paper: Intercultural Communication.

According to Matsumoto & Hwang (2013:849), there has been a recent surge throughout the Intercultural Communication literature in the development of cross-cultural competence questionnaires, which is evidenced by Deardorff’s (2016:211) assertion that there are currently over 100 cultural self-assessment tools. The convenience of administering these questionnaires and the fact that large sample sizes can be accessed are the primary reasons why self-report questionnaires maintain their popularity (Hamamura, Heine & Paulhus, 2008:933). Similarly, Thomas (2007:212) attributes this approval to the fact that they are “systematic, comparable and economic”.

Another perceived advantage of self-report instruments is an issue that will be discussed in more detail in this paper; that is, self-report questionnaires and the data they produce lend themselves conveniently to quantitative methods of analysis. Quantitative analysis has the benefit of being able to highlight what Bryman (2004:753) terms “eye-catching behaviours” in that the results of this kind of study are often able to provide a numerically elegant summary of a participant’s “score”. This can be further portrayed visually in the form of graphs or tables, or compared with scores of other groups of respondents. Tests can also be administered over different periods of time to assess any change in the respondent, while variables can be further identified and manipulated to suit the researcher’s needs. All of this is consistent with the positivist approach to experimental design and data analysis, and there has long been a desire amongst certain research fields to be considered more “scientific” within the academic community by adhering to the principles of positivism and quantitative analysis. As will be discussed in further detail throughout this paper, certain elements of the broad discipline of Intercultural Communication have followed a similar trend.
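
To make this reductive quality concrete, consider the following illustrative sketch (written in Python; the respondents, items and simple mean-scoring rule are invented assumptions of my own, not the scoring procedure of the CQS or any other published instrument):

# Illustrative sketch: how a pattern of Likert-scale self-reports collapses
# into a single number. Respondents, items and scoring rule are invented.
from statistics import mean

# Each respondent answers four 7-point Likert items (1 = strongly disagree).
responses = {
    "respondent_A": [6, 7, 5, 6],
    "respondent_B": [4, 4, 4, 4],  # a consistent mid-range responder
    "respondent_C": [7, 1, 6, 2],  # an ambivalent, high-variance responder
}
for person, answers in responses.items():
    score = mean(answers)  # the entire response pattern becomes one value
    print(f"{person}: score = {score:.2f}")

Respondents B and C receive identical scores (4.00) despite answering in entirely different ways; it is precisely this loss of nuance beneath the numerically elegant summary that concerns critics of the positivist approach.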

The result of this is that self-report questionnaires have now become commonplace within intercultural research. However, as a form of data collection, they have received considerable criticism. Academic disapproval usually takes the shape of concerns involving the questionable reliability and validity of these assessment tools. For example, Thomas (2007) highlights the difficulties in measuring cultures and the prevalence of inconsistent response styles around the world, while Spector (1994:385) states that researchers should be “sceptical” about results yielded from asking people to talk about themselves. Other critics of self-report questionnaires argue that they fail to adequately capture the complex nuances of intercultural issues (Deardorff 2006). In this paper, I will draw upon some of these concerns in an attempt to critically examine the concept of using self-report questionnaires as the sole form of data collection in intercultural research. If the use of self-report is as prevalent throughout a wide range of research areas as the above discussion would suggest, then surely questions need to be asked about the validity and suitability of this form of data collection. To illustrate these concerns, the Cultural Intelligence Scale (CQS)[1] developed by Ang et al (2007) will be critically examined.

While this paper is certainly not the first to raise questions about the validity of self-report questionnaires in intercultural research, it would appear that the growth of this method has continued unabated despite the various criticisms it has received. Recently, the CQS has been criticised by Schlagel & Sarstedt (2016) and Bucker, Furrer & Lin (2015), who respectively question the instrument’s convergent reliability and discriminant validity. However, neither of these two studies address the initial ontological concern surrounding the ability of self-report questionnaires and quantitative analysis to capture the intricacies of intercultural competencies. Therefore, instead of the statistics-heavy methods pursued by these researchers, this paper will adopt a more critically interpretive approach that will challenge the assumptions upon which self-report questionnaires and the CQS are based, as well as questioning the suitability of this form of data collection and data analysis for complex intercultural issues. I will also deconstruct a number of the CQS items for critical analysis that will highlight some of the philosophical and operational concerns raised by critics of this method.

2. The Cultural Intelligence Scale (CQS)

The Cultural Intelligence Scale (CQS) was developed during the mid-2000s by a team of scholars including P. Christopher Earley, Elaine Mosakowski, Soon Ang, David Livermore and Linn Van Dyne. Cultural Intelligence (CQ), the construct from which the CQS is derived, was initially devised in response to the supposed failures of previous, trait-based approaches to dealing with culture. Ang et al (2007:336) define CQ as “an individual’s capability to function and manage effectively in culturally diverse settings” and as “a multidimensional construct targeted at situations involving cross-cultural interactions arising from differences in race, ethnicity and nationality”. According to the developers, “rather than expecting individuals to master all the norms, values and practices of various cultures encountered, cultural intelligence helps leaders develop an overall perspective and repertoire” (Van Dyne, Ang & Livermore, 2010:132). Merely learning the traits attributed to certain nationalities is an ineffective (and potentially detrimental) way of approaching intercultural issues. Therefore, CQ’s attempt to offer a more universal approach that moved away from the rigid categories proposed in earlier research (e.g. Hofstede’s cultural dimensions) was well-received.

Initially, CQ research focused on the development of cross-cultural managers. In recent years, however, both the construct and the questionnaire have expanded their scope to become frameworks used in a wide range of research fields. According to the Cultural Intelligence Center’s website, over 100 peer-reviewed journal articles have been published using the CQS (Cultural Intelligence Center 2016). In addition to the broad field of global leadership, other research areas using the concept of CQ include cross-cultural training (e.g. Eisenberg et al 2013), Business School programme design (e.g. Ahn & Ettner 2013) and support for international students (e.g. Wang et al 2015).

3. Self-report questionnaires and positivist analysis: fundamental problems

From the discussion so far, it can be seen that the use of self-report questionnaires such as the CQS as a means of gathering data in intercultural research has become widespread in recent years. However, criticisms of self-report questionnaires have been raised since as early as 1927 (Paulhus & Vazire, 2007:224), while criticisms of CQ and the CQS are beginning to emerge in the intercultural literature (Schlagel & Sarstedt 2016; Bucker, Furrer & Lin 2015; Blasco, Feldt & Jakobsen 2012). Before critically examining the CQS in more detail, the following section will examine some of the philosophical and operational concerns surrounding the general suitability of self-report questionnaires (and with it, the reliance on statistics and quantitative methods) in intercultural research.

3.1 Philosophical considerations

As highlighted above, one of the contributing factors in explaining the increasing popularity of self-report questionnaires in intercultural research is the convenient link between this form of data and quantitative analysis. For some researchers, legitimacy in one’s claims can only be achieved through in-depth statistical analysis. This fundamentally positivist way of investigating the world has undoubted benefits in some fields of study, but its suitability for the social sciences and personality assessment has long been a source of contention.

For the purposes of this discussion, positivism here is taken to be the conceptual model and research philosophy that is most closely associated with quantitative research methods; that is, an “emphasis on measurement and the generation of law-like certainties” (Thomas 2007:215). From an ontological point of view, positivists argue that reality is clearly definable, discernible and unchanging. As Karasz & Singelis (2009:909) state, positivism “attempts to gauge reality as it really is” (emphasis in original), and in relation to culture, Holliday (2016:56) states that positivism views cultural groups as “solid, fixed, separate geographical blocks”. Furthermore, Zhu (2016:32) states that this approach views culture as a stable phenomenon, while Angouri (2016:83) discusses the tendency to cluster individuals into groups that supposedly share a range of characteristics. Therefore, cultural realities are often portrayed in nationalistic terms which can be seen as measurable, and thus quantifiable, concrete phenomena.

From these definitions, an image begins to emerge that highlights the potential unsuitability of positivism for dealing with complex cultural issues, and since as early as the mid-1990s, many intercultural researchers have moved away from this view of reality (Moon 2011:36). A main tenet of positivism is to view people dispassionately as objects of research, yet according to Thomas (2007:215), it is widely acknowledged that little, if any, research can be value-free. In addition, because we are dealing with multifaceted individuals functioning in complex social structures, there needs to be an understanding of the vast number of factors that play a role in intercultural communication. However, quantitative analyses of self-report scores are not always able to uncover the subtle nuances required for this level of understanding. To illustrate this point, Newby (2014:39) asserts that quantitative methods and positivism may be able to provide us with an accurate picture of how many people are currently living in poverty, but they do nothing to shed light on what it is like to actually live in poverty.

It is therefore argued by numerous researchers that cultural realities need to be understood in more subjective, interpretive terms. Holliday (2011:51) criticises the “school of categorists” that has been influential in intercultural research over the years. Researchers in this interpretive tradition argue that reality is socially constructed in a manner that is far more dynamic and complex than the fixed and stable understandings of reality attributed to positivism. Categorisation leads to what Dervin (2011:39) describes as “uncritical, systematic and reified” ways of dealing with culture, which can in turn lead to essentialism and stereotypes (see 3.2 below). Therefore, rather than attempting to objectively isolate and dissect culture, there is instead a need to understand culture from a more intersubjective viewpoint. One person’s interpretation of a cultural reality is highly likely to differ from that of another individual, and is further prone to change depending on particular contexts and the specific dynamics involved in any given interaction.

This leads us to an epistemological understanding of the complexity of our relationship with culture. How we understand culture depends on the individual, and any data produced when examining cultures thus requires interpretation (which in turn is likely to be influenced by a wide range of experiences and assumptions within the interpreter). A common epistemological question asks “What counts as data and findings?” (Zhu 2016:30). In the case of intercultural research, if we assume that the nature of cultural reality is dynamic, then any data from which we make intercultural claims must also be dynamic. Consequently, fixed and supposedly objective data sources (such as scores from a self-report questionnaire) are limiting when dealing with complex intercultural issues. Researchers adopting a postmodern perspective argue that cultural categories cannot be considered as fact (e.g. Gu & Schweisfurth 2006:87) and some claim that intercultural researchers need to abandon the notion that cultural realities are clearly definable and capable of categorisation (Holliday 2011).

3.2 Quantitative data: Reducing the complex to the basic

As intimated above, a further issue for consideration is the epistemological link between self-report data and quantitative research. Quantitative analysis is often seen as producing a sort of “bottom line” which provides us with a neat encapsulation of whatever is being investigated. It has the capacity to reduce a complex subject to its basic essence, usually in the form of a numerical value. From an epistemological point of view, self-report questionnaire designers are able to answer the question, “how do we know this to be the case?” by drawing on a range of statistical techniques that aim to prove validity and reliability for the instruments they use and the results they produce. However, ontologically speaking, the view that reality can be abbreviated to this most basic form is not entirely consistent with the dynamic nature that intercultural phenomena display. Those approaching culture from a positivist perspective may see cultures as “complete” groups of humans, yet Holliday (2016:57) challenges the notion that culture is a “solid, describable system” and states that it is not always easy to actually recognise or locate cultural groups (2011:194).

A consequence of this reductionist nature of quantitative data might be the tendency to veer towards essentialist generalisations. Various definitions of essentialism exist in the intercultural literature, but the majority of them centre on the notion that it is a negative phenomenon resulting from the basic assumption that all people belonging to a large societal group possess the same characteristics. In agreement, Voronov & Singer (2002:461) state:

“When a whole culture or society is pigeonholed in dichotomous categories... subtle differences and qualitative nuances that are more characteristic of that social entity may be glossed over.”

Essentialism, then, is the propensity to define people’s behaviours according to the (typically negative) stereotypes associated with a particular culture. Furthermore, it is argued by Angouri (2016:92) that there is a strong link between the use of categorisation and this form of stereotypical thinking. For example, it is often the case that researchers pool data gathered from self-report questionnaires in order to make comparisons based on nationality (Schlagel & Sarstedt 2016; Zhu 2016:31-32). By doing this, researchers fall into the simplistic trap of making generalisations based on the categories formed during the design of the questionnaire. This is an example of what Dervin (2011:44) describes as “boundary fetishism” and is considered to be a major problem in intercultural research. An illustration of this occurred during the 1980s, when business practitioners and academics developed a fervent interest in Japan due to its impressive economic performance. As a result, research comparing American and Japanese cultures flourished, using self-report questionnaires as a means of comparing the two nations (Rohlfer & Zhang 2016:51) and leading to essentialist assumptions about the two cultures. Another example of this categorisation is the grossly simplistic Individualist-Collectivist dichotomy, and as China’s influence continues to grow, crude comparisons between “East” and “West” are common throughout a range of current research fields.
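
The statistical shape of this problem can be seen in a small simulation (invented data; the means and standard deviations below are arbitrary assumptions, not findings from any actual study):

import random
from statistics import mean, stdev

random.seed(1)
# Two hypothetical "national" samples of questionnaire scores on a 7-point scale.
nation_a = [random.gauss(4.6, 1.2) for _ in range(500)]
nation_b = [random.gauss(4.3, 1.2) for _ in range(500)]
print(f"Nation A: mean {mean(nation_a):.2f}, sd {stdev(nation_a):.2f}")
print(f"Nation B: mean {mean(nation_b):.2f}, sd {stdev(nation_b):.2f}")
# The 0.3-point gap between national means is dwarfed by the spread within
# each "culture": most individuals in A overlap with most individuals in B,
# yet a between-nations comparison reports only the two means.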

3.3 The complexity of self-reporting

Another criticism often levelled at self-report questionnaires challenges the degree to which respondents actually know how they truly feel when asked to reflect on certain issues. As stated by Paulhus & Vazire (2007:224), “The psychological processes underlying an act of self-reporting are now understood to be exceedingly complex”. This is especially true when encountering the multifaceted nature of culture, because one’s cultural ideologies are often formed and maintained at the subconscious level. Therefore, expecting respondents to access these deep-seated feelings, especially if they are hastily filling out a 20-item questionnaire on their lunch break, for example, appears to be a demanding task. Paulhus & Vazire (2007:232) also argue that self-assessors are often unable to call on accurate feelings and memories when completing a questionnaire. In addition, Deardorff (2016:226) calls into question the actual value of achieving a high score on a cultural self-report questionnaire in the first place. Such questionnaires are often administered in some form of educational or training context where, it is argued, the aim should be to adopt a more modest approach and establish areas for improvement, not merely to impress the teacher or the manager.

Yet another problem with self-report questionnaires is the cultural differences that exist in response styles. In agreement with a wide range of literature, Lee et al (2002) state, somewhat simplistically, that respondents from “Asian cultures” tend to veer towards the mid-range on Likert-scale items. Hamamura, Heine & Paulhus (2008:933) claim that these variations in response styles “may reflect the fundamentally different ways that members of different cultures construe themselves and their social worlds”. Furthermore, Rohlfer & Zhang (2016:49) highlight the many translation problems that lead to construct validity concerns in cross-cultural questionnaires.
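
A simple simulation can illustrate how such a response style contaminates comparisons (the "midpoint pull" parameter below is a deliberately crude assumption of my own, not a model drawn from the literature cited):

import random
from statistics import mean

random.seed(2)
MIDPOINT = 4  # midpoint of a 7-point Likert scale

def respond(true_attitude, midpoint_pull):
    # Shrink the answer towards the scale midpoint by a cultural style factor.
    raw = midpoint_pull * MIDPOINT + (1 - midpoint_pull) * true_attitude
    return min(7, max(1, round(raw)))

true_attitudes = [random.uniform(4, 7) for _ in range(1000)]
group_extreme = [respond(t, midpoint_pull=0.0) for t in true_attitudes]
group_modest = [respond(t, midpoint_pull=0.5) for t in true_attitudes]
print(f"extreme-style group mean: {mean(group_extreme):.2f}")  # roughly 5.5
print(f"modest-style group mean:  {mean(group_modest):.2f}")   # roughly 4.8
# Identical underlying attitudes, yet the modest-style group appears to score
# markedly lower on whatever construct the questionnaire claims to measure.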

This is further related to the issue of how intercultural questionnaires are developed in the first place. As Karasz & Singelis (2009:914) state, the majority of the intercultural self-report instruments currently in circulation are devised by white, middle-class academics, thus raising questions about their suitability for other groups of people. A similar criticism was levelled at the highly influential work of Hofstede, whose “cultural dimensions” were primarily based on data gathered from self-report questionnaires. In her withering critique, Piller (2011:80-84) shows that Hofstede’s data, which has been used extensively by the academic community to develop a wide range of further questionnaires and cultural models, was gathered only from well-educated, mid- or upper-level employees of IBM from around the world. In light of the fact that Hofstede’s cultural labels have been applied to groups that could potentially total billions of people (e.g. so-called “Collectivist cultures”), it is important to highlight the non-representative nature of the data on which these models were founded. As a result of these criticisms, many intercultural theorists have now discredited the work of Hofstede (Mosakowski, Calic & Earley 2013).

4. The CQS: A Mismeasure of Intercultural Competence?

The above discussion has demonstrated a wide range of problems with self-report questionnaires, highlighting some of the philosophical and operational issues that have been raised concerning this form of data collection. This kind of questionnaire only provides us with “half a picture” (Deardorff 2016:210) and academics usually discourage the sole use of this form of data collection in intercultural studies (Deardorff 2006:252). The following section will examine these criticisms of self-report in relation to Ang et al’s (2007) CQS.

4.1 Problematic assumptions

In The Mismeasure of Man, Stephen Jay Gould (1996) provides an excoriating dismissal of how the concept of IQ was developed and extensively tested in the early part of the 20th century. According to the author, “the reification of intelligence as a single, measurable entity” (1996:189) was the culmination of desperate attempts amongst psychologists of that era to gain scientific credibility through what Gould terms “the prestige of numbers” (1996:117). This involves the assumption that a concept such as intelligence can be isolated and then measured. As can be seen, similar assumptions have been fundamental in the attempts to define and measure a so-called cultural intelligence. Jahoda (2012:291) draws attention to Hofstede’s oft-quoted definition of culture, which is highly relevant here:

“I treat culture as ‘the collective programming of the mind which distinguishes the members of one human group from another.’ This is not a complete definition... but it covers what I have been able to measure” (Hofstede, 1984, emphasis added by Jahoda, 2012:291).

We have already seen how Hofstede’s work has been discredited amongst Intercultural Communication researchers (including advocates of the CQS), yet it would appear that Ang et al (2007) are merely replicating Hofstede’s flawed ontological position. Instead of measuring cultural dimensions as Hofstede did, the CQ theorists are measuring what the authors term “a multidimensional construct targeted at situations involving cross-cultural interactions” (Ang et al 2007:336). Both make the mistake of assuming that these complex notions can be measured in the first place. Following this line of argument, the developers’ claims of multidimensionality can be further questioned due to the apparent attempts to reduce the highly complex to four supposedly distinct dimensions.

4.2 Development and Validation of the CQS

There is a strong emphasis throughout the Ang et al (2007) article on the cross-validation process adopted in the development of the CQS. According to the authors, “We collected additional data from Singapore and the USA to assess the generalizability of the CQS across samples, time and countries” (Ang et al 2007:344). However, a sample of just two nations is clearly not enough to adequately assess generalizability across countries[2]. Including participants from Singapore and the USA may create a convenient link between the overused cultural labels of “East” and “West”, and was perhaps an attempt by the developers to pre-empt the criticisms of Western-centrism directed towards earlier intercultural self-assessment tools. However, from the outset, the value of using countries as a variable for comparison is highly debatable due to the essentialist assumptions that it promotes (Phillips 2010:54). As discussed above, comparing self-report scores between nations is an overused, simplistic and often-criticised method of analysis in intercultural research (Deardorff 2016); therefore, using similar methods as an attempt to claim cross-validation appears to perpetuate this trend. In addition, the participants in both countries consisted solely of undergraduate university students (Ang et al 2007:346), which, according to Bucker, Furrer & Lin (2015:263), is a limiting approach due to the probable lack of experience with intercultural issues amongst this group. The fact that convenience sampling was used for this part of the validation process is also problematic. Using groups of people that have the potential to be socio-demographically similar (for example, being middle-class) is unlikely to achieve the desired invariance between the two groups, and as Johnstone Young (2016:281-282) argues, clear limitations need to be applied to research whenever convenience sampling is used.

One could go further and assert that this method is no different to that used by Hofstede when developing his widely criticised cultural dimensions. As highlighted above (see Piller 2011), one of the major criticisms of Hofstede’s influential work centres on the fact that his sweeping generalisations about cultures were established from data gathered from a severely limited sample. If the authors of the CQS wish to claim true validity, a more diverse sample should have been used during these developmental and validation stages.

In addition, Ang et al (2007) catalogue the extensive efforts taken to produce a valid questionnaire free from Western bias. According to the authors, this involved reviewing “the intelligence and intercultural competencies literatures” supplemented by “interviews with eight executives with global work experience” (Ang et al 2007:343), leading to the formulation of questionnaire items that were further reviewed by “three faculty and three international executives” (Ang et al 2007:344). While the efforts taken to ensure the instrument was not formed around Western-centric ideological and cultural assumptions are acknowledged, it is difficult to ascertain the background of the “international” faculty and executives that were influential during this important item-generation stage. Do these individuals, for example, view cultures as fixed or dynamic phenomena? How much critical cultural theory have they been exposed to? These factors are likely to have a profound influence on an individual’s perspectives on culture, and other than the authors’ vague claim that each individual has “significant cross-cultural experience” (Ang et al 2007:344), more information could have been provided.

After its initial cross-validation, the CQS was further tested against other self-report questionnaires such as the Big Five personality test and various other cultural adaptation tests. The respondents in this stage of the process were again US and Singaporean undergraduates (Ang et al 2007:347). However, using self-report questionnaires, all of which suffer from the validity problems highlighted above, to validate another self-report questionnaire appears to be a self-confirming exercise in circular logic that is unlikely to uncover significant differences. This is a major criticism of self-report questionnaires that was raised at least 25 years ago: according to Oppenheim (1992:206), “New scales are often correlated with older, well-known scales which, however, may themselves be of questionable validity”.
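
The circularity can be made explicit with another simulation (an assumed shared-bias model with invented data; this is a sketch of the logical problem, not a re-analysis of the CQS validation data):

import random

random.seed(3)

def pearson(xs, ys):
    # Pearson correlation coefficient, computed from scratch.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Each respondent carries a self-presentation bias that feeds into every
# self-report scale, plus actual behaviour that neither scale observes.
bias = [random.gauss(0, 1) for _ in range(500)]
scale_new = [b + random.gauss(0, 0.5) for b in bias]  # the "new" instrument
scale_old = [b + random.gauss(0, 0.5) for b in bias]  # an "established" instrument
behaviour = [random.gauss(0, 1) for _ in range(500)]  # unobserved by either scale
print(f"new scale vs old scale: r = {pearson(scale_new, scale_old):.2f}")  # high
print(f"new scale vs behaviour: r = {pearson(scale_new, behaviour):.2f}")  # near zero

Two instruments that share the same method biases will validate each other handsomely while revealing nothing about the world outside the questionnaire.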

The authors also introduced a Task Performance element to a later stage of the validation process, whereby participants were required to produce a peer-assessed business proposal and presentation (Ang et al 2007:354). This was followed by another study, the participants of which included “103 foreign professionals and their supervisors” (Ang et al 2007:358), which required various other self- and supervisor assessments related to task performance. This is a worthy attempt to move beyond a limited sample of university undergraduates completing self-report questionnaires, and it brings in more opportunities for subjective evaluation. However, the fact that the foreign professionals in question were all employed by the same company, and presumably belonged to similar socio-economic groups, echoes the sampling problems highlighted above. In addition, according to the authors, positive links between the CQS and the more interpretive Task Performance “received less empirical support” (Ang et al 2007:362), which shows that when the validation process moves away from relying on self-report questionnaires, any predictive correlations become less pronounced.

4.3 Self-awareness and previous experience

A major concern with the CQS involves whether or not the questionnaire contains items about which respondents are fully self-aware. As discussed above, the processes involved in self-reporting are highly complex, often requiring respondents to access deeply-ingrained values and opinions. In relation to the CQS, the first dimension to be measured is Metacognitive CQ. Examples of items from this section are as follows:

MC1 I am conscious of the cultural knowledge I use when interacting with people with different cultural backgrounds.
MC2 I adjust my cultural knowledge as I interact with people from a culture that is unfamiliar to me.

The criticism that self-report respondents do not know their true feelings on complex interpersonal issues appears to be especially relevant here. Are respondents fully aware of their metacognitive strategies when operating across cultures? If an individual was familiar with a range of Intercultural Communication literature, for example, or had been previously trained in critically reflexive thought, it is feasible that an awareness of these issues could exist. For the rest of the population, however, doubts can certainly be raised. Paulhus & Vazire (2007:232) claim that “much information is unavailable to the earnest self-assessor”, which only deepens these doubts.

Other examples of this complexity can be found in the Behavioural CQ section:

BEH2 I use pause and silence differently to suit different cross-cultural situations.
BEH4 I change my non-verbal behaviour when a cross-cultural situation requires it.

It is possible that respondents may never have considered these issues in detail prior to completing the questionnaire, or even that the respondent has never encountered a cross-cultural situation before. A wide range of CQ research is carried out using university students as respondents, and it is debatable how much experience they would actually have of these complex issues. Thus, in this situation, it is likely that respondents will offer idealistic or entirely fabricated responses. In addition, how conscious an individual is of his or her non-verbal behaviour is likely to vary considerably from person to person.

4.4 Cultural differences in interpretations

It is also a concern that respondents may have entirely different interpretations of what the questionnaire items actually mean. In a recent study, Schlagel & Sarstedt (2016) examined the CQS for measurement invariance and found a range of differences in interpretations of the questionnaire across countries. For example, several of the CQ dimensions did not demonstrate convergent reliability and internal consistency amongst French and Chinese respondents (Schlagel & Sarstedt, 2016:641). This construct validity concern can be further demonstrated by examining one of the CQ items in more detail:

MC3 I am conscious of the cultural knowledge I apply to cross-cultural interactions.

“Cultural knowledge” is a multifaceted concept in itself that has a wide range of potential interpretations (e.g. ranging from a knowledge of languages, cultural norms, traditions, festivals, rituals, through to an understanding of eating habits, fashions or possibly even driving behaviours). An individual’s understanding of the term will be further influenced by their own previous experiences, attitudes and interactions. Holliday (2011:47) uses the term “movable groups and realities” when discussing the complexity of cultural definitions, highlighting the ambiguous nature of the issues under consideration. Moreover, how one “applies” this knowledge is an equally indistinct concept that is likely to vary not only in diverse cross-cultural contexts but also within same-culture groups.

Further concerns relating to the ambiguous nature of some of the CQS items are particularly pertinent for respondents who have been asked to complete the questionnaire in a language that is not their own. If the respondent is completing a version of the questionnaire that has not undergone a robust translation process, some difficulty in understanding the true meaning of such items would undoubtedly be encountered. While Ang et al (2007) include a Chinese version of the CQS in their study, and other translated versions are included in Schlagel & Sarstedt (2016), careful attention needs to be paid to the translation processes that are involved (Rohlfer & Zhang 2016:49).

4.5 Socially Desirable Responding (SDR)

As highlighted in section 3.3, there are significant differences in how people respond to self-report questionnaires, with some cultural groups generally adopting a humbler approach while others have been shown to provide more extreme responses (Paulhus & Vazire 2007:231). For the CQS, it is not difficult to envisage both intercultural and intracultural variances in how some of the questionnaire items are processed by different groups of respondents. For example, one of the Motivation items asks respondents to consider the following:

MOT4 I enjoy living in cultures that are unfamiliar to me.

A long-standing concern raised by academics regarding SDR lies in the inconsistent response styles that result when questionnaire items are framed positively (known as “enhancement”) or negatively (referred to as “denial”). As Paulhus (1984:598) states, this involves the tendency to associate socially desirable characteristics with the self while dissociating oneself from undesirable attributes. To illustrate, if the above item were instead worded as, “I do not enjoy living in cultures that are unfamiliar to me”, then the item becomes open to other potential interpretations. When compared with the idea of enjoying living in different cultures, actively stating that one does not enjoy this kind of lifestyle evokes a more negative image, suggesting the person is perhaps closed-minded or xenophobic. Therefore, it is unlikely that certain groups of people would associate themselves with these negative characteristics. Other items in the Motivation section of the CQS are equally susceptible to this problem:

MOT1 I enjoy interacting with people from different cultures.
MOT2 I am confident that I can socialize with locals in a culture that is unfamiliar to me.

Importantly, it is the variance in how enhancement or denial questionnaire items are interpreted across cultures that is concerning in terms of the reliability of responses. We know that there is inconsistency in how cultural groups and the people within them deliberate over these forms of questionnaire items (Lalwani, Shavitt & Johnson 2006); therefore, depending on the cultural background of the respondents, an accurate and consistent image of these issues is unlikely to be achieved.
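
For reference, the standard survey-design remedy is mixed keying with reverse-scoring, sketched below (ordinary scoring practice rather than a CQS procedure; note that the Motivation items quoted above are all positively keyed, which is precisely the point at issue):

def reverse_score(response, scale_min=1, scale_max=7):
    # Flip a Likert response so that agreeing with a "denial" item scores low.
    return scale_max + scale_min - response

# "I do not enjoy living in cultures that are unfamiliar to me" answered with
# 2 (disagree) becomes the equivalent of answering 6 to the positive wording.
print(reverse_score(2))  # -> 6
# When every item is positively keyed, agreement with the item and the pull of
# social desirability point the same way, so trait and bias cannot be
# separated in the final score.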

From the above discussion, a number of problems with the CQS have been demonstrated. As highlighted in previous sections, other researchers have arrived at similar conclusions. For example, Schlagel & Sarstedt (2016) claim that the CQS suffers from a lack of measurement invariance for some of the dimension sets (in particular, Cognitive CQ) and specific items, which indicates that the same construct is being measured differently across different groups. Bucker, Furrer & Lin (2015) found that the questionnaire demonstrated low discriminant validity, i.e. that supposedly different questionnaire items are too closely related to each other. From the analysis of a selection of CQS items above, it is not difficult to see why these issues with validity and reliability have been demonstrated in previous research.
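
What a failure of internal consistency looks like in practice can be sketched as follows (Cronbach's alpha is the standard statistic for this check; the two simulated samples below are invented for illustration and do not reproduce Schlagel & Sarstedt's data):

import random

random.seed(4)

def cronbach_alpha(items):
    # items: one list of respondent scores per questionnaire item.
    k, n = len(items), len(items[0])
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(var(item) for item in items) / var(totals))

# Sample 1: four items all driven by one shared disposition (a coherent dimension).
disposition = [random.uniform(3, 7) for _ in range(50)]
sample_1 = [[min(7, max(1, round(d + random.gauss(0, 0.7)))) for d in disposition]
            for _ in range(4)]
# Sample 2: responses to the "same" four items are essentially unrelated.
sample_2 = [[random.randint(1, 7) for _ in range(50)] for _ in range(4)]
print(f"alpha, sample 1: {cronbach_alpha(sample_1):.2f}")  # high: items cohere
print(f"alpha, sample 2: {cronbach_alpha(sample_2):.2f}")  # near zero: they do not

If the same items yield the first pattern in one country and the second in another, the instrument is not measuring the same construct across groups.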

Other general criticisms of the CQS include its potentially limited scope for researchers. As shown above, the complexity of some of the questionnaire’s items makes it a challenging prospect for even those who may be experienced in intercultural matters. However, what about other groups in society? Would high-school children be able to complete the CQS, for example? How about low-level employees of an international company? Perhaps those two groups are not the intended participants for CQ research, but there is an argument that they should be. To concur with Wood & St. Peters (2014:558-559), CQ should not only be the domain of high-level, global business leaders; yet, a disproportionate amount of research is dedicated to this group.

Therefore, a number of questions can be raised when looking at this self-report assessment tool. Jobe (2009:25), in writing about the apparent benefits of self-report questionnaires, states the following:

“for a respondent to provide accurate information, the respondent must, at a minimum, comprehend the question being asked, recall information from memory, make decisions about the accuracy of the information recalled, and format an answer.”

From the above analysis, there is an argument that the CQS does not fully meet these criteria. Therefore, in agreement with Bucker, Furrer & Lin (2015:260), we can arrive at the conclusion that ‘despite the growing number of studies, the development of a valid and reliable measure of CQ remains a work in progress’.

5. Conclusions

Perhaps it is unfair to single out the CQS for criticism when, as mentioned at the start of this paper, there are over 100 cultural self-report instruments currently available in the Intercultural Communication literature (Deardorff 2016:211). It would also be unfair to accuse Soon Ang and the CQ theorists of understanding this field of study through solely positivist lenses, as it is rarely the case that researchers adopt a purely positivist epistemology (Brinkmann & Kvale 2015:68). However, if any of the many other popular intercultural self-assessment tools had been analysed, it is likely that similar fundamental concerns would have emerged. That is, any attempt to encapsulate the highly complex, multidimensional facets of intercultural communication in a self-report questionnaire is likely to produce only limited results. As discussed above, this method of data collection suffers from numerous validity and reliability problems. Respondents are unlikely to be fully self-aware of the issues under consideration and may have interpreted these issues through cultural lenses that vary considerably between and within cultural groups. In addition, proponents of self-report have failed to adequately address the crucial ontological and epistemological concern that it is difficult, and perhaps misguided, to attempt to reduce complex cultural realities to an overly simplistic number.

One question for the field of Intercultural Communication to consider is whether adhering to concepts such as cultural intelligence (or any of the other terms commonly used throughout the field such as global mindset or even the ubiquitous intercultural competence) is actually beneficial in aiding the improvement of what they are attempting to define and measure. As claimed in The Mismeasure of Man (Gould 1996), any efforts at reifying multidimensional constructs are likely to result in damaging consequences (e.g. oversimplification and categorisation). Is the reification of a so-called ‘Cultural Intelligence’ open to similar criticisms? This approach to defining and isolating a particular form of intelligence assumes that there is a clearly discernible construct that can be identified, measured and then apparently improved. However, a wide range of researchers would argue that this view is incompatible with the dynamic and complex realities involved in Intercultural Communication. Furthermore, throughout his treatise, Gould (1996) points to the “allure of numbers” that caused scientists in the past to cling, often desperately, to quantitative methods in order to validate their theories of intelligence, no matter how outlandish they were (e.g. the use of skull size to prove that white, northern-Europeans were intellectually superior to any other race). Are intercultural communication researchers who rely on positivism following a similar path? To be sure, I am not claiming that the positivist intercultural researcher shares the same racist ideologies as the 19th century craniometrist; rather, it is my assertion that, by relying on numbers and quantitative methods – by reducing the complex to its most basic form – researchers overlook a vast array of subtle differences that exist when people from different cultures interact.

In drawing this discussion to a close, it would be remiss not to mention the Paradigm Wars that have been waged for a number of decades over the Quantitative/Qualitative debate. Bryman (2008:13) questions whether or not there has actually been “a cessation of hostilities” between the two opposing approaches to scientific enquiry, and for the field of Intercultural Communication, the divide appears to be particularly strong. While some intercultural researchers remain firmly on the paradigmatic fence (e.g. Hu & Fan 2011), there is evidence of a shift in approaches away from relying solely on quantitative methods. Holliday (2016:58-60) discusses the need to abandon experimental methods when analysing culture in favour of a more interpretive constructivist approach. In order to allow researchers to truly shed their own ideologies, the author argues that techniques such as “thick description”, “making the familiar strange”, or “full blown ethnographies” represent the most effective forms of data collection for intercultural research. One could draw on the many ethnographic methods (such as participant observations and unstructured interviews) available to qualitative researchers adopting a more post-modernist approach that moves beyond misleading categorisations and generalisations. In addition, the highly respected Deardorff (2016:226) argues the need for more performance-based, learner-centred methods in attempting to understand more about the improvement of intercultural communication.

While it is acknowledged that advocates of quantitative intercultural research will in all likelihood remain unconvinced by the arguments in this paper, it is hoped that awareness has at least been raised concerning some of the methodological and philosophical problems resulting from relying solely on these methods in intercultural research. It is also worth acknowledging that research paradigms, and how researchers interpret them, change over time. It is thus possible that what I am advocating in this paper will be challenged and considered out of date in the not-too-distant future. Nevertheless, current research would suggest that an approach which draws upon the wide range of qualitative techniques available to the intercultural researcher is one that is likely to succeed in gaining a richer understanding of the complex, multidimensional, and intersubjective nature of what is being investigated. Perhaps more importantly, it is how the researcher engages with these methods and the data they produce that will influence the direction of future intercultural research. In attempting to move beyond the preoccupation with measurement, there needs to be more critical awareness of the Western-centric ideologies and researcher-bias that have at times been prevalent throughout the field.

Finally, in attempting to end this discussion on a more positive note, it would be duplicitous of me if I did not mention that I have actually used the CQS (and other self-report tools) with reasonable success in the intercultural communication classroom. These questionnaires often raise some very interesting discussion points that students can break down and analyse in great detail. For instance, one of the items on the CQS states ‘I am sure I can deal with the stresses of adjusting to a culture that is new to me’. I have used this statement in discussions that require students to predict, before they move to another culture, what some of the causes of stress might be so that they can develop strategies to deal with them in advance. Students appear to enjoy reflecting on these issues as well as the fact that they can receive a ‘score’ at the end of the questionnaire. This can also be used as an opportunity to help students develop critical thinking skills. By highlighting some of the reliability and validity issues that have been discussed in this paper, intercultural communication students can begin to understand more about the fluid and dynamic nature that characterises this field of study.

References

Ahn, M. J., & Ettner, L. (2013). Cultural intelligence (CQ) in MBA curricula. Multicultural Education & Technology Journal, 7(1), 4–16. https://doi.org/10.1108/17504971311312591

Ang, S., Van Dyne, L. & Koh, C. (2006). Personality Correlates of the Four-Factor Model of Cultural Intelligence. Group & Organization Management, 31(1), 100–123. https://doi.org/10.1177/1059601105275267

Ang, S., Van Dyne, L., Koh, C., Ng, K. Y., Templer, K. J., Tay, C., & Chandrasekar, N. A. (2007). Cultural Intelligence: Its Measurement and Effects on Cultural Judgment and Decision Making, Cultural Adaptation and Task Performance. Management and Organization Review, 3(3), 335–371. https://doi.org/10.1111/j.1740-8784.2007.00082.x

Angouri, J. (2016). 'Studying identity' in H. Zhu (ed.) Research Methods in Intercultural Communication: A Practical Guide. Hoboken, New Jersey: John Wiley & Sons Inc.

Blasco, M., Feldt, L. E., & Jakobsen, M. (2012). If only cultural chameleons could fly too: A critical discussion of the concept of cultural intelligence. International Journal of Cross Cultural Management, 12(2), 229–245. https://doi.org/10.1177/1470595812439872

Baldwin, W. (2009). ‘Information no one else knows: The value of self-report’ in Stone et al (eds.) The Science of Self-Report: Implications for Research and Practice. New Jersey: Taylor & Francis

Brinkmann, S. & Kvale, S. (2015). Interviews: Learning the Craft of Qualitative Interviewing. Thousand Oaks: Sage

Bucker, J., Furrer, O., & Lin, Y. (2015). Measuring cultural intelligence (CQ): A new test of the CQ scale. International Journal of Cross Cultural Management, 15(3), 259–284. https://doi.org/10.1177/1470595815606741

Bryman, A. (2004). Qualitative research on leadership: A critical but appreciative review. Leadership Quarterly, 15(6), 729–769. https://doi.org/10.1016/j.leaqua.2004.09.007

Bryman, A. (2008). 'The end of the Paradigm Wars?' in Alasuutari, P., Bickman, L. & Brannen, J. (eds.) The SAGE Handbook of Social Research Methods London: SAGE Publications

Cultural Intelligence Center (2016). 'Cultural Intelligence Research' [Online] Available at: https://culturalcq.com/research/ (Accessed: 16 November, 2016)

Deardorff, D. K. (2006). Identification and Assessment of Intercultural Competence as a Student Outcome of Internationalization. Journal of Studies in International Education, 10(3), 241–266. https://doi.org/10.1177/1028315306287002

Deardorff, D.K. (2016). 'How to assess intercultural competence' in H. Zhu (ed.) Research Methods in Intercultural Communication: A Practical Guide. Hoboken, New Jersey: John Wiley & Sons Inc.

Dervin, F. (2011). A plea for change in research on intercultural discourses: A “liquid” approach to the study of the acculturation of Chinese students. Journal of Multicultural Discourses, 6(1), 37–52. https://doi.org/10.1080/17447143.2010.532218

Eisenberg, J., Lee, H. J., Brück, F., Brenner, B., Claes, M. T., Mironski, J., & Bell, R. (2013). Can business schools make students culturally competent? Effects of cross-cultural management courses on cultural intelligence. Academy of Management Learning and Education, 12(4), 603–621. https://doi.org/10.5465/amle.2012.0022

Gould, S.J. (1996). The Mismeasure of Man New York: W.W. Norton & Company

Gu, Q., & Schweisfurth, M. (2006). Who Adapts? Beyond Cultural Models of “the” Chinese Learner. Language, Culture and Curriculum, 19(1), 74–89. https://doi.org/10.1080/07908310608668755

Hamamura, T., Heine, S. J., & Paulhus, D. L. (2008). Cultural differences in response styles: The role of dialectical thinking. Personality and Individual Differences, 44, 932–942. https://doi.org/10.1016/j.paid.2007.10.034

Holliday, A. (2011) Intercultural Communication and Ideology London: Sage Publications

Holliday, A. (2016). 'Studying Culture' in H. Zhu (ed.) Research Methods in Intercultural Communication: A Practical Guide. Hoboken, New Jersey: John Wiley & Sons Inc.

Hu, Y. & Fan, W. (2011). An exploratory study on intercultural communication research contents and methods: A survey based on the international and domestic journal papers published from 2001 to 2005. International Journal of Intercultural Relations, 35(5), pp.554–566. Available at: http://dx.doi.org/10.1016/j.ijintrel.2010.12.004.

Jahoda, G. (2012). Critical reflections on some recent definitions of “culture.” Culture & Psychology, 18(3), 289–303. https://doi.org/10.1177/1354067X12446229

Jobe, J.B. (2009). 'Cognitive processes in self-report' in Stone et al (eds.) The Science of Self-Report: Implications for Research and Practice. New Jersey: Taylor & Francis

Johnstone Young, T. (2016). 'Questionnaires and Surveys' in H. Zhu (ed.) Research Methods in Intercultural Communication: A Practical Guide. Hoboken, New Jersey: John Wiley & Sons Inc.

Karasz, A., & Singelis, T. M. (2009). Qualitative and Mixed Methods Research in Cross-Cultural Psychology. Journal of Cross-Cultural Psychology, 40(6), 909–916. https://doi.org/10.1177/0022022109349172

Lalwani, A. K., Shavitt, S., & Johnson, T. (2006). What is the relation between cultural orientation and socially desirable responding? Journal of Personality and Social Psychology, 90(1), 165–178. https://doi.org/10.1037/0022-3514.90.1.165

Lee, J. W., Jones, P. S., Mineyama, Y., & Zhang, X. E. (2002). Cultural Differences in Responses to a Likert Scale. Research in Nursing & Health, 25, 295–306. https://doi.org/10.1002/nur.10041

Matsumoto, D., & Hwang, H. C. (2013). Assessing Cross-Cultural Competence: A Review of Available Tests. Journal of Cross-Cultural Psychology, 44(6), 849–873. https://doi.org/10.1177/0022022113492891

Newby, P. (2014). Research Methods for Education. London: Routledge

Moon, D. (2011). 'Critical Reflections on Culture and Critical Intercultural Communication' in Nakayama, T. K., & Halualani, R. T. (eds.) The Handbook of Critical Intercultural Communication. Chichester: John Wiley & Sons

Mosakowski, E., Calic, G., & Earley, P. C. (2013). Cultures as learning laboratories: What makes some more effective than others. Academy of Management Learning and Education, 12(3), 512–526. https://doi.org/10.5465/amle.2013.0149

Oppenheim, A.N. (1992). Questionnaire Design, Interviewing and Attitude Measurement London: Continuum

Paulhus, D. L. (1984). Two-component models of socially desirable responding. Journal of Personality and Social Psychology, 46(3), 598–609. https://doi.org/10.1037/0022-3514.46.3.598

Paulhus, D.L., & Vazire, S. (2007). 'The Self-Report Method' in Handbook of Research Methods in Personality Psychology (pp. 224–239). New York: The Guilford Press

Phillips, A. (2010). What’s wrong with Essentialism? Distinktion: Scandinavian Journal of Social Theory, 11(1), pp.47–60. Available at: http://www.tandfonline.com/doi/abs/10.1080/1600910X.2010.9672755.

Piller, I. (2011). Intercultural Communication: A critical introduction Edinburgh: Edinburgh University Press

Rohlfer, S., & Zhang, Y. (2016). Culture studies in international business: paradigmatic shifts. European Business Review, 28(1), 39–62. https://doi.org/10.1108/09564230910978511

Schlagel, C., & Sarstedt, M. (2016). Assessing the measurement invariance of the four dimensional cultural intelligence scale across countries: A composite model approach. European Management Journal, 34(6), 633–649. https://doi.org/10.1016/j.emj.2016.06.002

Spector, P. (1994). Using Self-Report Questionnaires in OB Research: A Comment on the Use of a Controversial Method. Journal of Organizational Behavior, 15, 385–392. http://www.jstor.org/stable/248821

Thomas, A. (2007). Self-report data in cross-cultural research: issues of construct validity in questionnaires for quantitative research in educational leadership. International Journal of Leadership in Education, 10(2), 211–226. https://doi.org/10.1080/13603120601097488

Van Dyne, L., Ang, S., & Livermore, D. (2010). Cultural intelligence: A pathway for leading in a rapidly globalizing world. In K.M. Hannum, B. McFeeters, & L. Booysen (eds.), Leading across differences: Cases and perspectives (pp. 131–138). San Francisco, CA: Pfeiffer.

Voronov, M. & Singer, J.A. (2002). The Myth of Individualism-Collectivism: A Critical Review. The Journal of Social Psychology, 142(4), 461–480.

Wang, K., Heppner, P., Wang, L., & Zhu, F. (2015). Cultural intelligence trajectories in new international students: Implications for the development of cross-cultural competence. International Perspectives in Psychology: Research, Practice, Consultation, 4(1), 51–65. http://dx.doi.org/10.1037/ipp0000027

Wood, E. D., & St. Peters, H. Y. Z. (2014). Short-term cross-cultural study tours: impact on cultural intelligence. The International Journal of Human Resource Management, 25(4), 558–570. https://doi.org/10.1080/09585192.2013.796315

Zhu, H. (2016). 'Identifying Research Paradigms' in H. Zhu (ed.) Research Methods in Intercultural Communication: A Practical Guide. Hoboken, New Jersey: John Wiley & Sons Inc.

About the Author

Stewart Nield is an Intercultural Communication Instructor working at Xi'an Jiaotong-Liverpool University, China. He holds an MA in Applied Linguistics and an MBA that focused on Intercultural Leadership Skills development. He is currently studying for his doctorate at the University of Bath, UK.

Author's Address

Stewart Nield
Xi'an Jiaotong-Liverpool University
111 Ren Ai Rd
Suzhou
Jiangsu 215123
China
E-mail: stewart.nield@xjtlu.edu.cn
Tel. +8651288161389



[1] In the interests of full disclosure, I should highlight that I recently used the CQS as part of an unpublished MBA dissertation. While the results of that particular study were interesting, I was anecdotally aware of the limitations of this tool in providing a supposedly holistic overview of the respondents’ so-called ‘Cultural Intelligence’. My initial scepticism about the questionnaire therefore prompted me to write this paper.

[2] In fairness to the authors, this concern is acknowledged, rather briefly, in the limitations section of Ang et al (2007:365).