Qualitative Data Analysis I: Text Analysis

a summary based on Chapter 17 of H. Russell Bernard’s Research Methods in Anthropology: Qualitative and Quantitative Approaches

Cindy Cruz (86-16518), PhD Media Studies

about.me/cindycruzcabrera

Report for Anthro 297: Seminar in Research Design and Methods

Dr. Francisco Datar, Department of Anthropology, College of Social Sciences and Philosophy, University of the Philippines Diliman

What are considered “texts”?

• Naturally occurring texts that preserve recoverable information about human thought and behavior: books, magazines, newspapers, diaries, property transactions, recipes, correspondence, song lyrics, and billboards

• Artifacts (clay pots, houses, toys, clothing, computers, furniture)

• Images (family photo albums and videos, slasher films, sitcoms)

• Behaviors (making beer, laying out a garden, dressing the dead for funerals)

• Events (church ceremonies, homecoming games, blind dates)

• Whole corpora of texts are becoming available in machine-readable form, either via the Internet or on CD-ROM

Native Ethnography: Text Collection in Anthropology

• Collaboration of anthropologists and informants in collecting texts and writing the ethnography of indigenous cultures of the world in the native languages of those cultures

• Informants trained in anthropology collecting texts and writing the ethnography of indigenous cultures of the world in a major literary language like English, Japanese, French, Spanish, etc.

• Autobiographies by speakers of nonliterary languages who were not trained as ethnographers but who wrote in a major literary language

• The ‘‘as told to’’ autobiographies

Some Traditions of Text Analysis

Hermeneutics/Interpretive Analysis

• Derives from biblical hermeneutics, also called biblical exegesis - the continual interpretation of the words of biblical texts in order to understand their original meaning and their directives for living in the present

• Close interpretation of text in the search for meanings and their interconnection in the expression of culture

• Requires deep involvement with the culture, including an intimate familiarity with the language, so that the symbolic referents emerge during the study of those expressions

• Connections among symbols can be seen only with the knowledge of what the symbols are and what they are supposed to mean

Narrative Analysis

• Goal: to discover regularities in how people tell stories or give speeches, achieved mostly through the analysis of written text

• Involves ‘‘working back and forth between content and form, between organization at the level of the whole narrative and at the level of the details of lines within a single verse or even words within a line’’

• Gradually, an analysis emerges that reflects the analyst’s understanding of the larger narrative tradition and of the particular narrator.

Performance Analysis

• Goal: to analyze a text and figure out how it would have been narrated by performers in ancient times, by studying the stylized oral narratives of modern speakers and systematically comparing ancient texts for recurrent sound patterns that signify variations in meaning

• This depends on having the text marked for features like voice quality, loudness, pausing, intonation, stress, and nonphonemic vowel length

• Translations are also considered texts in their own right and can be analyzed just as original texts can.

Schema Analysis

• Schemas comprise rules – a grammar – that help people make sense of so much information.

• Schemas, or scripts, enable culturally skilled people to fill in the details of a story.

• Schema analysis combines elements of anthropological linguistics and cognitive psychology in the examination of text

• Analyzing narratives helps in learning about cultural schemas (schemas shared by the people in a society)

Discourse Analysis

• Involves the close study of naturally occurring interactions

• Discursive practices—all the little things that make our utterances uniquely our own—are seen as concrete manifestations both of the culture we share with others and of our very selves

• In traditional ethnography, discourse is distilled into a description of a culture by observing—and describing—ordinary discourse itself, since culture emerges from the constant interaction and negotiation between people.

• Formal discourse analysis, in fact, involves taping interactions and careful coding and interpretation.

• Covers conversation analysis, which is the study of how people take turns in ordinary discourse—who talks first (and next, and next), who interrupts, who waits for a turn—as governed by flexible, negotiable rules of turn-taking employed by participants in the conversation.

Grounded Theory

• A set of techniques for identifying categories and concepts that emerge from text; and linking the concepts into substantive and formal theories.

• Mechanics:

– Produce transcripts of interviews and read through a small sample of text.

– Identify potential analytic categories (potential themes) that arise.

– As the categories emerge, pull all the data from those categories together and compare them.

– Think about how categories are linked together.

– Use the relations among categories to build theoretical models, constantly checking the models against the data—particularly against negative cases.

– Present the results of the analysis using exemplars (quotes from interviews) that illuminate the theory.

• The key to making all this work is memoing: keeping running notes about the coding and about potential hypotheses and new directions for the research throughout the grounded-theory process.

• The heart of grounded theory is identifying themes in texts and coding the texts for the presence or absence of those themes.

• Grounded-theory research is mostly based on inductive or “open” coding (becoming grounded in the data and allowing understanding to emerge from close study of the texts), as opposed to deductive coding (beginning with a hypothesis before starting to code).
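The core move named above—coding texts for the presence or absence of themes—can be sketched in a few lines. This is a minimal illustration, not Bernard’s procedure: the theme names and keyword lists are hypothetical, and real open coding is interpretive rather than keyword-driven.

```python
# A minimal sketch of presence/absence theme coding on interview segments.
# THEMES and its keyword sets are invented for illustration.
THEMES = {
    "cost": {"price", "expensive", "afford"},
    "trust": {"trust", "reliable", "honest"},
}

def code_segment(segment: str) -> dict:
    """Return a presence/absence (1/0) code for each theme in one text segment."""
    words = set(segment.lower().split())
    return {theme: int(bool(words & keywords)) for theme, keywords in THEMES.items()}

segments = [
    "I could not afford the clinic, the price was too high",
    "the staff were honest and I trust them",
]
codes = [code_segment(s) for s in segments]
```

In practice a researcher would assign codes by reading, not by keyword match; the point is only that each segment ends up with a theme-by-theme code record that later analysis can compare across segments.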

Building Conceptual Models by Memoing

• Once you have a set of themes coded in a set of texts, the next step is to identify how themes are linked to each other in a theoretical model

• Memoing is a widely used method for recording relations among themes.

• In memoing, you continually write down your thoughts about what you’re reading. These thoughts become information on which to develop theory.

• Memoing is taking ‘‘field notes’’ on observations about texts.

Three kinds of memos

• Code notes - describe the concepts that are being discovered in ‘‘the discovery of grounded theory.’’

• Theory notes – attempt to summarize one’s ideas about what’s going on in the text.

• Operational notes - about practical matters.
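One way to keep the three memo kinds usable is a single running log tagged by kind. This is a sketch under assumptions: the `Memo` structure and the example notes are invented, not from the source.

```python
# A sketch of one running memo log covering code, theory, and operational notes.
# The Memo structure and all example notes are illustrative, not from the source.
from dataclasses import dataclass

MEMO_KINDS = {"code", "theory", "operational"}

@dataclass
class Memo:
    kind: str   # "code", "theory", or "operational"
    text: str

    def __post_init__(self):
        # Reject memo kinds outside the three named in the text.
        if self.kind not in MEMO_KINDS:
            raise ValueError(f"unknown memo kind: {self.kind}")

log = [
    Memo("code", "Respondents keep pairing 'cost' with 'distance to clinic'."),
    Memo("theory", "Access may be a joint function of cost and travel time."),
    Memo("operational", "Re-interview respondent 7; tape 3 was inaudible."),
]

# Pull out just the theory notes when drafting the emerging model.
theory_notes = [m.text for m in log if m.kind == "theory"]
```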

Steps for Displaying Concepts and Building Models

• Code data for general topics used to guide interviews.

• Use these codes to search for and retrieve examples of text related to various interview topics

• Look at how substantive categories are related.

• Identify major categories.

• Link/define quotes with a substantive theme to ground the theory in text.

• Confirm the validity of a model by testing it on an independent sample of data.

Using Exemplar Quotes

• One of the most important methods in grounded-theory text analysis is the presentation of direct quotes from respondents— quotes that lead the reader to understand quickly what it took you months or years to figure out.

• Choose segments of text (verbatim quotes from respondents) as exemplars of concepts and theories or as exemplars of exceptions to your theories (negative cases).

• Choose the exemplars very carefully because your choices constitute your analysis, as far as the reader is concerned

Content Analysis

• While grounded theory is concerned with the discovery of hypotheses from texts and the building of explanatory models from the same or subsequently collected texts, content analysis is concerned with testing hypotheses from the start—usually, but not always, quantitatively.

• This requires:

– Starting with a theory that you want to test

– Creating a set of codes for variables in the theory

– Applying those codes systematically to a set of texts

– Testing the reliability of coders when more than one applies the codes to a set of texts

– Creating a unit-of-analysis-by-variable matrix from the texts and codes

– Analyzing that matrix statistically

• ‘‘Texts’’ don’t have to be made of words for content analysis.
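The unit-of-analysis-by-variable matrix named in the steps above can be sketched concretely. The variable names and code values below are hypothetical; in a real study they would come from the codebook and the coded texts.

```python
# A sketch of the unit-of-analysis-by-variable matrix: one row per coded unit
# (e.g., a news story), one column per variable. All names and codes are invented.
variables = ["mentions_crime", "front_page", "has_photo"]

coded_units = [  # one record of codes per analyzed text
    {"mentions_crime": 1, "front_page": 0, "has_photo": 1},
    {"mentions_crime": 0, "front_page": 1, "has_photo": 1},
    {"mentions_crime": 1, "front_page": 1, "has_photo": 0},
]

# Build the rectangular matrix from the coded records.
matrix = [[unit[v] for v in variables] for unit in coded_units]

# Statistical analysis can start from simple column proportions:
# the share of units coded 1 on each variable.
proportions = {
    v: sum(row[i] for row in matrix) / len(matrix)
    for i, v in enumerate(variables)
}
```

Once the data are in this rectangular form, any standard statistical routine (cross-tabulation, correlation, regression) can be applied to them.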

Sampling in Content Analysis

• The two components to sampling in content analysis are – identifying the corpus of texts – identifying the units of analysis within the texts.

• Text analysis, particularly nonquantitative analysis, is often based on purposive sampling.

• Nonquantitative studies in content analysis may also be based on extreme or deviant cases, cases that illustrate maximum variety on variables, cases that are somehow typical of a phenomenon, or cases that confirm or disconfirm a hypothesis.

• Once a sample of texts is established, the next step is to identify the basic, nonoverlapping units of analysis. This is called unitizing or segmenting.
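Unitizing can be mechanical when the unit is structural, such as the paragraph. The sketch below splits a text into nonoverlapping paragraph units on blank lines; the sample text is made up, and many studies instead unitize by sentence, utterance, or theme.

```python
# A sketch of unitizing: splitting a sampled text into nonoverlapping
# paragraph units (blank-line separated). The sample text is invented.
import re

def unitize(text: str) -> list[str]:
    """Split text into paragraph units, dropping empty fragments."""
    units = re.split(r"\n\s*\n", text.strip())
    return [u.strip() for u in units if u.strip()]

sample = "First paragraph about planting.\n\nSecond paragraph about harvest.\n\n\nThird."
units = unitize(sample)
```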

Coding in Content Analysis

• With a set of texts in hand, the next steps are to develop a codebook and actually code the text.

• Formulate and test specific hypotheses about the texts.

• Test coding to demonstrate that resource items/categories are exhaustive and mutually exclusive.

• Measure for intercoder reliability to see whether the constructs being investigated are shared—whether multiple coders reckon that the same constructs apply to the same chunks of text
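One widely used intercoder-reliability statistic is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. The sketch below assumes two coders applying the same binary theme code to the same segments; the example code lists are invented.

```python
# A sketch of Cohen's kappa for two coders who assigned the same binary
# theme code to the same eight segments. The example codes are hypothetical.
from collections import Counter

def cohens_kappa(coder_a: list, coder_b: list) -> float:
    n = len(coder_a)
    # Observed agreement: share of segments where both coders match.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement: chance that both coders pick the same category,
    # given each coder's category frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

a = [1, 1, 0, 1, 0, 0, 1, 0]
b = [1, 1, 0, 0, 0, 0, 1, 1]
kappa = cohens_kappa(a, b)
```

Here the coders agree on 6 of 8 segments (75%), but because half that agreement is expected by chance, kappa works out to 0.5 rather than 0.75.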

HRAF: Cross-Cultural Content Analysis

• In the 1940s, George Peter Murdock, Clellan S. Ford, and other behavioral scientists at Yale led the effort to organize an interuniversity, nonprofit organization that is now the Human Relations Area Files (HRAF) at Yale University – the largest archive of ethnography.

• Pages of the HRAF database are indexed by professional anthropologists, following the Outline of Cultural Materials, or OCM, a massive indexing system that was developed to organize and classify material about cultures and societies of the world.

• The OCM is used by cross-cultural researchers to find ethnographic data for testing hypotheses about human behavior across cultures.

• HRAF turns the ethnographic literature into a database for content analysis and cross-cultural tests of hypotheses because you can search the archive for every reference to any of the codes across the more than 400 cultures that are covered.

Five steps in doing an HRAF study

• State a hypothesis that requires cross-cultural data.

• Draw a representative sample of the world’s cultures.

• Find the appropriate OCM codes in the sample.

• Code the variables according to whatever conceptual scheme you’ve developed in forming your hypothesis.

• Run the appropriate statistical tests and see if your hypothesis is confirmed.
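For a hypothesis relating two binary traits across a sample of cultures, the final step often comes down to a chi-square test on a 2×2 table. The counts below are invented for illustration; a real study would draw them from the HRAF sample.

```python
# A sketch of step 5: a chi-square test on hypothetical cross-cultural codes.
# Rows: cultures coded for trait A present/absent; columns: trait B present/absent.
# The counts are invented; a real study would tabulate them from HRAF data.

def chi_square_2x2(table):
    """Chi-square statistic for a 2x2 table [[a, b], [c, d]]."""
    (a, b), (c, d) = table
    n = a + b + c + d
    # Classic shortcut formula for 2x2 tables.
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

table = [[30, 10], [12, 28]]
chi2 = chi_square_2x2(table)
significant = chi2 > 3.84  # critical value at p = .05, 1 degree of freedom
```

With these invented counts the statistic is about 16.2, well past the .05 critical value, so the (hypothetical) association would be judged significant.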

Computers and Text Analysis

• Programs for automated content analysis are based on the concept of a computerized, contextual dictionary. You feed the program a piece of text; the program looks up each word in the dictionary and runs through a series of disambiguation rules to work out what each word means in context.
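A toy version of such a contextual dictionary makes the idea concrete. The entries and the single neighbor-based disambiguation rule below are invented; real systems use large dictionaries and far richer rule sets.

```python
# A toy contextual dictionary with one neighbor-based disambiguation rule.
# All entries, labels, and the rule itself are invented for illustration.
DICTIONARY = {
    "bank": {"default": "FINANCE", "near": {"river": "LANDFORM", "water": "LANDFORM"}},
    "money": {"default": "FINANCE"},
}

def tag(words: list[str]) -> list[str]:
    """Look up each word; apply a neighbor-based disambiguation rule if one exists."""
    tags = []
    for i, w in enumerate(words):
        entry = DICTIONARY.get(w)
        if entry is None:
            tags.append("UNKNOWN")
            continue
        label = entry["default"]
        # Look two words to each side for disambiguating context cues.
        neighbors = words[max(0, i - 2):i] + words[i + 1:i + 3]
        for cue, alt in entry.get("near", {}).items():
            if cue in neighbors:
                label = alt
        tags.append(label)
    return tags

tags = tag("we fished by the river bank".split())
```

Here "bank" is tagged LANDFORM because "river" appears nearby; in "the bank loaned money" the same word would fall back to its default FINANCE sense.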

• Most content analysis, however, is not based on computerized dictionaries, but on the tried-and-true method of coding a set of texts for themes, producing a text-by-theme profile matrix, and then analyzing that matrix with statistical tools. Most text analysis packages today support this kind of work.

• Packages that support this kind of work include SPSS, SAS, SYSTAT, EZ-Text, AnSWR, QDA Miner, and C-ISAID
