ARTICLE 93: Research Methods for Ph. D. and Master’s Degree Studies: Data Analysis: Part 6 of 7 Parts

Written by Dr. Hannes Nel

In academic research we need to think inductively and deductively.

Inductive thinking is used to develop a new theory.

Therefore, it is what you would mostly use when writing a dissertation for a doctoral degree.

And you should use inductive thematic analysis to analyse the data that you collect.

Deductive thinking is used to test existing theory.

Therefore, it is what you would mostly use when writing a thesis for a master’s degree.

And you should use retrospective analysis to analyse the data that you collect.

Narrative analysis uses both inductive and deductive thinking more or less equally.

That is why both a dissertation and a thesis can be written in a narrative format.

I will discuss the nature of inductive thematic analysis, narrative analysis and retrospective analysis in this article.

Inductive thematic analysis (ITA)

Inductive thematic analysis draws on inductive analytic methods. It involves reading through textual data and identifying and coding emergent themes within the data.

ITA requires the generation of free-flow data. The most common data collection techniques associated with ITA are in-depth interviews and focus groups. You can also analyse notes from participant observation activities with ITA, but interview and focus group data are better. ITA is often used in qualitative inquiry, and non-numerical computer software, specifically designed for qualitative research, is often used to code and group data.
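The mechanical side of ITA, attaching text segments to coded themes, can be sketched in a few lines of Python. The interview segments, themes and keywords below are all hypothetical, and real inductive coding is interpretive: codes emerge from repeated reading of the data, not from a fixed keyword list.

```python
from collections import defaultdict

# Hypothetical free-flow interview segments and an emergent codebook.
segments = [
    "I joined because my friends were already members",
    "After the arrest I lost my job and my family",
    "My friends protected me when I was young",
]

codebook = {
    "belonging": ["friends", "members", "protected"],
    "consequences": ["arrest", "lost", "job"],
}

def code_segments(segments, codebook):
    """Attach each segment to every theme whose keywords it mentions."""
    coded = defaultdict(list)
    for segment in segments:
        words = segment.lower().split()
        for theme, keywords in codebook.items():
            if any(keyword in words for keyword in keywords):
                coded[theme].append(segment)
    return dict(coded)

coded = code_segments(segments, codebook)
```

Dedicated qualitative software performs the same grouping, but lets you refine the codebook iteratively as new themes emerge from the data.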

Paradigmatic approaches that fit well with ITA include post-structuralism, rationalism, symbolic interactionism, and transformative research.

Narrative analysis

The word “narrative” is generally associated with terms such as “tale” or “story”. Such stories are mostly told in the first person, although somebody else might also tell the story about a different character, that is, in the third person. The first person will apply if an interview is held. Every person has his or her own story, and you can design your research project to collect and analyse the stories of participants, for example when you study the lived experiences of a member of a gang on the Cape Flats.

There are different kinds of narrative research studies, ranging from personal experiences to oral historical narratives. Narrative analysis therefore refers to a variety of procedures for interpreting narratives obtained through interviews, questionnaires sent by email or post, and perhaps even focus groups. Narrative analysis includes formal and structural means of analysis. One can, for example, relate the information obtained from a gang member in terms of the circumstances and reasons why he or she became a gang member, growth into gang activities, and the consequences of criminal activities for his or her personal life, career, etc. One can also do a functional analysis looking at gang activities and customs (crime, gang fights, recruiting new members, punishment for transgression of gang rules, etc.).

In the analysis of narrative, you will track sequences, chronology, stories or processes in the data, keeping in mind that most narratives have a backwards and forwards nature that needs to be unravelled in the process of analysing the data.

Like many other data collection approaches, narrative analysis, also sometimes called ‘narrative inquiry’, is based on the study and textual representation of discourse, or the analysis of words. The type of discourse or text used in narrative analysis is, as the name indicates, narratives.

Narratives can be generated and recorded during the data collection process, such as through in-depth interviews or focus groups; they can be captured incidentally during participant observation; or they can be embedded in written forms, including diaries, letters, the internet, or literary works. Narratives are analysed in numerous ways, and narrative analysis can be used in research within a substantial variety of social sciences and academic fields, such as sociology, management, labour relations, literature and psychology.

Narrative analysis can be used for a wide range of purposes. Some of the more common usages include formative research for a subsequent study, comparative analysis between groups, understanding social or historical phenomena, or diagnosing psychological or medical conditions. The underlying principle of a narrative inquiry is that narratives are the source of data used, and their analysis opens a gateway to better understanding of a given research topic.

In most narratives meaning is conveyed at different levels, for example the informational content level, which is suitable for content analysis, and the textual level, which is suitable for hermeneutic or discourse analysis.

Narrative analysis has its own methodology. In narrative analysis you will analyse data in search of narrative strings (commonalities running through and across texts), narrative threads (major emerging themes) and temporal/spatial themes (past, present and future contexts).

Retrospective analysis

Retrospective analysis is sometimes also called ‘retrospective studies’ or ‘trend analysis’ or ‘trend studies’. Retrospective analysis usually looks back in time to determine what kind of changes have taken place. For example, if you were to trace the development of computers over the past three decades, you would see some remarkable changes and improvements.

Retrospective analysis focuses on changes in the environment rather than in people, although changes in the fashions, cultures, habits, values, jobs, etc. are also often analysed. Each stage in a chronological development is represented by a sample and each sample is compared with the others against certain criteria.

Retrospective analysis examines recorded data to establish patterns of change that have already occurred, in the hope of predicting what will probably happen in the future. Predicting the future, however, is not simple and often not accurate. The reason for this is that, as the environment changes, so do the variables that determine or govern the change. It therefore stands to reason that the further ahead you try to predict, the more inaccurate your predictions will probably be.
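The logic of extrapolating recorded change can be sketched as fitting a trend line to past values, with the caveat above firmly in mind. The yearly figures below are invented purely for illustration.

```python
# Hypothetical yearly measurements of some changing indicator.
years = [2015, 2016, 2017, 2018, 2019, 2020]
values = [10.0, 12.1, 13.9, 16.2, 18.0, 20.1]

n = len(years)
mean_x = sum(years) / n
mean_y = sum(values) / n

# Ordinary least-squares trend line fitted to the recorded past.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, values))
         / sum((x - mean_x) ** 2 for x in years))
intercept = mean_y - slope * mean_x

# Extrapolate one year ahead. The further ahead we project, the less
# the variables that governed past change can be trusted to hold.
forecast_2021 = slope * 2021 + intercept
```

The same fitted line would give increasingly doubtful values for 2025 or 2030, which is exactly the limitation described above.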

Retrospective analysis does not include the same respondents over time, so the possibility exists for variation in data due to the different respondents rather than the change in trends.

Summary

Inductive thematic analysis, or ITA:

  1. Draws on inductive analytical methods.
  2. Involves reading textual data.
  3. Identifies and codes emergent themes within the data.
  4. Requires the generation of free-flow data.
  5. Favours in-depth interviews and focus groups.
  6. Can also use participant observation.
  7. Fits well with qualitative research and with paradigms such as post-structuralism, rationalism, symbolic interactionism and transformative research.

Narrative analysis:

  1. Tells stories related by people.
  2. Ranges from personal experiences to historical narratives.
  3. Can use a wide range of data collection methods.
  4. Includes formal, structural and functional analysis.
  5. Tracks sequences, chronology, stories or processes in data.
  6. Is based on the textual representation of discourse, or the analysis of words.
  7. Is used by a substantial variety of social sciences.
  8. Can be used for a wide range of purposes.
  9. Conveys meaning on different levels.
  10. Has its own methodology.

Retrospective analysis:

  1. Looks back in time to identify change.
  2. Focuses on change in the environment.
  3. Represents and compares change in samples.
  4. Sometimes tries to predict the future.
  5. Does not include the same respondents over time.

Close 

It is a good idea to mention and explain in your thesis or dissertation how you analysed the data that you collected.

Ph. D. students will already do so in their research proposal.

That is why you need to know which data analysis methods are available and what they mean.

It will also help to ensure that you use the data that you collect efficiently and effectively to achieve the purpose of your research.

Enjoy your studies.

Thank you.


ARTICLE 92: Research Methods for Ph. D. and Master’s Degree Studies: Data Analysis: Part 5 of 7 Parts: Ethnographic analysis

Written by Dr. Hannes Nel

I wonder if ethnographic research was ever as vitally important as now.

The COVID-19 pandemic has dramatically changed the way people live, interact, socialise and survive.

No doubt, research on how to combat the virus is still the priority.

However, while numerous researchers are frantically working on finding an effective and safe vaccine, life goes on.

And it will take a long time before everybody is vaccinated anyway.

And we need to determine the impact of unemployment, financial difficulties, famine, crime and the loss of loved ones on our psychological health.

And we need to find ways in which to cope with the new reality.

I discuss ethnographic analysis in this article.

Ethnographic analysis typically addresses the issue of ‘what is going on’ between the participants in some segment (or segments) of the data, in great analytical depth and detail. Ethnographic studies aim to provide contextual, interpretive accounts of their participants’ social worlds.

Ethnographic analysis is rarely systematic or comprehensive: rather, it is selective and limited in scope. Its main advantage is to permit a detailed, partially interpretive, account of mundane features of the social world. This account may be limited to processes within the focus group itself, or (more typically) it may take the focus group discussion as offering a ‘window’ on participants’ lives.

Ethnographic analysis aims to ground interpretation in the particularities of the situation under study and in participants’ (rather than analysts’) perspectives. Data are generally presented as accounts of social phenomena or social practices, substantiated by illustrative quotations from the focus group discussion. Key issues in ethnographic analysis are:

• how to select the material to present,

• how to give due weight to the specific context within which the material was generated, while retaining some sense of the group discussion as a whole, and

• how best to prioritise participants’ orientation in presenting an interpretive account.

Researchers using ethnographic methods, such as observing people in their natural settings, often ask what role the researcher should adopt: an overt and announced role, or a covert and secret one? The most common roles that you as the researcher may play are complete participant, participant as observer, observer as participant and complete observer.

The complete participant seeks to engage fully in the activities of the group or organisation being researched. Thus, this role requires you to enter the setting covertly so that the participants will not be aware of your presence or at least not aware that you are doing research on them. By doing research covertly you are supposed to be able to gather more accurate information than if participants were aware of what you are doing – they should act more naturally than otherwise. The benefit of the covert approach is that you should gain better understanding of the interactions and meanings that are held important to those regularly involved in the group setting. Covert research can, however, expose you to the risk that your efforts might prove unsuccessful, especially if the participants find out that you were doing research on them without them being informed and without their agreement. Such research can also lead to damage to the participants in many ways, for example by embarrassing them, damaging their career prospects, damaging their personal relationships, etc.

You will act ethically and more safely if you, as the researcher, observe a group or individual and participate in their activities. In this case you formally make your presence and intentions known to the group being studied and you ask for their permission. This may involve a general announcement that you will be conducting research, or a specific introduction as the researcher when meeting the various people who will form part of the target group for the research.

This approach requires you to develop sufficient rapport with the participants to gain their support and co-operation. You will need to explain to them why the research is important and how they will benefit from it. The possibility exists that you may become emotionally involved in the activities and challenges of the target group, which might have a negative effect on your ability to interpret information objectively.

The researcher as observer only is, as we already discussed, an etic approach. Here you will distance yourself from the idea of participation but still do your research openly and in agreement with the target group. Such transparent research often involves visiting just one site or a setting that is offered only once. It will probably be necessary to do relatively formal observation. The risk exists that you may fail to adequately appreciate certain informal norms, roles, or relationships and that the group might not trust you and your intentions, which is why the period of observation should not be too long.

The complete and unannounced observer tends to be a covert role. In this case, you typically remain in the setting for a short period of time but are a passive observer to the flow of activities and interactions.

Summary

Ethnographic analysis:

  1. Analyses events and phenomena in a social context.
  2. Is selective and limited in scope.
  3. Delivers a detailed interpretation of commonplace features of the social world.
  4. Focuses on specific aspects of the target group’s lives.

Key issues of ethnographic analysis are:

  1. How data to analyse is selected.
  2. The context on which the collection and analysis focuses.
  3. Interpretation and description of the findings by focusing on the target group’s orientation.

Observation is often used for the collection of data.

An emic or etic approach can be followed.

An etic approach is often also executed covertly.

Covert collection of data can promote accuracy because the target group for the research will probably behave naturally if they do not know that they are being observed.

A covert approach can be rendered inadvisable because of ethical considerations.

An overt approach requires gaining the trust of the target group for the research.

Close

You probably noticed that it is near impossible to discuss data collection and data analysis separately.

Besides, ethnography is a research method, and ethnographic data collection and analysis are part of the method.

Natural scientists will probably only use it to trace the ontology of scientific concepts or phenomena.

And then the data will be historical in nature.

Enjoy your studies.

Thank you.


ARTICLE 91: Research Methods for Ph. D. and Master’s Degree Studies: Data Analysis: Part 4 of 7 Parts: Elementary Analysis

Written by Dr. Hannes Nel

Most social research requires the analysis of several variables simultaneously (called “multivariate analysis”); the simultaneous association of age, education and gender would be an example. Specific techniques for conducting a multivariate analysis include factor analysis, multiple correlation, regression analysis and path analysis. All these techniques are based on the preparation and interpretation of comparative tables and graphs, so you should practise doing this if you do not already know how.

These are largely quantitative techniques. Fortunately, the statistical calculations are done for you by the computer, so just be aware of the definitions.

Factor analysis. Factor analysis is a statistical procedure used to uncover relationships among many variables. This allows numerous inter-correlated variables to be condensed into fewer dimensions, called factors. It is possible, for example, that variations in three or four observed variables mainly reflect the variations in a single unobserved variable, or in a reduced number of unobserved variables. Clearly this type of analysis is mostly numerical in nature. Factors are analysed inductively to determine trends, relationships, correlations, causes of phenomena, etc. Factor analysis searches for variations in response to variables that are difficult to observe and that are suspected to have an influence on events or phenomena.
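The intuition behind condensing inter-correlated variables into a single factor can be sketched as follows. The test scores are invented, and the unit-weighted average at the end stands in for the factor scores that real factor-analysis software would estimate from the correlation matrix.

```python
from math import sqrt

def pearson(a, b):
    """Pearson correlation between two equal-length lists of numbers."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    var_a = sum((x - mean_a) ** 2 for x in a)
    var_b = sum((y - mean_b) ** 2 for y in b)
    return cov / sqrt(var_a * var_b)

# Hypothetical scores on three observed variables suspected to reflect
# a single unobserved factor (say, numeracy).
arithmetic = [55, 62, 70, 48, 81, 66]
algebra = [58, 60, 74, 50, 79, 69]
geometry = [52, 65, 71, 47, 84, 63]

# High pairwise correlations suggest one underlying dimension...
r_pairs = [pearson(arithmetic, algebra),
           pearson(arithmetic, geometry),
           pearson(algebra, geometry)]

# ...so the three variables can be condensed into a single factor score
# per respondent (here a simple unit-weighted average).
factor_scores = [sum(triple) / 3
                 for triple in zip(arithmetic, algebra, geometry)]
```

Statistical packages do the condensation properly, by extracting factors from the correlation matrix, but the underlying idea is the same: several correlated observed variables are reduced to fewer dimensions.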

Multiple correlation. Multiple correlation is a statistical technique that predicts values of one variable based on two or more other variables. For example, what will happen to the incidence of HIV/AIDS (the variable we are doing research on) in a particular area if unemployment increases (variable 1), famine breaks out (variable 2) and the incidence of TB increases (variable 3)?

Multiple correlation is a linear relationship among more than two variables. It is measured by the coefficient of multiple determination, which is a measure of the fit of a linear regression. The coefficient falls somewhere between zero and one (assuming a constant term has been included in the regression); a higher value indicates a stronger relationship between the variables, with a value of one indicating a perfect relationship and a value of zero indicating no relationship at all between the independent variables collectively and the dependent variable.
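The coefficient of multiple determination can be computed directly from the normal equations for two predictors. The figures below are invented; they simply show a value emerging between zero and one, as described above.

```python
# Hypothetical data: one dependent variable y, two independents x1, x2.
x1 = [1.0, 2.0, 3.0, 4.0, 5.0]
x2 = [2.0, 1.0, 4.0, 3.0, 6.0]
y = [3.1, 3.9, 7.2, 7.8, 11.1]

def center(v):
    mean = sum(v) / len(v)
    return [a - mean for a in v]

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

c1, c2, cy = center(x1), center(x2), center(y)

# Solve the 2x2 normal equations for the regression coefficients.
s11, s22, s12 = dot(c1, c1), dot(c2, c2), dot(c1, c2)
s1y, s2y, syy = dot(c1, cy), dot(c2, cy), dot(cy, cy)
det = s11 * s22 - s12 * s12
b1 = (s22 * s1y - s12 * s2y) / det
b2 = (s11 * s2y - s12 * s1y) / det

# Coefficient of multiple determination: explained over total variation.
r_squared = (b1 * s1y + b2 * s2y) / syy
```

A value near one, as here, indicates that the two independent variables together account for almost all of the variation in the dependent variable.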

Path analysis. Path analysis can be a statistical method of finding cause/effect relationships, a method for finding the trail that leads users to websites, or an operations research technique. We also have “critical path analysis”, which is mostly used in project management: a method by which the activities in a project are planned in a logical sequence to ensure that the project is completed efficiently and effectively. Here we are concerned with path analysis as an operations research technique.

Path analysis is a method of decomposing correlations into different components of effect (e.g. how does parental education influence children’s income when they are adults?). Path analysis is closely related to multiple regression; you might say that regression is a special case of path analysis. It is a “causal model” because it allows us to test theoretical propositions about cause and effect without manipulating variables.
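For the simplest three-variable model, the decomposition can be shown numerically. The figures below are invented, and the model (parental education influencing adult income both directly and through the child’s own education) is only a toy version of the example in the text.

```python
from math import sqrt

def pearson(a, b):
    """Pearson correlation between two equal-length lists of numbers."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((p - mean_a) * (q - mean_b) for p, q in zip(a, b))
    var_a = sum((p - mean_a) ** 2 for p in a)
    var_b = sum((q - mean_b) ** 2 for q in b)
    return cov / sqrt(var_a * var_b)

x = [12, 16, 10, 14, 18, 11, 15, 13]  # parental education (years)
m = [13, 17, 11, 15, 17, 12, 16, 13]  # child's own education (years)
y = [30, 52, 24, 40, 58, 27, 47, 33]  # child's adult income (thousands)

rxm, rxy, rmy = pearson(x, m), pearson(x, y), pearson(m, y)

# Standardised path coefficients for the model x -> m -> y with a
# direct x -> y path (regression of y on x and m).
det = 1 - rxm ** 2
p_my = (rmy - rxy * rxm) / det  # direct effect of m on y
p_xy = (rxy - rmy * rxm) / det  # direct effect of x on y
indirect = rxm * p_my           # effect of x routed through m

# The total correlation decomposes into direct plus indirect effects.
decomposed = p_xy + indirect    # equals rxy
```

The decomposition is exactly what distinguishes path analysis from a plain regression: the overall correlation is split into a direct path and an indirect path through the intervening variable.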

Regression analysis. Regression analysis can be used to determine which factors influence events, phenomena, or relationships.

Regression analysis includes a variety of techniques for modelling and analysing several variables, when the focus is on the relationship between a dependent variable and one or more independent variables. If, for example, you wish to determine the effect of tax, legislation and education on levels of employment, levels of employment will be the dependent variable while tax, legislation and education will be the independent variables. More specifically, regression analysis helps one understand how to maintain control over a dependent variable. In the level of employment example, you might wish to know what should be done in terms of tax, legislation and education to improve employment or at least to maintain a healthy level of employment. In this example it is of interest to characterise the variation of the dependent variable around the regression function, which can be described by a probability distribution (how much the level of employment would change and in what direction if all, some or one of the independent variables change by a particular value).

Regression analysis typically estimates the conditional expectation of the dependent variable given the independent variables – that is, the average value of the dependent variable when the independent variables are held fixed. Seen from this perspective, the example of employment levels would mean investigating what would happen if tax, legislation and education remain unchanged.

Regression analysis is widely used for prediction and forecasting, although this should be done with circumspection. Regression analysis is also used to understand which among the independent variables are related to the dependent variable, to explore the forms of these relationships. Regression analysis presupposes causal relationships between the independent and dependent variables, although investigation can also show that such relations do not exist. An example of using regression analysis, also called “multiple regression” is to determine which factors from colour, paper type, number of advertisements and content (independent variables) have the biggest effect on the number of magazines sold (dependent variable).

Summary

Multivariate analysis can be used for the analysis of several variables simultaneously.

Techniques that can be used for conducting multivariate analysis include factor analysis, multiple correlation, path analysis and regression analysis.

Factor analysis is used to uncover relationships among many variables.

Factors are analysed inductively to determine trends, relationships, correlations, cause of phenomena, etc.

Multiple correlation predicts values of one variable based on two or more other variables.

Multiple correlation is a linear relationship among more than two variables.

Path analysis seeks cause/effect relationships.

It can also be used to find data or to manage projects.

Regression analysis can be used to determine which factors influence events, phenomena or relationships.

It includes a variety of techniques for modelling and analysing several variables when the focus is on the relationship between a dependent variable and one or more independent variables.

Regression analysis helps us to understand how to maintain control over a dependent variable.

Close

Statistics are a wonderfully flexible way in which to analyse data.

Dedicated computer software can do the calculations for us and show us the numbers in tabular and graphic format.

All we need to do is analyse the numbers or graphs.

It is mostly quite easy to interpret visual material.

And you will impress your study leader, lecturer and other stakeholders in your research if you use such analysis techniques.

Most importantly, it will be so much easier and faster to come to conclusions and to derive valid and accurate findings from your conclusions.

Enjoy your studies.

Thank you.


ARTICLE 90: Research Methods for Ph. D. and Master’s Degree Studies: Data Analysis Part 3 of 7 Parts

Written by Dr. Hannes Nel

I discuss conversation and discourse analysis as data collection methods in this article.

Conversation and discourse analysis

Both conversation and discourse analysis approaches stem from the ethnomethodological tradition, which is the study of the ways in which people produce recognisable social orders and processes. Both of these approaches tend to examine text as an “object of analysis”. Discourse analysis is a rather comprehensive process of evaluating the structures of conversations, negotiations and other forms of discourse as well as how people interact when communicating with one another. The sharing of meaning through discourse always takes place in a particular context so the social construction of such discourse can also be analysed.

Conversation and discourse analysis both study “naturally” occurring language, as opposed to text resulting from more “artificial” contexts, such as formal interviews. The purpose is to identify social and cultural meanings and phenomena from the discourse studied, which is why the process is suitable for almost any culture-related research.

The name “discourse” shows that it is language that is analysed while language is also used to do research. It can be a complex process and is often better suited to those more interested in theorising about life than those who want to research actual life events.

Discourse analysis focuses on the meaning of the spoken and written word, and the reasons why it is the way it is. Discourse refers to expressing oneself using words and to the variety and flexibility of language in the way language is used in ordinary interaction.

When doing research, we often look for answers in places or sources that we can easily reach, when the real answers might lie somewhere else. Discourse analysis is one method which allows us to move beyond the obvious to less obvious, although often more relevant, sources of data.

Discourse analysis examines what people say, rather than merely recording facts. Discourses are ever-present ways of knowing, valuing and experiencing the world. Different people have different discourses. Gangs on the Cape Flats, for example, use words and sentences that the ordinary man on the street will find difficult to understand. Discourses are used in everyday texts for building power and knowledge, for regulation and normalisation, and for the development of new knowledge and power relations.

As a language-based analytical process, discourse analysis is concerned with studying and analysing written texts and spoken words to reveal any possible relationships between language and social interaction. Language is analysed as a possible source of power, dominance, inequality and bias. Processes that may be the subject of research include how language is initiated, maintained, reproduced and transformed within specific social, economic, political and historical contexts. A wide variety of relationships and contexts can be investigated and analysed, including the ways in which the dominant forces in society construct versions of reality that favour their interests; the ideological assumptions hidden in the words of written text or oral speech can be uncovered in order to resist, overcome or even capitalise on various forms of power. Criminals in a correctional facility will, for example, be included or excluded from gangs on account of certain ways of speech and codes that only they know.

Discourse analysis collects, transcribes and analyses ordinary talk and everyday explanations for social actions and interaction. It emphasizes the use of language as a way to construct social reality. Yin[1] defines discourse analysis as follows:

“Discourse analysis focuses on explicit theory formation and analysis of the relationships between the structures of text, talk, language use, verbal interaction or communication, on the one hand, and societal, political, or cultural micro- and macro-structures and cognitive social representations, on the other hand.”

Discourse analysis examines a discourse by looking at patterns of the language used in a communication exchange as well as the social and cultural contexts in which these communications occur. It can include counting terms, words, and themes. The relationship between a given communication exchange and its social context requires an appreciation and understanding of culturally specific ways of speaking and writing and ways of organising thoughts.
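The counting side of discourse analysis can be sketched with the standard library. The transcript excerpt is invented, and the interpretive work of relating the counts to their social and cultural context remains with the researcher.

```python
import re
from collections import Counter

# Hypothetical transcript excerpt from a communication exchange.
transcript = (
    "We must protect our community. They never listen to us. "
    "We asked, we waited, and they ignored us again."
)

# Tally every word in the exchange.
words = re.findall(r"[a-z]+", transcript.lower())
counts = Counter(words)

# Simple term counts can already hint at an us/them framing,
# which closer discourse analysis would then interpret in context.
we_us = counts["we"] + counts["us"]
they_them = counts["they"] + counts["them"]
```

The counts are only a starting point: the same pronouns could carry a very different meaning in another cultural setting, which is why the contextual reading described above is essential.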

Oral communication always fits into a context which lends meaning to it. It always has a double structure, namely the propositional content (ontology) and the performatory content (epistemological meaning). Oral communication can, for example, be used with good effect to understand human behaviour, thought processes and points of view.

The result of discourse analysis is a form of psychological natural history of the phenomena in which you are interested. To be of value for research purposes oral communication must be legitimate, true, justified, sincere and understandable. It should also be coherent in organisation and content and enable people to construct meaning in social context. Participants in oral communication should do so voluntarily and enjoy equal opportunity to speak.

Discourse analysis is a form of critical theory. You, as the researcher, need to ensure that the discourse and the participants in the discussion meet the requirements for such interaction. It will also be your duty to eliminate or at least reduce any forces or interventions that may disrupt the communication. Such discourse can also be taken further by having other participants in the research process elaborate and further analyse the results of initial communications. For this purpose, you need to be highly sensitive to the nuance of language.

Any qualitative research allows you to code and structure data by means of dedicated qualitative data analysis software (often called CAQDAS), such as ATLAS.ti. This will enable you to discover patterns and broad areas of salient argumentation, intentions, functions, and consequences of the discourse. By seeking alternative explanations and the degree of variability in the discourse, it is possible to rule out rival interpretations and arrive at a fair and accurate comprehension of what took place and what it meant.

Discourse analysis can also be used to analyse and interpret written communication on condition that the written communication is a written version of communication relevant to the topic being researched. This requires a careful reading and interpretation of textual material.

Discourse analysis has been criticised for its lack of system, its emphasis on the linguistic construction of a social reality, and the tendency of the analysis to shift attention away from what is being analysed and towards the analysis itself. Discourse is in actual fact a text in itself, with the result that it can also be analysed for meaning and inferences, which might lead to the original meaning of the oral communication being eroded, at the expense of accuracy, authenticity, validity and relevance.

Conversation analysis is arguably the most immediate and most frequently used form of discourse analysis, in the sense that it includes any face-to-face social interaction. Social interaction inevitably includes contact with other people, and contact with other people mostly includes communication. People construct meaning through speech and text, and the object of analysis typically goes beyond individual sentences. Data on conversations can be collected through direct communication, which needs to be recorded by taking notes or by making a video or audio recording.

Conversation analysis is the study of talk in interaction and generally attempts to describe the orderliness, structure and sequential patterns of interaction, whether in institutional settings or in casual conversation. Conversation analysis is a way of analysing data and has its own methodological features. It studies the social organisation of two-way conversation through a detailed inspection of voice recordings and the transcriptions made from them, and relies much more on the patterns, structures and language used in speech and the written word than other forms of data analysis do.

Conversation analysis assumes that it is fundamentally through interaction that participants build social context. The notion of talk as action is central to its framework. Within a focus group we can see how people tell stories, joke, agree, debate, argue, challenge or attempt to persuade. We can see how they present particular ‘versions’ of themselves and others for particular interactional purposes, for example to impress, flatter, tease, ridicule, complain, criticise or condone.

Participants build the context of their talk in and through the talk while talking. The talk itself, in its interactional context, provides the primary data for analysis. Further, it is possible to harness analytical resources intrinsic to the data: by focusing on participants’ own understanding of the interaction as displayed directly in their talk, through the conversational practices they use. In this way, a conversation analytic approach prioritises the participants’ (rather than the analysts’) analysis of the interaction.

Naturally occurring data, i.e. data produced independently of the researcher, encompass a range of institutional contexts (for example classrooms, courtrooms, doctors’ surgeries, etc.) in which talk has been shown both to follow the conventions of ‘everyday’ conversation and to depart from them systematically.

Conversation analysis tends to be more granular than classical discourse analysis, looking at elements such as grammatical structures and concentrating on smaller units of text, such as phrases and sentences. An example of conversation analysis is where a researcher “eavesdrops” on the way in which different convicted criminals talk to other inmates in order to find a pattern in their thinking processes.
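The turn-by-turn granularity described above can be illustrated with a minimal sketch in Python. Everything here is hypothetical: the transcript format (one turn per line, as “Speaker: utterance”) and the idea of counting which speaker follows which are merely illustrative stand-ins for the much richer sequential detail (pauses, overlaps, intonation) that conversation analysts actually record.

```python
from collections import Counter

def parse_turns(transcript):
    """Split a plain-text transcript into (speaker, utterance) turns.
    Assumes one turn per line in the form 'Speaker: utterance'."""
    turns = []
    for line in transcript.strip().splitlines():
        speaker, _, utterance = line.partition(":")
        turns.append((speaker.strip(), utterance.strip()))
    return turns

def adjacency_pairs(turns):
    """Count which speaker follows which -- a crude proxy for the
    sequential patterns that conversation analysts examine."""
    return Counter(
        (turns[i][0], turns[i + 1][0]) for i in range(len(turns) - 1)
    )

transcript = """
A: Did you see him that night?
B: I did, yes.
A: And what did he say?
B: Nothing much.
"""
turns = parse_turns(transcript)
print(adjacency_pairs(turns))  # Counter({('A', 'B'): 2, ('B', 'A'): 1})
```

Even this toy example shows why the method is labour-intensive: every utterance must be transcribed and inspected before any pattern can be counted.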

While conversation and discourse analysis are similar in several ways, there are some key differences. Discourse analysis is generally broader in what it studies, drawing on virtually any naturally occurring text, including written texts, lectures, documents, etc. An example of discourse analysis would be a researcher going through transcripts of, or listening in on, group discussions between convicted serial murderers to examine their patterns of reasoning.

The implications of discourse and conversation analysis for data collection and sampling are twofold. The first pertains to sample sizes and the amount of time and effort that goes into text analysis at such a fine level of detail, relative to thematic analysis. In a standard thematic analysis, the unit of analysis may be a few sentences of text, and the analytic action would be to identify themes within that text segment. In contrast, linguistic-oriented approaches, such as conversation and discourse analysis, require intricate dissection of words, phrases, sentences and the interaction among speakers. In some cases, tonal inflection is included in the analysis. Linguistic material, be it transcripts of conversations, interviews or any other form of communication, is usually abundant, and analysing it at this level of detail requires substantial time and effort, with the result that only a limited number of samples can be processed in a reasonable time.

The data source inevitably determines the type and volume of analysis that can be done. Both discourse analysis and conversation analysis are interested in naturally occurring language. In-depth interviews and focus groups can be used to collect data, although they are not ideal if it is important to analyse natural social communication. Analysis of such data often requires reading and rereading material to identify key themes and other relevant information that would lead to meanings relevant to the purpose of the research.

Existing documents, for example written statements made by convicted criminals, are excellent sources of data for both discourse analysis and conversation analysis. In terms of field research, participant observation is ideal for capturing “naturally occurring” discourse. Minutes of meetings, written statements, transcripts of discussions, etc. can be used for this purpose. During participant observation, one can also record naturally occurring conversations between two or more people belonging to the target population for the study, for example two surviving victims of attacks by serial killers, or two security guards who have had experience of attempted serial killings. In many cases, however, legal restrictions might make it risky to listen in on conversations.

Text can be any documentation, including personal reflections, books, official documents and many more. In action research this is enhanced with personal experiences, which can also be recorded on paper, where they often become historical data. In action research the study is given a more relevant cultural “flavour” by engaging participants from the community directly in the data collection and analysis. The emphasis is on open relationships with participants, so that they have a direct say in how data is collected and interpreted. If participants decide that technical procedures such as sampling, or skilled tasks such as interviewing, should be part of the data collection and analysis process, they can draw on expert advice and training supplied by researchers.

Paradigmatic approaches that fit well with discourse and conversation analysis include constructivism, hermeneutics, interpretivism, critical theory, post-structuralism and ethnomethodology.

Summary

Discourse analysis:

  1. Evaluates the structures of conversations, negotiations and other forms of communication.
  2. Is dependent on context.
  3. Analyses and uses language.
  4. Focuses on the meaning of the spoken and written word.
  5. Allows the researcher to move from the obvious to the less obvious.
  6. Is concerned with studying and analysing written texts and spoken words to reveal the relationships between language and social interaction.
  7. Examines a discourse by looking at patterns of the language used.
  8. Delivers a form of psychological natural history of the phenomena being investigated.
  9. Is a form of critical theory.
  10. Is criticised for its lack of system, emphasis on the linguistic construction of social reality and the lack of focus on the research problem.

Conversation analysis:

  1. Is a form of discourse analysis.
  2. Includes face-to-face social interaction.
  3. Attempts to describe the orderliness, structure and sequential patterns of interaction.
  4. Has its own methodological features.
  5. Assumes that it is fundamentally through interaction that participants build social context.

Discourse and conversation analysis:

  1. Stem from the ethnomethodological tradition.
  2. Examine text as the object of analysis.
  3. Study naturally occurring language.
  4. Identify social and cultural meanings and phenomena.
  5. Require intricate dissection of words, phrases, sentences and interaction between people.

Close

The differences between discourse and conversation analysis are subtle.

Discourse analysis is broader than conversation analysis in the range of its analysis.

Conversation analysis, on the other hand, tends to go into finer detail than discourse analysis.

Enjoy your studies.

Thank you.




ARTICLE 89: Research Methods for Ph. D. and Master’s Degree Studies: Data Analysis, Part 2 of 7

Written by Dr. Hannes Nel

Hello, I am Hannes Nel and I discuss comparative and content analysis in this article.

Although quite simple, comparative and content analysis are most valuable for your research towards a master’s degree or a Ph. D.

It does not matter what the topic of your research is – you will compare concepts, events or phenomena and you will study the content of existing data sources.

What you need to know is how to analyse and use such data.

Comparative analysis

Comparative analysis is a means of analysing the causal contribution of different conditions to an outcome of interest. It is especially suitable for analysing situations of causal complexity, that is, situations in which an outcome may result from several different combinations of causal conditions. The diversity, variety and extent of an analysis can be increased, and the significance potential of empirical data can be improved through comparative analysis. The human element plays an important role in comparative research because it is often human activities and manifestations that are compared.

Although theoretical abstractions from reality can be, and in some instances are, the only way in which to do valid comparison, the units of analysis can also be whole societies or systems within societies. Comparative research does not simply mean comparing different societies or the same society over time – it might also involve searching systematically for similarities and differences between the cases under consideration.

Comparative researchers usually base their research on secondary sources, such as policy papers, historical documents or official statistics, but some degree of interviewing and observation could also be involved. A measure of verification is achieved by consulting more than one source on a particular issue.

Qualitative research approaches are most suitable for the conduct of comparative analysis, with the result that many paradigmatic approaches can be used. Examples include behaviourism, critical race theory, critical theory, ethnomethodology, feminism, hermeneutics and many more.

Content analysis

Content analysis is a systematic approach to qualitative data analysis, making it suitable to serve as the foundation of qualitative research software. It is an objective and systematic way in which to identify and summarise message content. The term ‘content analysis’ refers to the analysis of such things as books, brochures, written or typed documents, transcripts, news reports, visual media as well as the analysis of narratives such as diaries or journals. Although mostly associated with qualitative research approaches, statistical and other numerical data can also be analysed, making content analysis suitable for quantitative research as well. Sampling and coding are ubiquitous elements of content analysis.

The most obvious example of content analysis is the literature study that any researcher needs to do when preparing a research proposal as well as when conducting the actual research for a doctoral or master’s degree.

Especially (but not only) inexperienced students often think that volume equals quality, with the result that they include any content in their theses or dissertations without even asking themselves whether it is relevant to the research that they are doing. The information that you include must be relevant, and it must add value to your thesis or dissertation.

We analyse the characteristics of language as communication with regard to its content. This means examining words or phrases within a wide range of texts, including books, book chapters, essays, interviews and speeches, as well as informal conversation and headlines. By examining the presence or repetition of certain words and phrases in these texts, you are able to make inferences about the philosophical assumptions of a writer, a written piece, the audience for which a piece is written, and even the culture and time in which the text is embedded. Due to this wide array of applications, content analysis is used in literature and rhetoric, marketing, psychology, cognitive science, etc.

The purpose of content analysis is to identify patterns, themes, biases and meanings. Classical content analysis will look at patterns in terms used, ideas expressed, associations among ideas, justifications, and explanations. It is a process of looking at data from different angles with a view to identifying key arguments, principles or facts in the text that will help us to understand and interpret the raw data. It is an inductive and iterative process where we look for similarities and differences in text that would corroborate or disprove theory or a hypothesis. A typical content analysis would be to evaluate the contents of a newly written academic book to see if it is on a suitable level and aligned with the learning outcomes of a curriculum.

Content analysis can also be used to analyse ethnographic data. Ethnographic data can be used to prove or disprove a hypothesis. However, in this case validity might be suspect, primarily because a hypothesis should be proven or rejected on account of valid evidence. Quantitative analysis is often regarded as more “scientific”, and therefore more accurate, than qualitative analysis. This, however, is a perception that only holds true if the quantitative data can be shown to be objective, accurate and authentic. Qualitative data that is sufficiently corroborated is often more valid and accurate than quantitative data based on inaccurate or manipulated statistics.

Content analysis typically comprises three stages: stating the research problem; collecting and retrieving the text and employing sampling methods; and interpretation and analysis. Stating the problem will typically be done early in the thesis or dissertation. Collecting and retrieving text and employing sampling methods typically constitute the actual research process, which may include interviewing, literature study, etc.

It is a good idea to code your work as you write. Find one or more key words for every section and keep a record of them. In this manner you will be able to find arguments that belong together more easily, and you will be able to avoid duplicating the same content at different places in your thesis or dissertation. Most dedicated computer software enables you not only to keep content with the same code together, but also to access and even print it. This is especially valuable for structuring the contents of your thesis or dissertation in a logical narrative format and for coming to conclusions without contradicting yourself.
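The keyword record suggested above can be as simple as a mapping from section headings to codes. The sketch below is purely illustrative (the section names and codes are invented); it shows how such an index lets you retrieve every section that shares a code, which is how related arguments are grouped and duplication is spotted.

```python
# Hypothetical running index: section headings mapped to the key
# words (codes) assigned while writing.
section_codes = {
    "2.1 Problem statement": ["gang culture", "socialisation"],
    "3.2 Interview findings": ["socialisation", "initiation"],
    "4.1 Discussion": ["gang culture", "initiation"],
}

def sections_for(code, index):
    """Return every section tagged with a given code, so that related
    arguments can be grouped together and duplication spotted."""
    return [section for section, codes in index.items() if code in codes]

print(sections_for("socialisation", section_codes))
# ['2.1 Problem statement', '3.2 Interview findings']
```

Dedicated qualitative-analysis software does essentially this, at scale, with retrieval and printing built in.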

Content analysis sometimes incorporates a quantitative element. It is based on examining data for recurrent instances of some kind, i.e. patterns. These instances are then systematically identified across the data set and grouped together. You should first decide on the unit of analysis: this could be the whole group, the group dynamics, the individual participants, or the participants’ utterances. The unit of analysis provides the basis for developing a coding system, and the codes are then applied systematically across a transcript. Once the data have been coded, a further issue is whether to quantify them by counting instances. Counting is an effective way in which to provide a summary or overview of the data set as a whole.
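The counting step can be sketched in a few lines of Python. The coded segments below are invented for illustration; in practice they would come from a prior manual or software-assisted coding pass.

```python
from collections import Counter

# Hypothetical coded data: (code, text segment) pairs produced
# by an earlier coding pass over interview transcripts.
coded_segments = [
    ("fear", "I never walked home alone after that."),
    ("distrust", "You stop believing what people tell you."),
    ("fear", "Every noise at night felt like a threat."),
    ("coping", "Talking about it helped, eventually."),
    ("fear", "I kept the lights on for months."),
]

# Counting instances per code gives a simple overview of the data set.
code_counts = Counter(code for code, _ in coded_segments)
print(code_counts.most_common())
# [('fear', 3), ('distrust', 1), ('coping', 1)]
```

The counts are only a summary; the interpretation of what each code means still rests on the qualitative reading behind it.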

Interviewing is mostly used prior to doing content analysis, although literature study can also be used. Analysing data obtained through interviewing includes analysing data obtained from a focus group. This variation of content analysis usually begins by examining the text for similarly used words, themes, or answers to questions. Analysed data need to be arranged to fit the purpose of the research. This can, for example, be achieved by indexing data under certain topics or subjects, or by using dedicated research software. In addition to individual ideas, the flow of ideas throughout the group should also be examined. It is, for example, important to determine which ideas enjoy the most support and agreement.

Paradigmatic approaches that fit well with content analysis include feminism, hermeneutics, interpretivism, modernism, post-colonialism and rationalism.

Summary

Comparative analysis:

  1. Analyses the conditions that lead to an outcome.
  2. Involves searching systematically for similarities and differences.
  3. Mostly uses secondary data sources.
  4. Is mostly used with qualitative research.

Theoretical abstractions can be used for comparative analysis.

Comparative analysis is used:

  1. To increase the diversity, variety and extent of an analysis.
  2. To analyse human activities.
  3. To analyse whole societies and systems within societies.

Content analysis:

  1. Can serve as the foundation for qualitative research.
  2. Can be used with qualitative and quantitative research.
  3. Extensively uses literature as data.
  4. Can also be used to analyse ethnographic data.

The purpose of content analysis is to identify patterns, themes, biases and meanings.

It typically comprises three stages: stating the research problem, collecting data, and analysing data.

Coding can be used with good effect in content analysis.

Close

You probably already noticed that the differences between different data analysis methods are just a matter of emphasis.

They share many elements.

For example, both comparative analysis and content analysis use literature as sources of data.

Both fit in better with qualitative research than with quantitative research.

This means that you can use more than one data analysis method to achieve the purpose of your research.

Enjoy your studies.

Thank you.


ARTICLE 88: Research Methods for Ph. D. and Master’s Degree Studies: Data Analysis Methods Part 1 of 7 Parts

Written by Dr. Hannes Nel

Isn’t life strange?

There are so many ways in which we can learn.

And the interrelatedness of events, phenomena and behaviour can be researched in so many ways.  

And we can discover truths and learn lessons by linking data, paradigms, research methods, data collection and analysis methods.

And by changing the combination of research concepts, we can discover new lessons, knowledge and truths.

Research often deals with the analysis of data to discriminate between right and wrong, true and false.

Furthermore, people and life form a system with a multitude of links and correlations.

Consequently, we can learn by conducting research on even just an individual. 

I discuss the following two data analysis methods in this article:

  1. Analytical induction.
  2. Biographical analysis.

Analytical induction

Induction, in contrast to deduction, involves inferring general conclusions from particular instances. It is a way of gaining understanding of concepts and procedures by identifying and testing causal links between them. Analytical induction is, therefore, a procedure for analysing data which requires systematic analysis.

It aims to ensure that the analyst’s theoretical conclusions cover the entire range of the available data.

Analytical induction is a data analysis method that is often regarded as a research method in its own right. It uses inductive, as opposed to deductive, reasoning. With it, qualitative data can be analysed without making use of statistical methods. The phenomenon to be explained and the factors that explain it are progressively redefined in an iterative process to maintain a perfect relationship between them.

The procedure of analytical induction means that you, as the researcher, form an initial hypothesis, a series of hypotheses, or a problem statement or question, and then search the data at your disposal for evidence that refutes them, formulating or modifying your conclusions based on the available evidence. This is especially important if you work with a hypothesis, seeing that evidence can prove or refute a hypothesis.

Data are studied and analysed to generate or identify categories of phenomena; relationships between these categories are sought and working typologies and summaries are written based on the data that you examined. These are then refined by subsequent cases and through analysis. You should not only look for evidence that corroborates your premise but also for evidence that refutes it or calls for modification. Your original explanation or theory may be modified, accepted, enlarged or restricted, based on the conclusions to which the data leads you. Analytical induction will typically follow the following procedure:     

  1. A rough definition of the phenomenon to be explained is formulated.
  2. A hypothetical explanation of the phenomenon is formulated.
  3. A real-life case is studied in the light of the hypothesis, with the object of determining whether the hypothesis fits the facts in the case.
  4. If the hypothesis does not fit the facts, either the hypothesis is reformulated or the phenomenon to be explained is redefined, so that the case is excluded.
  5. Practical certainty may be attained after a small number of cases have been examined, but the discovery of negative evidence disproves the explanation and requires a reformulation.
  6. The procedure of examining cases, redefining the phenomenon, and reformulating the hypothesis is continued until a universal relationship is established, each negative case calling for a redefinition or a reformulation.
  7. This contrasts with theories generated by logical deduction from a priori assumptions.
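As a rough illustration, steps 1 to 6 can be sketched as an iterative loop. This is a sketch only: the functions `fits` and `reformulate` are placeholders for the researcher's own judgement, and the numeric toy example at the end is purely illustrative.

```python
def analytic_induction(cases, hypothesis, fits, reformulate):
    """Sketch of the analytic induction loop. `fits(hypothesis, case)`
    and `reformulate(hypothesis, case)` stand in for the researcher's
    judgement about whether a case fits and how to revise."""
    for case in cases:
        while not fits(hypothesis, case):
            # A negative case forces a reformulation of the hypothesis
            # (or a redefinition of the phenomenon to exclude the case).
            hypothesis = reformulate(hypothesis, case)
    return hypothesis  # survives every case examined so far

# Toy illustration: the hypothesis is a numeric threshold, the cases are
# observations, and "fits" means the observation stays within it.
result = analytic_induction(
    cases=[3, 7, 5],
    hypothesis=4,
    fits=lambda h, c: c <= h,
    reformulate=lambda h, c: c,  # raise the threshold to cover the case
)
print(result)  # 7
```

The essential point the loop captures is that a single negative case is enough to force revision; certainty grows only as successive cases fail to refute the working hypothesis.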

Paradigmatic approaches that can be used with analytical induction include all paradigms where real-life case studies are conducted, for example transformative research, romanticism, relativism, rationalism, post-structuralism, neoliberalism and many more.

Biographical analysis

Biographical analysis focuses on an individual. It would mostly focus on a certain period in a person’s life when she or he did something or was somebody of note. Biographical analysis can include research on individual biographies, autobiographies, life histories and the history of somebody as told by those who know it. Data for a biographical analysis will mostly be archived documents, or at least documents that belong in an archive. Interviews can also be used if the person is still alive, or people who knew the individual well can be interviewed.

Although biographical analysis mostly deals with prominent individuals, it can also deal with humble people, people with tragic life experiences, people from whose life experiences lessons can be learned, etc. Regardless of whether the individual is or was a prominent person or not, you as the researcher will need to collect extensive information on the individual, develop a clear understanding of the historical and contextual background, and have the ability to write in a good narrative format.

You can approach a biographical analysis as a classical biography or as an interpretive biography. A classical biography is one in which you, as the researcher, are concerned about the validity and criticism of primary sources, so that you develop a factual base for explanations. An interpretive biography is a study in which your presence and your point of view are acknowledged in the narrative. Interpretive biographies recognise that, in a sense, the writer ‘creates’ the person in the narrative.

Summary

Analytical induction:

  1. Is a procedure for analysing data.
  2. Requires systematic analysis.
  3. Identifies and tests causal links between phenomena.
  4. Ensures complete coverage of data through theoretical conclusions.
  5. Is regarded as a research method by some.
  6. Progressively refines the explanation of phenomena.
  7. Searches for false information through hypothesis testing.
  8. Searches for relationships between phenomena.
  9. Modifies wrong conclusions.
  10. Identifies categories of phenomena.
  11. Enables the researcher to write and summarise working typologies.

Biographical analysis:

  1. Focuses on the individual.
  2. Can include research on individual biographies, autobiographies and life histories.
  3. Mostly falls back on archival documents.
  4. Can deal with anybody’s experiences from which others can gain value and learn lessons.
  5. Can be a classical or interpretive biography.

Close

In this article, we saw how we can gain knowledge by testing the validity, authenticity and accuracy of data.

We also saw that we can learn from the experiences of others.

There are many other ways in which we can discover knowledge by analysing existing data.

We will discuss them in the six articles following on this one.

Enjoy your studies.

Thank you.


ARTICLE 87: Research Methods for Ph. D. and Master’s Degree Studies: Data Analysis Through Coding

Written by Dr. Hannes Nel

Introduction

Hello, I am Hannes Nel and I introduce the data analysis process and ways in which to analyse data in this article. 

You need to know what the different data analysis methods mean if you are to conduct professional academic research. There are a range of approaches to data analysis and they share a common focus. Initially most of them focus on a close reading and description of the collected data. Over time, they seek to explore, discover, and generate connections and patterns underlying the data.

You would probably need to code the data that you collect before you will be able to link it to the problem statement, problem question or hypothesis for your research. Making use of dedicated computer software would be the most efficient way to do this. However, even if you arrange and structure your data by means of more basic computer software, such as Microsoft Excel, or, harking back to the previous century, cards on which you write information, you will still be coding the data.

The fundamentals of data analysis

The way you collect, code and analyse data would largely depend on the purpose of your research. Quantitative and qualitative data analysis are different in many ways. However, the fundamentals of data analysis can mostly be applied to both. In the case of quantitative research, the principles of natural science and the tenets of mathematics can often be added to the fundamentals. Therefore, the fundamentals that I discuss here refer mostly to qualitative research and the narrative parts of quantitative research reports. For our purposes a research report can be a thesis or dissertation.

You should “instinctively” recognise possible codes and groupings by just focusing on the research problem statement or hypothesis. Even so, the following hints, or fundamentals on collecting and analysing data remain more or less the same, regardless of which data analysis method and dedicated computer software you may use:

  1. Always start by engaging in close, detailed reading of a sample of your data. Close, detailed reading means looking for key, essential, striking, odd, interesting, repetitive things people or texts say or do. Try to identify a pattern, make notes, jot down remarks, etc.
  2. Always read and systematically code your collection of data. Code key, essential, striking, odd, linked or related and interesting things that are relevant to your research topic. You should use the same code for events, concepts or phenomena that are repeated many times or are similar in terms of one or more characteristics. These codes can be drawn from ideas emerging from your close, detailed reading of your collection of data, as well as from your prior reading of empirical and theoretical works. Review your prior coding practices with each new application of a code and see if what you want to code fits what has gone before. Use the code if it is still relevant or create a new code if the old one is no longer of value for your purposes. You may want to modify your understanding of a code if it can still be of value, even if the original reason why you adopted it changed or has diminished in significance.
  3. Always reflect on why you have done what you have done. Prepare a document that lists your codes. It might be useful to give some key examples, explain what you are trying to get at, what sort of things should go together under specific codes. Dedicated computer software offers you a multitude of additional functions with which you can sort, arrange, and manipulate objects, concepts, events or phenomena, for example memoranda, quotations, super codes, families, images, etc.

Memoranda can be separate “objects” in their own right that can be linked to any other object.

Quotations are passages of text which have been selected to become free quotations.

Super codes can be queries that typically consist of several combined codes.

And families are clusters of primary documents (PDs), images that belong together, etc.

  4. Always review and refine your codes and coding practices. For each code, accumulate all the data to which you gave the code. Ask yourself whether the data and ideas collected under this code are coherent. Also ask yourself what the key properties and dimensions of all the data collected under the code are. Try to combine your initial codes, look for links between them, look for repetitions and exceptions, and try to reduce them to key ones. This will often mean shifting from verbatim, descriptive codes to more conceptual, abstract and analytical codes. Keep evaluating, adjusting, altering and modifying your codes and coding practices. Go back over what you have already done and recode it in line with your new arguments or ideas.
  5. Always focus on what you feel are the key codes and the relationships between them. Key codes should have a direct bearing on the purpose of your research. Make some judgements about what you feel are the central codes and focus on them. Try to look for links, patterns, associations, arrangements, relationships, sequences, etc.
  6. Always make notes of the thinking behind why you have done what you have done. Make notes on ideas that emerge before or while you are engaged in coding or reading work related to your research project. Make some diagrams, tables, maps or models that enable you to conceptualise, witness, generate and show connections and relationships between codes.
  7. Always return to the field with the knowledge you have already gained in mind and let this knowledge modify, guide or shape the data you want to collect next. This should enable you to analyse the data that you collected and sorted, to do some deconstruction and to create new knowledge. Creating new knowledge requires deep thinking and thorough background knowledge of the topic of your research.
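The review-and-refine step above, shifting from verbatim, descriptive codes to more abstract, analytical ones, can be sketched as a simple recoding pass. The codes and segments below are invented for illustration.

```python
# Hypothetical refinement step: collapse several verbatim, descriptive
# codes into one more abstract, analytical code.
merge_map = {
    "kept lights on": "fear",
    "avoided walking alone": "fear",
    "stopped trusting people": "distrust",
}

coded = [
    ("kept lights on", "I kept the lights on for months."),
    ("stopped trusting people", "You stop believing what people say."),
    ("avoided walking alone", "I never walked home alone after that."),
]

# Recode: replace each descriptive code with its analytical parent,
# keeping any code that has no entry in the merge map unchanged.
refined = [(merge_map.get(code, code), segment) for code, segment in coded]
print(sorted({code for code, _ in refined}))
# ['distrust', 'fear']
```

Keeping the merge map as an explicit record also documents the thinking behind the refinement, which is exactly what the notes recommended above are for.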

How data analysis should be approached

When undertaking data analysis, you need to be prepared to be led down novel and unexpected paths, to be open to new interpretations and to be fascinated. Potential ideas can emerge from any quarter – from your reading, your knowledge of the field, engagements with your data, conversations with colleagues or people whom you interview. You need to be open-minded enough to change your preconceived ideas and to let the information change your mind. You also need to listen to and value your intuition. Most importantly, you need to develop the ability to come to logical conclusions from the information at your disposal.

Do not try to twist the conclusions that you draw from the data to suit your opinion or preferences. Your computer allows you to return to what you previously wrote and change it. This will often be necessary if you are to develop scientifically founded new knowledge. Your conclusions and ideas might change repeatedly as you collect new information.

Do not be frustrated if, as you progress with your research, you find that the codes on which you decided initially no longer work. Again, you can easily change your codes on computer or cards. You must do this in the interests of conducting scientific research. You will typically allocate primary codes to the issues that you regard as important and sub-codes to less important data or further elaborations on your main arguments. You can change this and change your coding structure if necessary.

The process of coding requires skill, confidence and a measure of diligence. Pre-coding is advisable, but you still need to accept that the codes that you decided upon in advance will probably change as you work through the data that you collect.

At some point you need to start engaging in a more systematic style of coding. You can work on paper when starting with the coding, although there is no reason why you can’t start to work on computer from the word go, seeing that you can change your codes on computer at any time with relative ease. Besides, you can make backups of your coding on computer. This can be valuable if, at some stage, you discover that your initial or earlier codes work better than the new ones after all. You can then return to a previous backup without having to redo all the work that you already did.

You need to understand how the computer software that you are using works and what it can provide you with. Different software packages have different purposes and ways in which codes can be used. It serves no purpose to claim to have used a particular software package if you do not really understand how it works, how you should use it and what it can offer you. Previous students will not always be able to teach you the software, because most packages are revised all the time. Rather do a formal course on the latest version of the software that you wish to use.

Summary

Most data analysis methods share a common focus.

Data analysis is simplified by coding the data and making use of dedicated computer software.

You can also use coding with simple data analysis tools, for example Microsoft Excel or a card system.

The fundamentals of data analysis apply to qualitative and quantitative research.

You should code data by focusing on the purpose of your research and the research problem statement, question or hypothesis.

The following are the fundamentals of data analysis through coding. Always:

  1. Start by engaging in close, detailed reading of a sample of your data.
  2. Read and systematically code your collection of data.
  3. Reflect on why you have done what you have done.
  4. Review and refine your codes and coding practices.
  5. Focus on what you feel are the key codes and the relationship between them.
  6. Make notes of the thinking behind why you have done what you have done.
  7. Return to the field with the knowledge you have already gained in mind, and let this knowledge modify, guide or shape the data you want to collect next.
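As an illustration only (not drawn from the article; all code names and excerpts below are invented), the card-system idea can be sketched as a small data structure: primary codes that hold sub-codes, each filing the coded excerpts, with the option of revising the coding structure later without losing earlier work.

```python
# Illustrative sketch of a simple qualitative codebook in plain Python.
# Primary codes map to sub-codes; sub-codes map to lists of excerpts.
from collections import defaultdict

codebook = defaultdict(lambda: defaultdict(list))

def code_excerpt(primary, sub, excerpt):
    """File an excerpt under a primary code and a sub-code."""
    codebook[primary][sub].append(excerpt)

def rename_code(old_primary, new_primary):
    """Revise the coding structure without redoing earlier coding."""
    if old_primary in codebook:
        codebook[new_primary] = codebook.pop(old_primary)

# Initial pre-coding of two invented interview fragments.
code_excerpt("identity", "belonging", "The gang felt like a family to me.")
code_excerpt("identity", "status", "People respected me on the street.")

# Later in the analysis the code proves too narrow and is renamed.
rename_code("identity", "lived_experience")

print(sorted(codebook["lived_experience"]))  # ['belonging', 'status']
```

Dedicated qualitative analysis software does far more than this, but the principle is the same: codes are labels attached to excerpts, and both the labels and the structure can be revised as the analysis matures.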

In addition to the fundamentals, you should also adhere to the following requirements for the analysis and coding of data:

  1. Be flexible and keep an open mind.
  2. Learn how to come to objective and logical conclusions from the data that you analyse.
  3. Change your codes at any stage during your research if it becomes necessary.
  4. Develop your data analysis coding skills, confidence and diligence.
  5. Acquire a good understanding of the computer software that you will use for data analysis.
  6. Work systematically.

Close

You will use the fundamentals of data analysis and coding with most data analysis methods.

Almost all recent dedicated data analysis software uses coding.

I will discuss the following analysis methods in my next seven or eight videos:

  1. Analytical induction.
  2. Biographical analysis.
  3. Comparative analysis.
  4. Content analysis.
  5. Conversation and discourse analysis.
  6. Elementary analysis.
  7. Ethnographic analysis.
  8. Inductive thematic analysis (ITA).
  9. Narrative analysis.
  10. Retrospective analysis.
  11. Schema analysis.
  12. Situational analysis.
  13. Textual analysis.
  14. Thematic analysis.

ARTICLE 86: Research Methods for Ph. D. and Master’s Degree Studies: Preparing for Data Collection

Written by Dr. Hannes Nel

How long do you think it will take you to complete a thesis or dissertation?

You probably know the old saying that work expands to fill the time available for its completion.

That is also true for academic research.

I guess the average student will need one or two years for a thesis towards a master’s degree and two to ten years for a dissertation towards a Ph. D.

This is not a lot of time.

Believe me, you will need every second that you can spare to complete the work in the available time.

And you will need to plan your research project as accurately as you possibly can.

I discuss how to plan and organise data collection for research in this article.

Organising data collection. Once you have decided on the research approach and data collection instruments you will use, you should be able to draw up a draft schedule for your research. This will relate the time you have available in which to carry out the research – a given number of hours, days, weeks or perhaps years – to your other responsibilities and commitments. You can then slot in the various research activities you will need to engage in at times when you expect to be both free and in the mood to work on your research.

Many universities require students to report on their progress at specific stages. This is not in line with the principles of adult learning, although it is often necessary. Even if such due dates are not set by a study leader, it is still a good idea to draft a schedule for your research work. You know that you will probably have only limited time in which to do the work, so sketch out what you will be doing, month by month or week by week, in order to achieve your objectives. Remember to leave yourself some flexibility and some ‘spare time’ for when things do not go exactly as planned. This means that you need to do some contingency planning as well.

Just because you have drawn up a schedule, however, does not mean that you will go to jail if you do not keep strictly to it. It is difficult, even with experience, to estimate precisely the time that different research activities will take. Some will take longer than expected, whereas others may need less time. Some will be abandoned, whereas other unanticipated activities will demand attention. Revisit your schedule from time to time, and make revisions, to allow for such changes and to keep yourself on track.

One thing you must avoid is putting off work until the last minute. If you drag your feet, you will not be able to submit on the due date. Rather try to work ahead of your schedule so that you will have some spare time, should you need it because of unforeseen eventualities.

There are several ways of scheduling your research time. Project management software offers sophisticated ways in which to illustrate your research project diagrammatically or graphically.

Such charts suggest a simplified, rational view of research. They are useful in conveying the overlap and concurrence of the tasks to be carried out and can serve as a guide to monitor your progress. In practice there will be numerous minor changes to your plans as set out, and perhaps some major ones as well.
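As a rough sketch of the scheduling idea (the task names, durations and the 20% contingency buffer below are invented assumptions, not figures from the article), a draft schedule with spare time built in could be laid out like this:

```python
# Illustrative sketch: a draft research schedule with a contingency buffer.
from datetime import date, timedelta

tasks = [
    ("Literature review", 8),   # estimated duration in weeks (invented)
    ("Pilot instruments", 2),
    ("Data collection", 12),
    ("Data analysis", 10),
    ("Writing up", 8),
]

def build_schedule(start, tasks, buffer=0.2):
    """Return (task, start_date, end_date) tuples, each estimate
    padded with a contingency buffer (20% by default)."""
    schedule, current = [], start
    for name, weeks in tasks:
        padded = timedelta(days=round(weeks * 7 * (1 + buffer)))
        schedule.append((name, current, current + padded))
        current += padded
    return schedule

for name, begins, ends in build_schedule(date(2025, 1, 6), tasks):
    print(f"{name:20s} {begins} to {ends}")
```

The point of the buffer is exactly the contingency planning described above: padding each estimate up front is easier than renegotiating a due date later.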

Piloting instruments for the collection of data. It is advisable to pilot your data collection instruments before you use them on your actual target group. In this manner you can save a lot of time and money, because it would be a catastrophe if, for example, you were to send out 10 000 questionnaires, receive 2 000 back, and then find that you cannot use any of them because of some simple technical error.

Rather carry out a couple of interviews with friends or colleagues in advance or have them fill out some questionnaires or observe some organisational activities – or whatever else you plan on using to gather data with. You will learn a great deal from the activity, for example the amount of time that collecting data can take. You will also know if your instruments work or not. You need to pilot your instruments early enough so that you will still have time to change them or even your data collecting strategy if necessary.

Do not underestimate the value of pilot research. Things have a nasty habit of turning out differently from how you expected them to, even if you have done them many times before. If you do not pilot your data collection instruments and procedure first, you will probably find that your initial period of data collection turns into a pilot study in any case.

And if you planned and conducted your data collection well, any surprises that you do get are more likely to be pleasant ones.

Summary

You can use the following steps to plan and organise your data collection:

  1. Decide which research approach and method or methods you will use.
  2. Choose the data collection methods and instruments that you will use.
  3. Decide how much time you will need to conduct your research, if you have not decided already.
  4. Fit your research activities into the time that you have available for research.
  5. Draft a schedule for your research work.
  6. Do contingency planning if you have not done so already.
  7. Allow for some spare time in your schedule.
  8. Allow for time to meet with your study leader.
  9. Pilot the data collection instruments that you will use.

Close

Do not put off any research work until the last minute.

Pilot the data collection instruments early enough so that you will have time to correct and improve them.

Keep your study leader informed about your progress.

Enjoy your studies.


ARTICLE 85: Research Methods for Ph. D. and Master’s Degree Studies: Data Collection Methods: Written Documents

Written by Dr. Hannes Nel

I guess many of us were conditioned from the day we started school to regard what is written on paper or what we can see on our computer monitors or cell phones as knowledge.

And as we grew older, we were taught to write things.

Probably most teachers and, later, lecturers teach pupils and students that there are other sources of data apart from written documents.

But somehow, when we need to write a report of any kind, we fall back on written documents as our main, sometimes only, sources of data.

It is a good starting point.

Just keep in mind that what is written on paper or electronically is already old.

And the world is dynamic.

And we must be able to adapt to changes in the environment rapidly.

And especially on doctoral level, we need to develop new knowledge.

And existing knowledge is sometimes an obstacle in the way of progress.

I discuss the use of written documents as a data source in this article.

Almost all research projects involve, to a greater or lesser extent, the use and analysis of documents. You are expected to read, understand and critically analyse the writing of others, whether fellow researchers, practitioners or policymakers. For some research projects the focus of data collection is wholly or almost entirely, on documents of various kinds.

Documents are records of past events that are written or printed; they may be anecdotal notes, letters, diaries, reports and, of course, books. Official documents include internal papers, communications to various publics, student and personal files, programme descriptions, and institutional statistical data.

In interactive data collection techniques, you can find these documents at the site or a participant may offer to share these personal records with you. Documents are the most important data source in concept analysis and historical studies. Documents are usually catalogued and preserved in archives, manuscript collection repositories, or libraries. Documents might, for example:

  1. Be library-based, aimed at producing a critical synopsis of an existing area of research writing.
  2. Be computer-based, consisting largely of the analysis of previously collected data sets.
  3. Be work-based, drawing on material produced within an organisation.
  4. Have a policy focus, examining material relevant to a particular set of policy decisions.
  5. Have a historical orientation, making use of available archival and other surviving documentary evidence.

Using documents can be a relatively unobtrusive form of research, one which does not necessarily require you to approach respondents directly. Reading documents is usually supplemented by other data collection methods. You will probably make considerable use of secondary data, i.e. data which has already been collected, and possibly also analysed, by somebody else. Technically speaking most documents are secondary data, the most common forms of which are official statistics collected by governments and government agencies.

One needs to be cautious when analysing secondary data. The questions you need to ask of any existing document are:

  1. What were the conditions of its production? For example, why, and when, was the document written and for whom?
  2. If you are using statistical data sets, have the variables changed over time?
  3. Have the indicators of statistical data sets that you used to measure variables changed? For example, the measurement of unemployment has undergone many changes in the past two decades, and even today different research agencies use different definitions of unemployment as well as different statistical models, so that they produce different figures for what should be the same thing. This makes comparison of figures difficult and sometimes unrealistic.

You will invariably make use of written documents, including books and your own notes taken during interviews and fieldwork (observation), as sources of information. You will already have used such documents when you prepared your study proposal. Another important data collection activity where you will extensively use written documents will be when you do your literature review. 

The notes taken during interviews need to be a true and accurate reflection of what has been said by the interviewees. You need to distinguish between capturing the exact words and paraphrasing. This is important for showing proof that you did not commit plagiarism and that the way you used the data is accurate, valid and consistent.

Notes that you take can become rather voluminous. Even so, you should pay attention to taking accurate notes. You should also note the names of the people whom you interviewed or spoke to, who said what during focus group meetings, and the times, dates and places where the interviews or discussions took place.

Treat the opportunity to review any material as if it were your only opportunity to access and read the documents. By doing so, you will reduce the frustration created by having to return to the material later. You will also minimise inconveniencing any people who may have had to retrieve the material for you.

People whom you interviewed will not always be available or willing to repeat the interview, should you lose your notes. It is always a good idea to make duplicates of written documents. This, of course, is easier if you can simply make a backup of an electronic version of the documents. Also transcribe a voice recording as soon as you have an opportunity to do so. Check that the recorder is working before you conduct an interview or focus group.

Summary

Almost all research projects use written documents as a source of data.

A document can be anything that is written or printed.

Documents can be found in libraries, personal collections, bookshops, archives and many other places.

The contents of documents are often supplemented by other sources of data.

Most documents are secondary data.

You must corroborate written data.

Be careful not to commit plagiarism when using documents as a source of data.

Make sure that your notes and other documents that you prepare are accurate and relevant to your research.

It is advisable to store your notes electronically or to make printed or written duplicates.

Close

The requirements for the collection and use of documents as sources of data are in many ways the same as for most data collection methods.

  1. All data should be corroborated, regardless of how it was collected.
  2. All data that you collect and use must be relevant to your research.
  3. You must acknowledge the origin of data that you use in your thesis or dissertation.
  4. You must be able to provide evidence of the data that you refer to in your thesis or dissertation, should your study leader, an external evaluator or any other stakeholder question it.

Enjoy your studies.

Thank you.


ARTICLE 84: Research Methods for Ph. D. and Master’s Degree Studies: Data Collection Methods: Online Data Sources Part 2 of 2 Parts

Written by Dr. Hannes Nel

Did you notice how many experts of all shapes and sizes post videos on the internet in which they predict the end of the world?

Some of them even give a date and time for the ultimate catastrophe.

And when it does not happen, they simply shift the date.

Do you believe them?

And if you do, what criteria should they meet for you to regard them as trustworthy prophets of doom?

I discuss the requirements for accuracy and authenticity of data in this video.

The internet can be a valuable source of information. However, people doing research need to be careful when using information obtained from internet sources. Any individual can upload information to the internet, and not all website hosts are equally responsible when it comes to accepting contributions. All information that you use in your research should be corroborated. This is an important consideration because electronic documents are not always quality assured, and documents are sometimes distributed electronically because publishers are not interested in them.

The accuracy and authenticity of information can be evaluated by checking the following:

  1. Checking the author.
  2. Checking the purpose.
  3. Checking for objectivity.
  4. Checking for accuracy.
  5. Checking for reliability and credibility.
  6. Checking for coverage.
  7. Checking for currency.
  8. Checking links.

Checking the author. You can check personal homepages on the World Wide Web, campus directory entries and information retrieved through search engines to find relevant information about an author. You can also check print sources in the library reference area and other biographical sources, for information such as the following:

  1. Is the name of the author/creator on the page?
  2. Is the page signed?
  3. Are his/her relevant profile and credentials listed, including occupation, years of experience, position or education? Stated differently, is the author suitably qualified to write on the given topic?
  4. Is there contact information, such as an email or web site address on the page?
  5. If there is a link to a web site address, is it for an individual or for an organisation?
  6. What is the relationship of the author with the organisation, if the address is that of an organisation, for example a university or consulting company?

Checking the purpose. It is often easier to judge the contents of a source if you know for what purpose the article was written. Also check for whom the article is intended.

Checking for objectivity. Objectivity is a prerequisite of any research. Even so, political and social issues create strong temptations to misuse and misinterpret information to fit a particular agenda. The following questions can be used to check if your interpretation of data is objective or not:

  1. Is there any indication if the information is claimed to be factual, just the opinion of an individual, or the propaganda of a body with ulterior motives, for example a political body, radical or religious group?
  2. Judging from the formulation and tone of the document, does the author’s point of view appear to be objective and impartial or not?
  3. Is the language and tone in which the document is written free of or loaded with emotional words and bias?
  4. Is the author affiliated with an organisation, the values and objectives of which might render the information biased and subjective?
  5. Is the document free of or cluttered with advertisements or sponsored links?

Checking for accuracy. Accuracy can mean different things to different people, depending on which paradigmatic approach you follow. However, to be of value for research purposes, data and the interpretation of data need to be valid, authentic, logical, and free of deliberate, accidental or coincidental misrepresentation. The following questions can be used to check the accuracy of data and your research findings:

  1. Are the sources of factual information clearly and accurately acknowledged so that the information can be verified?
  2. Is it clear who has the ultimate responsibility for the accuracy of the content of the material and is the profile of this individual or group of experts known?
  3. Is the information corroborated by other sources or can you verify the information from your own knowledge?
  4. Has the information been reviewed or refereed by an individual or group of experts with the necessary knowledge to conduct a professional evaluation?
  5. Has the document been written on an acceptable academic level and is the information free of grammatical, spelling, and typographical errors?

Checking for reliability and credibility. Reliability and credibility go hand in hand with accuracy. Reliable information is information that is consistently the same over time and across at least the target group for the research, although it should ideally be the same in as wide a context as possible. Credibility is dependent on the authenticity, accuracy and trustworthiness of the data and research findings. You, as the researcher, will be responsible for credibility, which means that you will need to conduct the research in an accountable, honest and ethical manner. Reliability and credibility can be checked by asking and answering the following questions: 

  1. Why should anyone believe information from this source?
  2. Judging from the content and layout, does the information appear to be valid and well-researched, or is it written in an unstructured manner and inaccurately supported or not supported at all by evidence of authenticity?
  3. Has the document been published by a publisher with a good reputation for only publishing quality content that has been checked for accuracy, authenticity and objectivity?
  4. Are quotations and other strong statements or claims backed by sources that you could check through other means, and are the sources acknowledged where the statements or claims are made?
  5. Which university, college, research experts or scientists support or endorse the information?
  6. Do the university, college, research experts or scientists who support or endorse the information have a good reputation as being objective and an authority in the field of the document?
  7. Is the electronic material also available in hard (book or magazine) format?

Checking for coverage. Coverage refers to the notion of saturation. Your research findings should not be biased or rendered inaccurate because you consulted too few, or the wrong, sources of information. Although the extent of research is always limited by factors such as capacity, available time, funds, and the co-operation of members of the target group, you should at least achieve the purpose of your research. Coverage can be checked by asking and answering the following questions:

  1. Is the information relevant to the topic of your research?
  2. Does the document have information that is not available elsewhere?
  3. How in-depth is the material?

Checking for currency. The most important factor determining currency is, of course, recency. The more recent the information that you collected, the more accurate, valid and relevant your research analysis and findings will be. Currency can be checked by asking and answering the following questions:

  1. Is the document reviewed regularly by someone who has the relevant knowledge and skills?
  2. Does the document or posting show when it was originally written, when and how often it was reviewed and when it was last reviewed?

Checking links. Each web site should be checked independently because the quality of web pages linked to the original web page may vary. This can be done by asking and answering the following questions:

  1. Are the links related to the topic of the document and are the web sites that are linked articulated to the purpose of the site and the content of the document?
  2. Are the links still current, or have some or all of them been deactivated or simply abandoned?
  3. What kinds of other sources are linked and are they in any way related to the contents and purpose of the document in which you are interested?
  4. Are the links maintained, evaluated and reviewed and do they show growth in terms of traffic volume, quality of content and the user-friendliness and professional and attractive layout of the sites?
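As a small, illustrative aid (the HTML snippet, function and class names below are invented for this sketch), the first step of such a link check, extracting a page's links and flagging those that are not absolute web addresses, might look like this:

```python
# Illustrative sketch: pull the links out of a saved copy of a web page
# and separate those that can be checked as http(s) URLs from the rest.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkExtractor(HTMLParser):
    """Collect the href values of all anchor tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def classify_links(html):
    """Split links into checkable http(s) URLs and everything else."""
    parser = LinkExtractor()
    parser.feed(html)
    ok, suspect = [], []
    for link in parser.links:
        scheme = urlparse(link).scheme
        (ok if scheme in ("http", "https") else suspect).append(link)
    return ok, suspect

page = '<a href="https://example.org/paper">paper</a> <a href="#top">top</a>'
ok, suspect = classify_links(page)
print(ok)       # ['https://example.org/paper']
print(suspect)  # ['#top']
```

Whether each extracted link is still live, maintained and relevant to the document's purpose must of course still be judged by visiting it, as the questions above describe.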

Summary

Anybody can post data on the internet. Therefore, you need to be careful when using such data in your research.

The accuracy and authenticity of information can be evaluated by checking the following:

  1. The author. The author should be a known and reputable authority in the field of study. Also, the author should be acknowledged in the data source that you consult.
  2. The purpose of the data source. The data source should be relevant to your research topic.
  3. Objectivity. Be wary of articles, videos and other data sources on the internet that were posted with ulterior, possibly damaging motives in mind.
  4. Accuracy. Data must be valid, authentic, free of misinterpretation and logical.
  5. Reliability and credibility. Data should be consistently the same over time and context.
  6. Coverage. Data should answer at least part of your research question and add value to your thesis or dissertation. On doctoral level the data should lead to new and improved knowledge.
  7. Currency. Data must still be relevant to the field of your research.
  8. Links. Quality data will mostly be shared and supported by more than one authority in the field of study. The more academic web pages deal with the topic and agree with the arguments, the more likely the data is to be valid, authentic, accurate and recent.

Close

You will ultimately be responsible for the quality of data that you collect and use in your thesis or dissertation.

You will also be accountable for the way you use the data.

It serves no purpose checking the accuracy and authenticity of the data that you collect if you bend the meaning of the original author to serve your purpose.

Or if you use accurate data to achieve ulterior, damaging motives.

As with all data that you collect and use, ethics is a critically important requirement for your research.

Enjoy your studies.

Thank you.
