ARTICLE 97: Research Methods for Ph. D. and Master’s Degree Studies: The Layout of the Thesis or Dissertation Part 1 of 9 Parts

Written by Dr. J.P. Nel

This article is an introduction to eight more articles on how to structure your thesis or dissertation.

I already pointed out in my initial articles that a thesis for master’s degree studies and a dissertation for doctoral studies are not the same.

Even so, there are enough similarities so that we can discuss them together.

Besides, it is a good idea to use the thesis that you write for your master’s degree as a learning opportunity for when you embark on doctoral studies.

And most universities will not object if you approach and write your thesis as you would a dissertation.

I will point out salient differences between a thesis and a dissertation.

Research without writing is of little purpose. There are, of course, other ways of communicating your research findings, most notably through oral presentation, but putting them on paper remains of paramount importance. The thesis or dissertation remains the major means by which you should communicate your findings.

It is something of a paradox, therefore, that many researchers are reluctant to commit their ideas to paper. Then again, not all people like writing and some might claim that it requires writing talent. For those who enjoy writing, this can be the most enjoyable part of the research process, because when compiling your research findings, you need to take what you wrote in the body of your document and create something new from it. Even though you have your research data to fall back on, you still need to think creatively. This takes some courage, hard work and lots of self-discipline.

It is always important to do immaculate and professional research. However, your biggest challenge is to develop an interesting and well-structured thesis or dissertation from the research data. Any research paper is based upon a four-step process. Firstly, you need to gather lots of general, though relevant, information. Secondly, you need to evaluate, analyse and condense the information into what is specifically relevant to a hypothesis, problem statement or problem question. Thirdly, you need to come to conclusions about the information that you analysed, and formulate findings based on your conclusions. Finally, your thesis or dissertation should again become more general as you try to apply your findings to the world in general or at least more widely than the target group for the research.

Different disciplines will use slightly different thesis or dissertation structures, so the structure described in the following eight articles is based on some basic principles. The steps given here are the building blocks of constructing a good thesis or dissertation.

A thesis or dissertation should clearly and thoroughly indicate what you have done to solve a problem that you identified. In addition, it should be factual, logical and readable. A good thesis or dissertation should be comprehensive and precise. Most importantly, though, it must be professionally researched.

Some of the content of your research proposal will not have changed and should be included in your dissertation, as should the information that did change, but in its improved format or content. Your problem statement, question or hypothesis, for example, might have changed. The list of literature and other sources of information that you consulted will have grown and should include many more sources than the original.

You should ensure that the time set aside for writing sessions is sufficient, as constantly restarting and trying to find out where you left off when you last worked on the thesis wastes time and interferes with your thinking processes. If you are employed full-time, you should write after hours for at least one hour per day, five days a week. Even then you will need to catch up by working over weekends, long weekends and holidays.

It is when writing a thesis or dissertation that you will really come to appreciate your desktop or laptop computer. When writing a thesis or dissertation, you should:

  1. Manage your time well.
  2. Make electronic backups of your work as often as possible.
  3. Plan each chapter in detail and structure your thesis or dissertation before you start writing. The layout of your thesis or dissertation may change over your period of study. Even so, good preparation is still important.
  4. First write your draft, then edit it critically and eliminate unnecessary material. Do not expect to get it right the first time around. Review is part of postgraduate studies.
  5. Motivate the necessity of the study and explain the goal clearly.
  6. Give your study leader and anybody else who might read your thesis or dissertation a clear understanding of the research problem. The implications should be explained in such a way that everyone reading the thesis or dissertation has the same orientation towards the problem.
  7. Provide sufficient theoretical background to base the study on.
  8. Clearly describe the data collection methods and aids used.
  9. Provide sufficient and accurate data and indicate exactly how the data was used to solve the research problem.
  10. Conform to the university’s requirements for typing, printing and binding, and also meet the requirements set out formally in the learning institution’s postgraduate policy and procedures.

We have come full circle from discussing the research process, all the concepts that you should apply and the tools that are available, to unpacking the research in the form of a thesis or dissertation. The following eight articles, therefore, return to the beginning of the research process and deal with the entire process, the only difference being that now we focus on putting the thesis or dissertation on paper.

Summary

The thesis or dissertation is the major means by which to communicate research findings.

Writing a thesis or dissertation requires creative thinking, some courage, hard work and lots of self-discipline.

You must find out in advance what the university’s requirements, rules, regulations and procedures for master’s or doctoral studies are and abide by them.

And you must manage time well.

Conducting research and writing a thesis or dissertation mostly consist of four main steps:

  1. Gather information.
  2. Evaluate, analyse and condense the information.
  3. Come to conclusions and findings.
  4. Apply your findings in practice.

The requirements for a thesis or dissertation are:

  1. It must clearly and thoroughly indicate what you have done to solve a problem.
  2. It must be comprehensive and precise.
  3. You must research your topic professionally.
  4. In the case of doctoral studies your dissertation must align with your initial study proposal.
  5. You should continually make electronic backups of your work.
  6. You must plan and structure your thesis or dissertation before you start writing.
  7. You should review your work regularly.
  8. You should do enough literature study.
  9. You must clearly motivate the importance and value of your research.
  10. You must explain the research problem.
  11. You must clearly describe how you will collect and analyse data.
  12. You must show how you use the data that you collect in your thesis or dissertation.

Close

The eight articles following on this one are critically important for your further studies.

You can use them to guide your research process.

You can also use them to do a self-evaluation of your work before you submit the final manuscript for your thesis or dissertation.

Enjoy your studies.

Thank you.

Continue Reading

ARTICLE 96: Research Methods for Ph. D. and Master’s Degree Studies: Methods for Organising and Analysing Data: Part 2 of 2 Parts

Written by Dr. Hannes Nel

Research has shown that most people look for excuses to fail rather than for ways in which to achieve success.

Nobody throws in the towel without rationalising why their decision is justified.

And that is the difference between a winner and a loser.

Success always requires perseverance.

The most important decision that you must make before embarking on master’s degree or doctoral studies is that you will succeed.

Do not even think of failure as an option.

I discuss memoing and reflection on the analysis process in this article.

Memoing. Memos are an extremely versatile tool that can be used for many different purposes. Memoing refers to any writing that you do in relation to the research other than your field notes, transcriptions or coding. A memo can be a brief comment that you type or write in the margin of your notes on an interview, notes on observations that you made during field work, your own impressions or ideas inspired by field work or literature study, an essay on your analysis of data, provisional conclusions and even possible findings. The basic idea behind memoing is to get ideas, observations and impressions down on paper to serve as the foundation for reflection and analytical insight, and to capture spur-of-the-moment ideas. Memos can also be coded so that they are saved as part of the other data that you collected for further analysis.

Memos capture your thoughts on the main information that you recorded and can be most useful for creating new knowledge and findings. In dedicated computer software that supports them, memos are similar to codes, but usually contain longer passages of text. They differ from quotations in that quotations are extracts from primary documents, while memos represent your personal observations and impressions.

Although mostly recorded independently, memos may refer to other memos, quotations and codes. They can be grouped according to type (methodological, theoretical, descriptive, etc.), which is helpful in organising and sorting them. Memos may also be assigned to primary documents so that they can be analysed together with other associated coded data.

Memos are one of the most important techniques you have for recording and developing your ideas. You should, therefore, think of memos as a way of recording or presenting an understanding you have already reached. Memos should include reflections and conclusions on your reading and ideas as well as your fieldwork. They can be analytical, conceptual, theoretical or philosophical in nature. Memos can be written on almost anything that might have a positive impact on your research findings, including methodological issues, ethics, personal reactions, sudden understanding of previously complex concepts, misconceptions, etc. Memos should, therefore, be written in narrative format, including logical reasoning about the elements of your research. 

Writing memos by means of dedicated computer software is an important task in every phase of the qualitative research process. The ideas captured in memos are often the “pieces of the puzzle” that are later put together when you make conclusions and compile findings. Memos might be rather short in the beginning and become more elaborate as you gain more clarity on your arguments and the nature of the data or observations that you are investigating.

Memos can stand alone, in which case they explain data that deals with a particular and important issue relevant to the purpose of the research. Memos can also be linked to other memos, quotations or codes, in which case the linked objects should refer to associated data and arguments to form a new, reconstructed or deconstructed narrative. Such associated memos, quotations and codes can contain methodological notes; they can be used as a bulletin board to exchange information between team members; they can be used to write notes about the analytical process or to keep a journal of to-dos; and conclusions and findings can be deduced from them. Memos may also serve as a repository for symbols, text templates and embedded objects (photos, figures, diagrams, graphs, etc.) that you may want to insert into primary documents or other memos.

The difference between memos and codes. A code can be just one word or a heading, forming a succinct, dense descriptor for a concept or argument emerging when you study data closely with the intent of identifying data elements relative to the purpose and topic of your research. Complex findings can be reduced to markers of important and relevant data.

A memo is normally longer than a code. A memo is a record of the cognitive process that you go through when collecting data through observation, literature study, interviewing, etc. Words and short sections of a memo can be coded. Like codes, memos have short and concise names. These names, or titles, are used for displaying memos in browsers and help you find specific memos.

The similarity and difference between memos and comments. The best way in which to compare memos and comments is probably to compare them with codes. Codes should be seen as “headings” for concepts. Memos and comments both refer to lengthy texts and both are generated by you as the researcher.

However, comments belong with just one entity or argument. You can, for example, comment on a particular primary data source, such as a book, a report, the minutes of a focus group meeting, etc. Memos, on the other hand, can be associated with more than one object or source of information and can be used for a variety of purposes, for example to discuss, analyse and process theoretical data, to describe methods, to comment, to inform, etc. Memos, furthermore, can contribute to your collection of data in more than one way, for example as theoretical data, philosophical data or descriptions of methods. Memos can be free-standing, while comments must always be linked to other data.
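To make these distinctions concrete, here is a minimal sketch of such a data model in Python. It is not the design of any particular qualitative-analysis package; the class names, fields and sample values are my own assumptions for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Quotation:
    """An extract from a primary document."""
    document: str
    text: str

@dataclass
class Code:
    """A short, concise marker ('heading') for a concept."""
    name: str

@dataclass
class Comment:
    """Belongs to exactly one entity, e.g. one primary document."""
    target: str
    text: str

@dataclass
class Memo:
    """Free-standing, longer, and may link to many objects."""
    title: str                       # short name used for browsing
    text: str                        # longer narrative passage
    memo_type: str = "descriptive"   # e.g. methodological, theoretical
    links: List[object] = field(default_factory=list)

poverty = Code("poverty")
q1 = Quotation("interview_03.txt", "We often went to bed hungry.")
memo = Memo("early impressions",
            "Participants repeatedly link unemployment to food insecurity.",
            memo_type="theoretical")
memo.links.extend([poverty, q1])   # a memo may refer to codes and quotations
print(memo.title, "->", [type(x).__name__ for x in memo.links])
```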

Reflection. The last step in data analysis is reflection. Reflection has to do with the ability to stand back from and think carefully about what you have done or are doing. The following questions will help you develop your ability to reflect on your analysis:

  1. What was your role in the research?
  2. Did you feel comfortable or uncomfortable? Why?
  3. What action did you take? How did you and others react?
  4. Was it appropriate? How could you have improved the situation for yourself and others?
  5. What could you change in the future?
  6. Do you feel as if you have learnt anything new about yourself or your research?
  7. Has it changed your way of thinking in any way?
  8. What knowledge, from theories, practices and other aspects of your own and others’ research, can you apply to this situation?
  9. What broader issues – for example ethical, political or social – arise from this situation?
  10. Have you recorded your thoughts in your research diary?

Summary

Memos are versatile tools that you can use in the analysis of data. You can use memos to do the following:

  1. Integrate data in your thesis or dissertation.
  2. Consolidate your impressions and ideas into provisional conclusions and possible findings.
  3. Serve as the foundation for reflection and analytical insight, and capture spur-of-the-moment ideas.
  4. Store interrelated ideas as codes.
  5. Capture your thoughts on the main information that you recorded.
  6. Develop new knowledge and findings.

Memos:

  1. May refer to other memos, quotations and codes.
  2. Can be grouped according to type.
  3. May be assigned to primary documents.
  4. Are a way of recording or presenting an understanding that you have already reached.
  5. Should include reflections and conclusions on your reading, ideas and fieldwork.
  6. Can be analytical, conceptual, theoretical or philosophical.
  7. Can be written on almost anything that might add value to your research.
  8. Should be written in a narrative format.
  9. May serve as a repository for symbols, text templates and embedded objects.

Memos are similar to codes, but usually contain longer passages of text.

Memos differ from comments in that comments belong with just one entity or argument, while memos can be associated with more than one object or source of information.

Also, memos can be free-standing while comments must always be linked to other data.

The last step in data analysis is reflection.

Close

With this article we cross the bridge from data analysis to the layout of the thesis or dissertation.

Once you know how to structure a thesis or dissertation, you should be able to write and submit it.

There is one more step before you submit your thesis or dissertation for assessment, and that is to review your work.

The people who successfully completed a thesis or dissertation in the past are pretty much the same as you.

They are intelligent, creative and willing to work hard.

But they are not superhuman beings.

And there is no reason why you cannot achieve what they did.

Enjoy your studies.

Thank you.

Continue Reading

ARTICLE 95: Research Methods for Ph. D. and Master’s Degree Studies: Methods for Organising and Analysing Data Part 1 of 2 Parts

Written by Dr. Hannes Nel

Data needs to be organised before it can be analysed.

Depending on whether a qualitative or quantitative approach is followed, the data needs to be arranged in a logical sequence or quantified.

This can be done by quantifying, sequencing, coding or memoing the data.

I discuss quantifying, sequencing and coding data in this article.

I will discuss memoing data in my second video on methods for organising and analysing data.

Quantifying data. Most data analysis today is conducted with computers, ranging from large mainframe computers to small personal laptops. Many computer programs are dedicated to analysing social science data, and it would be worth your while to obtain and learn to use such software if you need to write a thesis or dissertation, even if you do not exclusively use quantitative research methodology, because you might need to interpret some statistics or you might use some quantitative methods to enhance, support or corroborate your qualitative findings. However, you will probably not need much more than office software if your research is largely qualitative.

Almost all research software requires some form of coding. This can differ substantially from one software program to the next, so you will need to find out exactly how it works even before you purchase the software. Your study leader will probably know which software will be the most suitable for your research and give you advice on this. You will only quantify data if statistical analysis is necessary, so do not do this unless you know that you will need it in your thesis or dissertation.

Many people are intimidated by empirical research because they feel uncomfortable with mathematics and statistics. And indeed, many research reports are filled with unspecified computations. The role of statistics in research is quite important, but unless you write an assignment or thesis on statistics or mathematics you will not be assessed on your statistical or mathematical proficiency. That is why most universities offer statistical services. Several private and public institutions also offer such services, so use them. There is also nothing wrong with purchasing dedicated software to do your statistical analysis with, although it might be necessary to do a course on the software before you will be able to use it properly.

Sequencing the data. Many researchers are of the opinion that organising the data in a specific sequence offers the clearest available picture of the logic of causal analysis in research. This is called the elaboration model. This method, which typically makes use of contingency tables, portrays the logical process of scientific analysis.

When collecting material for interpretive analysis, you experience events, or the things people say, in a linear, chronological order. When you then immerse yourself in field notes or transcripts, the material is again viewed in a linear sequence. This sequence can be broken down by inducing themes and coding concepts so that events or remarks that were far away from each other in a document, or perhaps even in different documents, are brought close together. This gives you a fresh view of the data and allows you to carefully compare sections of text that appear to belong together. At this stage, you are likely to find that there are all sorts of ways in which extracts that you grouped together under a single theme differ, or that all kinds of sub-issues and themes come to light.

Exploring themes more closely in this way is called elaboration. The purpose is to capture the finer nuances of meaning not captured by your original, possibly crude, coding system. This is also an opportunity to revise the coding system – either in small ways or drastically.  If you use software it might even be necessary to start your coding all over again. This can be extremely time-consuming, but at least every time you start over you end up with a much better structured research report.  

Coding. In most qualitative research, the original text is a set of field notes, data obtained through literature study, interviews, and focus groups. One of the first steps that you will need to take before studying and analysing data is to code the information. You can use cards for this, but dedicated computer software can save you time, effort and costs. Codes are typically short pieces of text referencing other pieces of text, graphical, audio, or video data. From a methodological standpoint, codes serve a variety of purposes. They capture meaning in the data. They also serve as tools for finding specific occurrences in the data that cannot be found by simple text-based search techniques. Codes also help you organise and structure the data that you collected.

Their main purpose is to classify many textual or other data units in such a manner that the data that belongs together can be grouped for easy analysis and structuring. One can, perhaps, think of coding as “indexing” your data. You can also see it as a way to mark keywords so that you can find, retrieve and group them more easily at a later stage. A code should be short and to the point.

Codes can also be used to classify data at different levels of abstraction and to group sets of related information units together for the purpose of comparison. You would often use this to consider and compare related arguments and to draw conclusions that can motivate new knowledge. Dedicated computer software does not create new knowledge; it only helps you as the researcher to structure existing knowledge and experiences in such a manner that it will be easier for you to think creatively, that is, to create new knowledge.

Formal coding will be necessary if you make use of dedicated research software. Even if you do not use research software, you will probably need a method of coding to arrange your data according to the structure of your thesis or dissertation. Your original data will probably include additional information, such as the time, date and place where it was collected.

Another purpose of coding data is to move to a higher conceptual level. The codes will inevitably represent the meanings that you infer from the original data, thereby moving you closer towards the solution of your problem statement, or the confirmation or rejection of your null hypothesis. By coding data, you will, of course, rearrange the data that you collected under different headings representing steps in the research process.
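To make the “indexing” idea above concrete, here is a minimal sketch in Python. The segments, codes and function are hypothetical; dedicated qualitative-analysis software offers far richer functionality than this.

```python
from collections import defaultdict

# Hypothetical segments of transcribed interview data.
segments = [
    ("interview_01", "I lost my job when the factory closed."),
    ("interview_01", "We could no longer afford school fees."),
    ("interview_02", "The clinic is too far away to reach on foot."),
]

# Coding as "indexing": map each code to the segments it marks.
index = defaultdict(list)

def code_segment(code, segment):
    """Attach a short code to a data segment."""
    index[code].append(segment)

code_segment("unemployment", segments[0])
code_segment("poverty", segments[1])
code_segment("access to services", segments[2])

# Retrieval: group everything marked with one code for comparison.
for document, text in index["unemployment"]:
    print(document, "->", text)
```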

Five coding procedures are popularly used: open coding, in vivo coding, coding by list, quick coding and free coding.

With most qualitative research software, you can create codes first and then link them to sections in your data. Creating new codes is called open coding. The nature of the initial codes, which can be referred to as Level 1 codes or open codes, can vary and might change as you progress with your research. You should give a name to each new code that you open, and you can usually create one or more codes in a single step. These codes can stick closely to the original data, perhaps even reusing the exact words in the original data. Such codes can be deduced from research questions. In vivo coding is mostly used for this purpose.

In vivo coding means creating a code for selected text as and when you come across text, or just a word in the text, that can and should serve as a code. This would normally be a word or short piece of text that would probably appear in other pieces of data that should be linked and grouped with the data in which you identified the code.

If you know where you are going with your study, you will probably create codes first (up front), then link them to sections of data. This would be coding by list. Coding by list allows you to select existing codes from a code list that you prepared in advance. You would typically select one or more codes associated with the current data selection.

You can also create codes as you work through your data, which would then be quick coding. In the case of quick coding, you will continue with the selected code that you are working with. This is an efficient method for the consecutive coding of segments using the most recently used code.

You can also create codes that have not yet been used for coding or for creating networks. Such codes are called free codes; they are a form of quick coding, although they can be prepared in advance. Reasons why you would create free codes include:

  1. To prepare a stock of predefined codes in the framework of a given theory. This is especially useful in the context of teamwork when creating a base project.
  2. To code in a “top-down” (or deductive) way with all necessary concepts already at hand. This complements the “bottom-up” (or inductive) open coding stage, in which concepts emerge from the data.
  3. To create codes that come to mind during normal coding work and that cannot be applied to the current segment but will be useful later.

It will be easier to code data if you already have a good idea of what you are trying to achieve with your research. Sometimes the data will actually “steer” you towards codes that you did not even think of in the beginning. This is typical of a grounded theory approach, although you should always keep an open mind about your research, regardless of which approach you follow. Coding also helps to develop a schematic diagram of the structure of your thesis or dissertation. This can be based on your initial study proposal. A mind map can, for example, be used to structure your research process and to identify initial codes to start with.
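As a rough illustration of how the five procedures differ, the sketch below models them as simple operations on a code list. The function names and data are my own assumptions, not the terminology of any specific package.

```python
# A hedged sketch of the five coding procedures as operations on a code list.
codes = []          # the project's code list
last_used = None    # supports quick coding

def open_code(name):
    """Open coding: create a brand-new code."""
    codes.append(name)
    return name

def in_vivo_code(segment, word):
    """In vivo coding: reuse a word from the data itself as the code."""
    assert word in segment, "an in vivo code must occur in the text"
    return open_code(word)

def code_by_list(name):
    """Coding by list: select a code prepared in advance."""
    assert name in codes, "the code must already be on the list"
    return name

def quick_code():
    """Quick coding: apply the most recently used code again."""
    return last_used

def free_code(name):
    """Free coding: create a code for later use, not yet applied to data."""
    codes.append(name)

last_used = open_code("stigma")
in_vivo_code("They call us outsiders.", "outsiders")
free_code("coping strategies")   # e.g. prepared from theory
print(codes)                     # current code list
print(quick_code())              # reapplies "stigma"
```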

A code may contain more than a single word but should be concise. There should be a comment area on your screen that you can use to write a definition for each code, if you need one. As you progress in doing the first level coding, you may start to understand how your data might relate to broader conceptual issues. Some of your field experiences may in fact be sufficiently similar so that you might be able to group different coded data together on a higher conceptual level. Your coding has then proceeded to a higher set of codes, referred to as Level 2 or category codes.

After a code has been created, it appears as a new entry in several locations (drop-down list, code manager). In this respect the following are important to remember:

  1. Groundedness: the number of quotations associated with the code. Large numbers indicate that strong evidence has already been found for the code.
  2. Density: the number of other codes connected to the code. Large numbers can be interpreted as a high degree of theoretical density.
  3. Comment: the tilde character “~” can, for example, be used to flag commented codes. It is used for all commented objects, not for codes only.
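A minimal sketch of how groundedness and density could be computed from simple link tables follows; the data and structure are hypothetical.

```python
# Hypothetical link tables for two codes.
quotation_links = {            # code -> quotations coded with it
    "stigma": ["q1", "q4", "q7"],
    "coping": ["q2"],
}
code_links = {                 # code -> other codes connected to it
    "stigma": ["coping", "isolation"],
    "coping": ["stigma"],
}

for code in quotation_links:
    groundedness = len(quotation_links[code])   # evidence found so far
    density = len(code_links.get(code, []))     # theoretical density
    print(f"{code}: groundedness={groundedness}, density={density}")
```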

It is not only text that can be coded. You can also code graphic documents, audio and video material. There are many other ways in which codes can be utilised; for example, they can be sorted, modified, renamed, deleted, merged and, of course, reported.

Axial coding. Axial coding is the process of putting data back together after it has been restructured by means of open coding. Open coding allows you to select data that belong together (under a certain code or sub-code) from a variety of sources containing the original or primary data. Categories of data are thus systematically developed and linked with subcategories. You can then develop a new narrative through a process of reconstruction. The new narrative might apply to a different context and should be aligned with the purpose of your research.

The selected data can typically relate to conditions, strategies or consequences. Data relating to conditions or strategies should address the conditions that lead to the achievement of the purpose of the study. The purpose of the study will always be to solve a problem statement or question, or to prove or disprove a null hypothesis. Consequential data include all outcomes of action or interaction.

Selective coding. Selective coding refers to the process of selecting a core category, systematically relating it to other categories, validating those relationships, and filling in categories that need further refinement and development. Categories are thus integrated and refined. The core category would be the central phenomenon to which all the other categories are linked. To use an example from fiction: in a novel you would identify the plot first, then the storyline, which you would analyse to identify the elements of the storyline that relate to the plot. From this you should be able to deduce lessons learned or a moral for the story.

Summary

Data is mostly organised by making use of dedicated computer programs.

Most such computer programs require some form of coding.

Data can be sequenced by following an elaboration model.

Contingency tables are mostly used to portray the logic of scientific analysis.

Data is often analysed in a linear, chronological order.

Codes are typically short pieces of text referencing other pieces of text, graphical, audio or video data.

Codes:

  1. Capture meaning.
  2. Serve as tools for finding specific occurrences in the data.
  3. Help you to organise and structure the data.
  4. Classify textual or other data units into related groups and at different levels of abstraction.

Dedicated computer software does not create new knowledge.

Five coding procedures are popularly used.

They are open coding, in vivo coding, coding by list, quick coding and free coding.

Open coding means creating new codes.

In vivo coding means creating a code for selected text as and when you come across text, or just a word in the text, that can and should serve as a code.

Coding by list is used when you know where you are going with your study so that you can create the codes even before collecting data.

Quick coding means creating codes as you work through your data.

Free codes are codes that have not been used yet. They are a form of quick coding, although they can also be prepared in advance.

To the five coding procedures should be added axial coding and selective coding.

Axial coding is the process of putting data back together after it has been restructured by means of open coding.

Selective coding refers to the process of selecting a core category, systematically relating it to other categories, validating those relationships, and filling in categories that need further refinement and development.

You should always keep an open mind about your research and the codes that you create.

Close

If what I discussed here sounds confusing and alien, then it is probably because of what we discussed under schema analysis in my previous video.

It is unlikely that the level of language used here is beyond you.

If that were the case, you would not have watched this video.

No doubt you will understand everything if you watch this video again after having tried out one or two of the computer programs that deal especially with qualitative research.

Enjoy your studies.

Thank you.

Continue Reading

ARTICLE 94: Research Methods for Ph. D. and Master’s Degree Studies: Data Analysis Part 7 of 7 Parts

Written by Dr. Hannes Nel

What, do you think, is the biggest challenge for somebody who embarks on doctoral or master’s degree studies?

Well, the answer to this question will probably be different for different people, depending on their circumstances, perceptions, value systems and culture.

If we were to combine all the possible challenges, we would probably arrive at “to understand”.

In my opinion that is the biggest challenge facing any post-graduate student.

Not only do you need to understand endless concepts, phenomena, theories and principles, but you must also explain them in your thesis or dissertation.

And at doctoral level you will be required to define and explain new concepts, phenomena, theories and principles.

Data analysis is necessary for such elucidation.

I discuss the following data analysis methods in this article:

  1. Schema analysis.
  2. Situational analysis.
  3. Textual analysis.
  4. Thematic analysis.

Schema analysis

Schema analysis requires that you simplify cognitive processes so that complex concepts and narrative information can be understood more readily. In this manner a narrative that might otherwise be difficult to follow, because of the level of language used, cultural differences or any other reason, is made accessible to those who find the language challenging or the cultural context alien.

Schema analysis might require additional explanation, interpretation and reconstruction of the message. An individual who grew up in the city might not know how to milk a cow and a farmer might not know how to obtain food from a street vending machine. 

Today schema analysis is also used in computer programming, where a schema is the organisation or structure for a database. A schema is developed by modelling data.  The purpose remains the same as when you would have done schema analysis manually – it is a process of rendering data more user-friendly.

Situational analysis

As opposed to comparative analysis, situational analysis focuses more on non-human elements. It implies the analysis of the broad context or environment in which an event takes place. It can include an analysis of the state and condition of people and the ecosystem, including the identification of trends; the identification of major issues related to people and ecosystems that require attention; and an analysis of key stakeholders.

Textual analysis

Textual analysis, also called ‘content analysis’, is a data collection technique as well as a data analysis technique. It helps us to understand information on symbolic phenomena. It is used to investigate symbolic content such as words that appear in, for example, newspaper articles, comments on a blog, political speeches, etc. It is a qualitative technique in which the researcher attempts to describe the denotative meaning of content in an objective way.  

There are two levels of meaning, namely denotative and connotative meaning. The denotative meaning of a word refers to the literal meaning that you will find in a dictionary. This meaning is free from any form of interpretation. The connotative meaning of a word refers to the connotation that we ascribe to a particular word, based on the feeling or idea that the word invokes in us, which is often based on our prior experiences.

For example, the denotative meaning of the word ‘host’ is ‘one who lodges or entertains a stranger or guest at his or her house’. However, a woman who was abused in her youth by a host in whose guest house she stayed might think of a host as a dangerous and sly human being who takes advantage of vulnerable people. The connotative meaning of ‘host’ is, therefore, largely the opposite of what the word is supposed to mean. In textual analysis we only work with the denotative meaning of words, in order to make valid and reliable assumptions about the data within context.

You can only work with what was reported when doing qualitative research and you should not make any assumptions about the originator’s intended meaning. The context in which the information was used, however, also needs to be taken into consideration.

Textual analysis can be subjective because its interpretation is done by fallible people. It can include the analysis of freshly collected data as well as transcribed data. You should transcribe all the raw data that you collected from the written and verbal responses of participants during conversations, interviews, focus groups, meetings, etc. Electronically recorded interviews will need to be transcribed word for word to facilitate textual analysis.
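As a minimal illustration of working with the denotative (manifest) content of transcribed text, the sketch below counts how often selected words occur in hypothetical responses. Real content analysis would add a proper coding frame, but the mechanics are similar.

```python
from collections import Counter
import re

# Hypothetical transcribed responses.
transcripts = [
    "The host was friendly and the house was clean.",
    "Our host made us feel welcome in the house.",
]

# Count manifest word occurrences: the denotative level only.
words = Counter()
for text in transcripts:
    words.update(re.findall(r"[a-z']+", text.lower()))

for term in ("host", "house", "welcome"):
    print(term, words[term])   # host 2, house 2, welcome 1
```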

Thematic analysis

Also known as concept analysis or conceptual analysis, thematic analysis is actually a coding regime according to which data is reduced by identifying certain themes. Thematic analysis uses deductive coding by grouping concepts under themes from a prepared list.

In thematic analysis you first need to familiarise yourself with the data before you can even select themes. You should list the themes that you would like to cover in your research when you do your literature review. After having listed themes, the next step would be to generate codes. Codes serve as an important foundation for the structuring and arrangement of data by means of qualitative computer software. Even though one might not call it coding, capturing information on cards is also a form of coding, even though rather simple and limited in usability.

You can also search for themes now if you did not already do so as a first step. This is done by collating the codes that you identified into potential themes. Themes are actually “headings” under which related or linked codes are grouped, or clustered. Most qualitative research computer software allows you to review and edit your codes and themes when necessary, which will inevitably happen as you progress with your research.
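A bare-bones illustration of collating codes into candidate themes follows; the themes, codes and helper function are hypothetical.

```python
# Hedged sketch: clustering codes under theme "headings".
themes = {
    "economic hardship": ["unemployment", "poverty", "debt"],
    "access to services": ["clinic distance", "school fees"],
}

def theme_of(code):
    """Return the theme under which a code is clustered, if any."""
    for theme, grouped_codes in themes.items():
        if code in grouped_codes:
            return theme
    return None   # an unplaced code signals that themes need revising

print(theme_of("debt"))     # -> economic hardship
print(theme_of("stigma"))   # -> None
```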

Summary

Schema analysis:

  1. Requires that you simplify cognitive processes.
  2. Might require additional explanation, interpretation and reconstruction of selected data.
  3. Is also used in computer programming.

Situational analysis:

  1. Focuses on non-human elements.
  2. Analyses the broad context or environment for the research.
  3. Can include an analysis of the state and condition of people and the ecosystem.

Textual analysis

  1. Combines data collection and analysis.
  2. Helps to understand information on symbolic phenomena.
  3. Attempts to objectively describe the denotative meaning of content.
  4. Takes the context in which information was used into consideration.
  5. Can be subjective.
  6. Can include the analysis of freshly collected as well as transcribed data.

Thematic analysis

  1. Is a coding regime.
  2. Reduces data in terms of certain themes.
  3. Requires the identification of themes before coding can be done.

Close

That concludes my articles on data analysis and all the other concepts and theories behind doctoral and master’s degree studies.

In the remaining 14 articles I will focus more on the structure and layout of a thesis or dissertation.

Enjoy your studies.

Thank you.

Continue Reading

ARTICLE 93: Research Methods for Ph. D. and Master’s Degree Studies: Data Analysis: Part 6 of 7 Parts

Written by Dr. Hannes Nel

In academic research we need to think inductively and deductively.

Inductive thinking is used to develop a new theory.

Therefore, it is what you would mostly use when writing a dissertation for a doctoral degree.

And you should use inductive thematic analysis to analyse the data that you collect.

Deductive thinking is used to test existing theory.

Therefore, it is what you would mostly use when writing a thesis for a master’s degree.

And you should use retrospective analysis to analyse the data that you collect.

Narrative analysis uses both inductive and deductive thinking more or less equally.

That is why both a dissertation and a thesis can be written in a narrative format.

I will discuss the nature of inductive thematic analysis, narrative analysis and retrospective analysis in this article.

Inductive thematic analysis (ITA)

Inductive thematic analysis draws on inductive analytic methods. It involves reading through textual data and identifying and coding emergent themes within the data.

ITA requires the generation of free-flow data. The most common data collection techniques associated with ITA are in-depth interviews and focus groups. You can also analyse notes from participant observation activities with ITA, but interview and focus group data are better. ITA is often used in qualitative inquiry, and non-numerical computer software, specifically designed for qualitative research, is often used to code and group data.

Paradigmatic approaches that fit well with ITA include post-structuralism, rationalism, symbolic interactionism, and transformative research.

Narrative analysis

The word “narrative” is generally associated with terms such as “tale” or “story”. Such stories are mostly told in the first person, although somebody might also tell a story about a different character, that is, in the second or third person. The first person will apply if an interview is held. Every person has his or her own story, and you can design your research project to collect and analyse the stories of participants, for example when you study the lived experiences of a member of a gang on the Cape Flats.

There are different kinds of narrative research studies ranging from personal experiences to oral historical narratives. Therefore, narrative analysis refers to a variety of procedures for interpreting the narratives obtained through interviews, questionnaires by email or post, perhaps even focus groups. Narrative analysis includes formal and structural means of analysis. One can, for example, relate the information obtained from a gang member in terms of circumstances and reasons why he or she became a gang member, growth into gang activities, the consequences of criminal activities for his or her personal life, career, etc. One can also do a functional analysis looking at gang activities and customs (crime, gang fights, recruiting new members, punishment for transgression of gang rules, etc.)

In the analysis of narrative, you will track sequences, chronology, stories or processes in the data, keeping in mind that most narratives have a backwards and forwards nature that needs to be unravelled in the process of analysing the data.

Like many other data collection approaches, narrative analysis, also sometimes called ‘narrative inquiry’, is based on the study and textual representation of discourse, or the analysis of words. The type of discourse or text used in narrative analysis is, as the name indicates, narratives.

The sequence of events can be generated and recorded during the data collection process, such as through in-depth interviews or focus groups; they can be incidentally captured during participant observation; or, they can be embedded in written forms, including diaries, letters, the internet, or literary works. Narratives are analysed in numerous ways and narrative analysis can be used in research within a substantial variety of social sciences and academic fields, such as sociology, management, labour relations, literature, psychology, etc.

Narrative analysis can be used for a wide range of purposes. Some of the more common usages include formative research for a subsequent study, comparative analysis between groups, understanding social or historical phenomena, or diagnosing psychological or medical conditions. The underlying principle of a narrative inquiry is that narratives are the source of data used, and their analysis opens a gateway to better understanding of a given research topic.

In most narratives meaning is conveyed at different levels, for example the informational content level, which is suitable for content analysis, and the textual level, which is suitable for hermeneutic or discourse analysis.

Narrative analysis has its own methodology. In narrative analysis you will analyse data in search of narrative strings (commonalities running through and across texts), narrative threads (major emerging themes) and temporal/spatial themes (past, present and future contexts).

Retrospective analysis

Retrospective analysis is sometimes also called ‘retrospective studies’ or ‘trend analysis’ or ‘trend studies’. Retrospective analysis usually looks back in time to determine what kind of changes have taken place. For example, if you were to trace the development of computers over the past three decades, you would see some remarkable changes and improvements.

Retrospective analysis focuses on changes in the environment rather than in people, although changes in fashions, cultures, habits, values, jobs, etc. are also often analysed. Each stage in a chronological development is represented by a sample, and each sample is compared with the others against certain criteria.

Retrospective analysis examines recorded data to establish patterns of change that have already occurred, in the hope of predicting what will probably happen in the future. Predicting the future, however, is not simple and often not accurate. The reason for this is that, as the environment changes, so do the variables that determine or govern the change. It therefore stands to reason that the further ahead you try to predict the future, the more inaccurate your predictions will probably be.

Retrospective analysis does not include the same respondents over time, so the possibility exists for variation in data due to the different respondents rather than the change in trends.
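As a toy illustration of trend analysis, the sketch below fits a linear trend to fabricated yearly observations and extrapolates it, with the caveat noted above that the further ahead you project, the less reliable the estimate.

```python
import numpy as np

# Hypothetical yearly observations of some indicator (fabricated data).
years = np.array([2015, 2016, 2017, 2018, 2019, 2020])
values = np.array([10.0, 12.1, 13.9, 16.2, 18.0, 20.1])

# Fit a linear trend to the recorded past.
slope, intercept = np.polyfit(years, values, deg=1)

# Extrapolate; the further ahead, the less reliable the estimate.
for future_year in (2021, 2025):
    print(future_year, round(slope * future_year + intercept, 1))
```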

Summary

Inductive thematic analysis, or ITA:

  1. Draws on inductive analytical methods.
  2. Involves reading textual data.
  3. Identifies and codes emergent themes within the data.
  4. Requires the generation of free-flow data.
  5. Favours in-depth interviews and focus groups.
  6. Can also use participant observation.
  7. Fits well with qualitative research and critical or interpretive paradigms.

Narrative analysis:

  1. Analyses stories related by people.
  2. Ranges from personal experiences to historical narratives.
  3. Can use a wide range of data collection methods.
  4. Includes formal, structural and functional analysis.
  5. Tracks sequences, chronology, stories or processes in data.
  6. Is based on the textual representation of discourse, or the analysis of words.
  7. Is used by a substantial variety of social sciences.
  8. Can be used for a wide range of purposes.
  9. Conveys meaning on different levels.
  10. Has its own methodology.

Retrospective analysis:

  1. Looks back in time to identify change.
  2. Focuses on change in the environment.
  3. Represents and compares change in samples.
  4. Sometimes tries to predict the future.
  5. Does not include the same respondents over time.

Close 

It is a good idea to mention and explain in your thesis or dissertation how you analysed the data that you collected.

Ph. D. students will already do so in their research proposal.

That is why you need to know which data analysis methods are available and what they mean.

It will also help to ensure that you use the data that you collect efficiently and effectively to achieve the purpose of your research.

Enjoy your studies.

Thank you.

Continue Reading

ARTICLE 92: Research Methods for Ph. D. and Master’s Degree Studies: Data Analysis: Part 5 of 7 Parts: Ethnographic analysis

Written by Dr. Hannes Nel

I wonder if ethnographic research was ever as vitally important as now.

The COVID-19 pandemic has dramatically changed the way people live, interact, socialise and survive.

No doubt, research on how to combat the virus is still the priority.

However, while numerous researchers are frantically working on finding an effective and safe vaccine, life goes on.

And it will take a long time before everybody is vaccinated anyway.

And we need to determine the impact of unemployment, financial difficulties, famine, crime and the loss of loved ones on our psychological health.

And we need to find ways in which to cope with the new reality.

I discuss ethnographic analysis in this article.

Ethnographic analysis typically addresses the issue of ‘what is going on’ between the participants in some segment (or segments) of the data, in great analytical depth and detail. Ethnographic studies aim to provide contextual, interpretive accounts of their participants’ social worlds.

Ethnographic analysis is rarely systematic or comprehensive: rather, it is selective and limited in scope. Its main advantage is to permit a detailed, partially interpretive, account of mundane features of the social world. This account may be limited to processes within the focus group itself, or (more typically) it may take the focus group discussion as offering a ‘window’ on participants’ lives.

Ethnographic analysis aims to ground interpretation in the particularities of the situation under study, and in participants’ (rather than analysts’) perspectives. Data are generally presented as accounts of social phenomena or social practices, substantiated by illustrative quotations from the focus group discussion. Key issues in ethnographic analysis are:

  • how to select the material to present;
  • how to give due weight to the specific context within which the material was generated, while retaining some sense of the group discussion as a whole; and
  • how best to prioritise participants’ orientation in presenting an interpretive account.

Researchers using ethnographic research, such as observing people in their natural settings, often ask what role the researcher should adopt when conducting research: an overt and announced role, or a covert and secret role? The most common roles that you as the researcher may play are complete participant, participant as observer, observer as participant and complete observer.

The complete participant seeks to engage fully in the activities of the group or organisation being researched. This role therefore requires you to enter the setting covertly, so that the participants will not be aware of your presence, or at least not aware that you are doing research on them. By doing research covertly you are supposed to be able to gather more accurate information than if participants were aware of what you are doing – they should act more naturally than they would otherwise. The benefit of the covert approach is that you should gain a better understanding of the interactions and meanings that are held important by those regularly involved in the group setting. Covert research can, however, expose you to the risk that your efforts might prove unsuccessful, especially if the participants find out that you were doing research on them without them being informed and without their agreement. Such research can also harm the participants in many ways, for example by embarrassing them, damaging their career prospects, damaging their personal relationships, etc.

You will act ethically and more safely if you, as the researcher, observe a group or individual and participate in their activities with their knowledge. In this case you formally make your presence and intentions known to the group being studied and you ask for their permission. This may involve a general announcement that you will be conducting research, or a specific introduction as the researcher when meeting the various people who will form part of the target group for the research.

This approach requires you to develop sufficient rapport with the participants to gain their support and co-operation. You will need to explain to them why the research is important and how they will benefit from it. The possibility exists that you may become emotionally involved in the activities and challenges of the target group, which might have a negative effect on your ability to interpret information objectively.

The researcher as observer only follows, as we already discussed, an etic approach. Here you distance yourself from participation but still do your research openly and with the agreement of the target group. Such transparent research often involves visiting just one site, or a setting that is offered only once. It will probably be necessary to do relatively formal observation. The risk exists that you may fail to adequately appreciate certain informal norms, roles or relationships, and that the group might not trust you and your intentions, which is why the period of observation should not be too long.

The complete and unannounced observer tends to be a covert role. In this case, you typically remain in the setting for a short period of time but are a passive observer to the flow of activities and interactions.

Summary

Ethnographic analysis:

  1. Analyses events and phenomena in a social context.
  2. Is selective and limited in scope.
  3. Delivers a detailed interpretation of commonplace features of the social world.
  4. Focuses on specific aspects of the target group’s lives.

Key issues of ethnographic analysis are:

  1. How to select the data to analyse.
  2. The context on which the collection and analysis focus.
  3. Interpretation and description of the findings, focusing on the target group’s orientation.

Observation is often used for the collection of data.

An emic or etic approach can be followed.

An etic approach is often also executed covertly.

Covert collection of data can promote accuracy because the target group for the research will probably behave naturally if they do not know that they are being observed.

A covert approach can be inadvisable because of ethical considerations.

An overt approach requires gaining the trust of the target group for the research.

Close

You probably noticed that it is near impossible to discuss data collection and data analysis separately.

Besides, ethnography is a research method, and ethnographic data collection and analysis are part of the method.

Natural scientists will probably only use it to trace the ontology of scientific concepts or phenomena.

And then the data will be historical in nature.

Enjoy your studies.

Thank you.

Continue Reading

ARTICLE 91: Research Methods for Ph. D. and Master’s Degree Studies: Data Analysis: Part 4 of 7 Parts: Elementary Analysis

Written by Dr. Hannes Nel

Most qualitative social research requires the analysis of several variables simultaneously; this is called “multivariate analysis”. The analysis of the simultaneous association of age, education and gender would be an example. Specific techniques for conducting a multivariate analysis include factor analysis, multiple correlation, regression analysis and path analysis. All these techniques are based on the preparation and interpretation of comparative tables and graphs, so you should practise doing this if you do not already know how.

These are largely quantitative techniques. Fortunately, the statistical calculations are done for you by the computer, so just be aware of the definitions.

Factor analysis. Factor analysis is a statistical procedure used to uncover relationships among many variables. It allows numerous inter-correlated variables to be condensed into fewer dimensions, called factors. It is possible, for example, that variations in three or four observed variables mainly reflect the variations in a single unobserved variable, or in a reduced number of unobserved variables. Clearly this type of analysis is mostly numerical in nature. Factors are analysed inductively to determine trends, relationships, correlations, causes of phenomena, etc. Factor analysis searches for variations in response to variables that are difficult to observe and that are suspected to have an influence on events or phenomena.
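The sketch below illustrates the idea with simulated data, assuming Python with scikit-learn: four observed variables are generated from a single latent variable, and factor analysis recovers loadings on that one factor.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulate 200 respondents whose four observed variables mainly
# reflect a single unobserved (latent) variable, plus noise.
latent = rng.normal(size=(200, 1))
loadings = np.array([[0.9, 0.8, 0.7, 0.6]])
observed = latent @ loadings + 0.3 * rng.normal(size=(200, 4))

# Condense the four inter-correlated variables into one factor.
fa = FactorAnalysis(n_components=1).fit(observed)
print(fa.components_.round(2))   # estimated loadings on the single factor
```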

Multiple correlation. Multiple correlation is a statistical technique that predicts values of one variable based on two or more other variables. For example, what will happen to the incidence of HIV/AIDS (the variable that we are doing research on) in a particular area if unemployment increases (variable 1), famine breaks out (variable 2) and the incidence of TB increases (variable 3)?

Multiple correlation is a linear relationship among more than two variables. It is measured by the coefficient of multiple determination, which is a measure of the fit of a linear regression. This coefficient falls somewhere between zero and one (assuming a constant term has been included in the regression); a higher value indicates a stronger relationship between the variables, with a value of one indicating a perfect relationship and a value of zero indicating no relationship at all between the independent variables collectively and the dependent variable.
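A minimal sketch of computing the coefficient of multiple determination (R²) follows, assuming Python with scikit-learn and simulated data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Three simulated independent variables...
X = rng.normal(size=(100, 3))
# ...and a dependent variable partly determined by them, plus noise.
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(size=100)

model = LinearRegression().fit(X, y)  # a constant term is included by default
r_squared = model.score(X, y)         # coefficient of multiple determination
print(round(r_squared, 3))            # 0 = no relationship, 1 = perfect fit
```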

Path analysis. Path analysis can be a statistical method of finding cause/effect relationships, a method for finding the trail that leads users to websites, or an operations research technique. We also have “critical path analysis”, which is mostly used in project management and is a method by means of which activities in a project are planned to be executed in a logical sequence of events to ensure that the project is completed in an efficient and effective manner. We are concerned here with path analysis as a statistical method of finding cause/effect relationships.

Path analysis is a method of decomposing correlations into different components in order to interpret effects (e.g. how does parental education influence children’s income when they are adults?). Path analysis is closely related to multiple regression; you might say that regression is a special case of path analysis. It is a “causal model” because it allows us to test theoretical propositions about cause and effect without manipulating variables.
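The following minimal sketch, with fabricated data for the parental-education example, shows the decomposition into a direct and an indirect effect; all numbers and variable names are invented.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 500

    # Fabricated variables for a simple path model:
    # parental education -> child's education -> child's adult income.
    parent_edu = rng.normal(size=n)
    child_edu = 0.6 * parent_edu + rng.normal(scale=0.8, size=n)
    income = 0.2 * parent_edu + 0.5 * child_edu + rng.normal(scale=0.7, size=n)

    standardise = lambda v: (v - v.mean()) / v.std()
    parent_edu, child_edu, income = map(standardise, (parent_edu, child_edu, income))

    # Path coefficient a: parental education -> child's education.
    a = np.linalg.lstsq(parent_edu[:, None], child_edu, rcond=None)[0][0]

    # Direct effect of parental education, and effect b of child's education, on income.
    direct, b = np.linalg.lstsq(np.column_stack([parent_edu, child_edu]),
                                income, rcond=None)[0]

    print("direct effect:", round(direct, 2))
    print("indirect effect (a * b):", round(a * b, 2))

In this standardised form the total correlation between parental education and income is approximately the sum of the direct and the indirect effect, which is what “decomposing correlations” means.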

Regression analysis. Regression analysis can be used to determine which factors influence events, phenomena, or relationships.

Regression analysis includes a variety of techniques for modelling and analysing several variables, when the focus is on the relationship between a dependent variable and one or more independent variables. If, for example, you wish to determine the effect of tax, legislation and education on levels of employment, levels of employment will be the dependent variable while tax, legislation and education will be the independent variables. More specifically, regression analysis helps one understand how to maintain control over a dependent variable. In the level of employment example, you might wish to know what should be done in terms of tax, legislation and education to improve employment or at least to maintain a healthy level of employment. In this example it is of interest to characterise the variation of the dependent variable around the regression function, which can be described by a probability distribution (how much the level of employment would change and in what direction if all, some or one of the independent variables change by a particular value).

Regression analysis typically estimates the conditional expectation of the dependent variable given the independent variables – that is, the average value of the dependent variable when the independent variables are held fixed. Seen from this perspective, the example of employment levels would mean investigating what would happen if tax, legislation and education remain unchanged.

Regression analysis is widely used for prediction and forecasting, although this should be done with circumspection. Regression analysis is also used to understand which of the independent variables are related to the dependent variable and to explore the forms of these relationships. Regression analysis presupposes causal relationships between the independent and dependent variables, although investigation can also show that such relationships do not exist. An example of regression analysis with several independent variables (also called “multiple regression”) is to determine which factors among colour, paper type, number of advertisements and content (independent variables) have the biggest effect on the number of magazines sold (dependent variable).
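A hedged sketch of the magazine example, with invented data, shows how the estimated coefficients point to the factors with the biggest effect on sales; none of the numbers below come from real research.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(3)
    n = 200

    # Fabricated independent variables for the magazine example.
    colour = rng.integers(0, 2, n)        # colour printing: yes/no
    paper = rng.integers(0, 2, n)         # glossy paper: yes/no
    adverts = rng.integers(5, 40, n)      # number of advertisements
    content = rng.normal(50, 10, n)       # content quality score

    # Fabricated dependent variable: number of magazines sold.
    sales = (2000 + 400 * colour + 150 * paper
             - 20 * adverts + 30 * content + rng.normal(0, 200, n))

    X = np.column_stack([colour, paper, adverts, content])
    model = LinearRegression().fit(X, sales)

    # Each coefficient estimates how sales change when that variable
    # changes by one unit and the others are held fixed.
    for name, coef in zip(["colour", "paper", "adverts", "content"], model.coef_):
        print(name, round(coef, 1))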

Summary

Multivariate analysis can be used for the analysis of several variables simultaneously.

Techniques that can be used for conducting multivariate analysis include factor analysis, multiple correlation, path analysis and regression analysis.

Factor analysis is used to uncover relationships among many variables.

Factors are analysed inductively to determine trends, relationships, correlations, causes of phenomena, etc.

Multiple correlation predicts values of one variable based on two or more other variables.

Multiple correlation is a linear relationship among more than two variables.

Path analysis seeks cause/effect relationships.

It can also be used to trace the paths that users follow to websites or, in the form of critical path analysis, to manage projects.

Regression analysis can be used to determine which factors influence events, phenomena or relationships.

It includes a variety of techniques for modelling and analysing several variables when the focus is on the relationship between a dependent variable and one or more independent variables.

Regression analysis helps us to understand how to maintain control over a dependent variable.

Close

Statistics are a wonderfully flexible way in which to analyse data.

Dedicated computer software can do the calculations for us and show us the numbers in tabular and graphic format.

All we need to do is analyse the numbers or graphs.

It is mostly quite easy to interpret visual material.

And you will impress your study leader, lecturer and other stakeholders in your research if you use such analysis techniques.

Most importantly, it will be so much easier and faster to come to conclusions and to derive valid and accurate findings from your conclusions.

Enjoy your studies.

Thank you.


ARTICLE 90: Research Methods for Ph. D. and Master’s Degree Studies: Data Analysis Part 3 of 7 Parts

Written by Dr. Hannes Nel

I discuss conversation and discourse analysis as data analysis methods in this article.

Conversation and discourse analysis

Both conversation and discourse analysis stem from the ethnomethodological tradition, which is the study of the ways in which people produce recognisable social orders and processes. Both approaches tend to examine text as an “object of analysis”. Discourse analysis is a rather comprehensive process of evaluating the structures of conversations, negotiations and other forms of discourse, as well as how people interact when communicating with one another. The sharing of meaning through discourse always takes place in a particular context, so the social construction of such discourse can also be analysed.

Conversation and discourse analysis both study “naturally” occurring language, as opposed to text resulting from more “artificial” contexts, such as formal interviews. The purpose is to identify social and cultural meanings and phenomena from the discourse studied, which is why the process is suitable for almost any culture-related research.

The name “discourse” indicates that language is both what is analysed and the medium through which the research is conducted. It can be a complex process and is often better suited to those more interested in theorising about life than to those who want to research actual life events.

Discourse analysis focuses on the meaning of the spoken and written word, and on the reasons why it is the way it is. Discourse refers to expressing oneself using words, and to the variety and flexibility with which language is used in ordinary interaction.

When doing research, we often look for answers in places or sources that we can easily reach, when the real answers might lie somewhere else. Discourse analysis is one method which allows us to move beyond the obvious to less obvious, although often much more relevant, sources of data.

Discourse analysis analyses what people say, rather than merely recording facts. Discourses are ever-present ways of knowing, valuing and experiencing the world. Different people have different discourses. Gangs on the Cape Flats, for example, use words and sentences that the ordinary man on the street will find difficult to understand. Discourses are used in everyday texts for building power and knowledge, for regulation and normalisation, and for the development of new knowledge and power relations.

As a language-based analytical process, discourse analysis is concerned with studying and analysing written texts and spoken words to reveal any possible relationships between language and social interaction. Language is analysed as a possible source of power, dominance, inequality and bias. Processes that may be the subject of research include how language is initiated, maintained, reproduced and transformed within specific social, economic, political and historical contexts. A wide variety of relationships and contexts can be investigated and analysed, including the ways in which the dominant forces in society construct versions of reality that favour their interests; the ideological assumptions hidden in the words of our written text or oral speech can be uncovered in order to resist, overcome or even capitalise on various forms of power. Criminals in a correctional facility will, for example, be included in or excluded from gangs on account of certain ways of speech and codes that only they know.

Discourse analysis collects, transcribes and analyses ordinary talk and everyday explanations for social actions and interaction. It emphasizes the use of language as a way to construct social reality. Yin[1] defines discourse analysis as follows:

“Discourse analysis focuses on explicit theory formation and analysis of the relationships between the structures of text, talk, language use, verbal interaction or communication, on the one hand, and societal, political, or cultural micro- and macro-structures and cognitive social representations, on the other hand.”

Discourse analysis examines a discourse by looking at patterns of the language used in a communication exchange as well as the social and cultural contexts in which these communications occur. It can include counting terms, words, and themes. The relationship between a given communication exchange and its social context requires an appreciation and understanding of culturally specific ways of speaking and writing and ways of organising thoughts.
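Counting of this kind is easy to automate. A minimal Python sketch, using an invented fragment of a communication exchange; the interpretation in the comment is an assumption for illustration only.

    import re
    from collections import Counter

    # Fabricated fragment of a communication exchange.
    transcript = """
    We must protect our community. They do not understand us.
    Our rules keep us safe. They will never follow our rules.
    """

    words = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter(words)

    # Repetition of words such as "our" and "they" may point to how
    # speakers construct an in-group and an out-group.
    print(counts.most_common(5))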

Oral communication always fits into a context which lends meaning to it. It always has a double structure, namely the propositional content (ontology) and the performatory content (epistemological meaning). Oral communication can, for example, be used with good effect to understand human behaviour, thought processes and points of view.

The result of discourse analysis is a form of psychological natural history of the phenomena in which you are interested. To be of value for research purposes oral communication must be legitimate, true, justified, sincere and understandable. It should also be coherent in organisation and content and enable people to construct meaning in social context. Participants in oral communication should do so voluntarily and enjoy equal opportunity to speak.

Discourse analysis is a form of critical theory. You, as the researcher, need to ensure that the discourse and the participants in the discussion meet the requirements for such interaction. It will also be your duty to eliminate or at least reduce any forces or interventions that may disrupt the communication. Such discourse can also be taken further by having other participants in the research process elaborate and further analyse the results of initial communications. For this purpose, you need to be highly sensitive to the nuance of language.

Any qualitative research allows you to make use of coding and structuring of data by means of dedicated research software, such as ATLAS.ti or other CAQDAS (computer-assisted qualitative data analysis software) packages. This will enable you to discover patterns and broad areas of salient argumentation, intentions, functions, and consequences of the discourse. By seeking alternative explanations and examining the degree of variability in the discourse, it is possible to rule out rival interpretations and arrive at a fair and accurate comprehension of what took place and what it meant.

Discourse analysis can also be used to analyse and interpret written communication on condition that the written communication is a written version of communication relevant to the topic being researched. This requires a careful reading and interpretation of textual material.

Discourse analysis has been criticised for its lack of system, its emphasis on the linguistic construction of a social reality, and the tendency of the analysis to shift attention away from what is being analysed and towards the analysis itself. Discourse is in actual fact a text in itself, with the result that it can also be analysed for meaning and inferences, which might erode the original meaning of the oral communication and, with it, the accuracy, authenticity, validity and relevance of the analysis.

Conversation analysis is arguably the most immediate and most frequently used form of discourse analysis, in the sense that it includes any face-to-face social interaction. Social interaction inevitably includes contact with other people, and contact with other people mostly includes communication. People construct meaning through speech and text, and the object of analysis typically goes beyond individual sentences. Data on conversations can be collected through direct communication, which needs to be recorded by taking notes or by making a video or audio recording.

Conversation analysis is the study of talk in interaction and generally attempts to describe the orderliness, structure and sequential patterns of interaction, whether in institutional settings or in casual conversation. Conversation analysis is a way of analysing data and has its own methodological features. It studies the social organisation of two-way conversation through a detailed inspection of voice recordings and transcriptions made from such recordings, and relies much more on the patterns, structures and language used in speech and the written word than other forms of data analysis.

Conversation analysis assumes that it is fundamentally through interaction that participants build social context. The notion of talk as action is central to its framework. Within a focus group we can see how people tell stories, joke, agree, debate, argue, challenge or attempt to persuade. We can see how they present particular ‘versions’ of themselves and others for particular interactional purposes, for example to impress, flatter, tease, ridicule, complain, criticise or condone.

Participants build the context of their talk in and through the talk while talking. The talk itself, in its interactional context, provides the primary data for analysis. Further, it is possible to harness analytical resources intrinsic to the data: by focusing on participants’ own understanding of the interaction as displayed directly in their talk, through the conversational practices they use. In this way, a conversation analytic approach prioritises the participants’ (rather than the analysts’) analysis of the interaction.

Naturally occurring data, i.e. data produced independently of the researcher, encompass a range of institutional contexts (for example classrooms, courtrooms, doctors’ surgeries, etc.), in which talk has been shown both to follow the conventions of ‘everyday’ conversation and systematically to depart from them.

Conversation analysis tends to be more granular than classical discourse analysis, looking at elements such as grammatical structures and concentrating on smaller units of text, such as phrases and sentences. An example of conversation analysis is where a researcher “eavesdrops” on the way in which different convicted criminals talk to other inmates to find a pattern in their cognitive thinking processes.
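A few lines of code can illustrate this granularity. The sketch below, with an invented transcript, splits talk into speaker turns and reports simple structural measures; real conversation-analytic transcription captures far more (pauses, overlaps, intonation) than this illustration does.

    # Fabricated two-party transcript, one turn per line.
    transcript = [
        "A: did you see him yesterday",
        "B: yes, well, briefly",
        "A: briefly",                    # repeating a word can initiate repair
        "B: he left before lunch",
    ]

    turns = []
    for line in transcript:
        speaker, utterance = line.split(":", 1)
        turns.append((speaker.strip(), utterance.strip()))

    # Simple structural measures: who holds the floor and for how long.
    for speaker, utterance in turns:
        print(speaker, "-", len(utterance.split()), "words:", utterance)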

While conversation and discourse analysis are similar in several ways, there are some key differences. Discourse analysis is generally broader in what it studies, utilising pretty much any naturally occurring text, including written texts, lectures, documents, etc. An example of discourse analysis would be if a researcher were to go through transcripts or listen in on group discussions between convicted serial murderers to examine their patterns of reasoning.

The implications of discourse and conversation analysis for data collection and sampling are twofold. The first pertains to sample sizes and the amount of time and effort that goes into text analysis at such a fine level of detail, relative to thematic analysis. In a standard thematic analysis, the item of analysis may be a few sentences of text, and the analytic action would be to identify themes within that text segment. In contrast, linguistic-oriented approaches, such as conversation and discourse analysis, require intricate dissection of words, phrases, sentences and interaction among speakers. In some cases, tonal inflection is included in the analysis. Linguistic analysis, whether of transcripts of conversations, interviews or any other form of communication, typically yields an abundance of material that must be dissected in detail. This requires substantial time and effort, with the result that only a limited number of samples can be processed in a reasonable time.

The data source inevitably determines the type and volume of analysis that can be done. Both discourse analysis and conversation analysis are interested in naturally occurring language. In-depth interviews and focus groups can be used to collect data, although they are not ideal if it is important to analyse naturally occurring social communication. Analysis of such data often requires reading and rereading material to identify key themes and other relevant information that would lead to meanings relevant to the purpose of the research.

Existing documents, for example written statements made by convicted criminals, are excellent sources of data for discourse analysis as well as conversation analysis. In terms of field research, participant observation is ideal for capturing “naturally occurring” discourse. Minutes of meetings, written statements, transcripts of discussions, etc. can be used for this purpose. During participant observation, one can also record naturally occurring conversations between two or more people belonging to the target population for the study, for example two surviving victims of attacks by serial killers, two security guards who had experiences with attempted serial killings, etc. In many cases, however, listening in on conversations without consent can create legal problems.

Text can be any documentation, including personal reflections, books, official documents and many more. In action research this is enhanced with personal experiences, which can also be put on paper so that they often become historical data. In action research the research is given a more relevant cultural “flavour” by engaging participants from the community directly in the data collection and analysis. The emphasis is on open relationships with participants so that they have a direct say in how data is collected and interpreted. If participants decide that technical procedures such as sampling or skilled tasks such as interviewing should be part of the data collection and analysis process, they could draw on expert advice and training supplied by researchers.

Paradigmatic approaches that fit well with discourse and conversation analysis include constructivism, hermeneutics, interpretivism, critical theory, post-structuralism and ethnomethodology.

Summary

Discourse analysis:

  1. Evaluates the structures of conversations, negotiations and other forms of communication.
  2. Is dependent on context.
  3. Analyses and uses language.
  4. Focuses on the meaning of the spoken and written word.
  5. Allows the researcher to move from the obvious to the less obvious.
  6. Is concerned with studying and analysing written texts and spoken words to reveal the relationships between language and social interaction.
  7. Examines a discourse by looking at patterns of the language used.
  8. Delivers a form of psychological natural history of the phenomena being investigated.
  9. Is a form of critical theory.
  10. Is criticised for its lack of system, emphasis on the linguistic construction of social reality and the lack of focus on the research problem.

Conversation analysis:

  1. Is a form of discourse analysis.
  2. Includes face-to-face social interaction.
  3. Attempts to describe the orderliness, structure and sequential patterns of interaction.
  4. Has its own methodological features.
  5. Assumes that it is fundamentally through interaction that participants build social context.

Discourse and conversation analysis:

  1. Stem from the ethnomethodological tradition.
  2. Examine text as the object of analysis.
  3. Study naturally occurring language.
  4. Identify social and cultural meanings and phenomena.
  5. Require intricate dissection of words, phrases, sentences and interaction between people.

Close

The differences between discourse and conversation analysis are subtle.

Discourse analysis is broader than conversation analysis in the range of its analysis.

Conversation analysis, on the other hand, tends to go into finer detail than discourse analysis.

Enjoy your studies.

Thank you.


[1] Yin, 2016: 69.


ARTICLE 89: Research Methods for Ph. D. and Master’s Degree Studies: Data Analysis, Part 2 of 7

Written by Dr. Hannes Nel

Hello, I am Hannes Nel and I discuss comparative and content analysis in this article.

Although quite simple, comparative and content analysis are most valuable for your research towards a master’s degree or a Ph. D.

It does not matter what the topic of your research is – you will compare concepts, events or phenomena and you will study the content of existing data sources.

What you need to know is how to analyse and use such data.

Comparative analysis

Comparative analysis is a means of analysing the causal contribution of different conditions to an outcome of interest. It is especially suitable for analysing situations of causal complexity, that is, situations in which an outcome may result from several different combinations of causal conditions. The diversity, variety and extent of an analysis can be increased, and the significance potential of empirical data can be improved through comparative analysis. The human element plays an important role in comparative research because it is often human activities and manifestations that are compared.
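The idea that an outcome may result from several different combinations of causal conditions can be made explicit with a truth table, as is done in qualitative comparative analysis. A minimal sketch, with entirely fictitious cases and condition names:

    from collections import defaultdict

    # Fictitious cases: two conditions and one outcome, all coded 0/1.
    cases = [
        {"unemployment": 1, "weak_institutions": 1, "unrest": 1},
        {"unemployment": 1, "weak_institutions": 0, "unrest": 0},
        {"unemployment": 0, "weak_institutions": 1, "unrest": 0},
        {"unemployment": 1, "weak_institutions": 1, "unrest": 1},
    ]

    # Group cases by their combination of conditions (a simple truth table).
    table = defaultdict(list)
    for case in cases:
        key = (case["unemployment"], case["weak_institutions"])
        table[key].append(case["unrest"])

    # For each combination, the share of cases showing the outcome.
    for conditions, outcomes in table.items():
        print(conditions, "->", sum(outcomes) / len(outcomes))

In this toy example only the combination of both conditions is consistently followed by the outcome, which is exactly the kind of pattern comparative analysis looks for.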

Although theoretical abstractions from reality can be, and in some instances are, the only way in which to do valid comparison, the units of analysis can also be whole societies or systems within societies. Comparative research does not simply mean comparing different societies or the same society over time – it might involve searching systematically for similarities and differences between the cases under consideration.

Comparative researchers usually base their research on secondary sources, such as policy papers, historical documents or official statistics, but some degree of interviewing and observation could also be involved. A measure of verification is achieved by consulting more than one source on a particular issue.

Qualitative research approaches are most suitable for the conduct of comparative analysis, with the result that many paradigmatic approaches can be used. Examples include behaviourism, critical race theory, critical theory, ethnomethodology, feminism, hermeneutics and many more.

Content analysis

Content analysis is a systematic approach to qualitative data analysis, making it suitable to serve as the foundation of qualitative research. It is an objective and systematic way in which to identify and summarise message content. The term ‘content analysis’ refers to the analysis of such things as books, brochures, written or typed documents, transcripts, news reports and visual media, as well as the analysis of narratives such as diaries or journals. Although mostly associated with qualitative research approaches, statistical and other numerical data can also be analysed, making content analysis suitable for quantitative research as well. Sampling and coding are ubiquitous elements of content analysis.

The most obvious example of content analysis is the literature study that any researcher needs to do when preparing a research proposal as well as when conducting the actual research for a doctoral or master’s degree.

Especially (but not only) inexperienced students often think that volume is equal to quality, with the result that they include any content in their theses or dissertations without even asking themselves if it is relevant to the research that they are doing. The information that you include in your thesis or dissertation must be relevant, and it must add value to your thesis or dissertation.

In content analysis we analyse the characteristics of language as communication, with a focus on content. This means examining words or phrases within a wide range of texts, including books, book chapters, essays, interviews and speeches, as well as informal conversation and headlines. By examining the presence or repetition of certain words and phrases in these texts, you are able to make inferences about the philosophical assumptions of a writer, a written piece, the audience for which a piece is written, and even the culture and time in which the text is embedded. Due to this wide array of applications, content analysis is used in literature and rhetoric, marketing, psychology, cognitive science, etc.

The purpose of content analysis is to identify patterns, themes, biases and meanings. Classical content analysis will look at patterns in terms used, ideas expressed, associations among ideas, justifications, and explanations. It is a process of looking at data from different angles with a view to identifying key arguments, principles or facts in the text that will help us to understand and interpret the raw data. It is an inductive and iterative process where we look for similarities and differences in text that would corroborate or disprove theory or a hypothesis. A typical content analysis would be to evaluate the contents of a newly written academic book to see if it is on a suitable level and aligned with the learning outcomes of a curriculum.
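One simple way to operationalise the search for patterns and themes is to define theme keywords in advance and tally which themes occur, and co-occur, in each text. A minimal sketch with invented themes and text fragments; a real coding frame would be far richer.

    # Invented theme dictionary: theme -> indicative keywords.
    themes = {
        "employment": {"job", "jobs", "work", "unemployment"},
        "health": {"clinic", "tb", "illness", "hospital"},
    }

    # Invented text fragments (e.g. interview answers or document excerpts).
    texts = [
        "Many lost their jobs when the mine closed and the clinic saw more illness.",
        "Work is scarce, and unemployment keeps rising in the district.",
    ]

    for i, text in enumerate(texts, start=1):
        words = set(text.lower().replace(",", " ").replace(".", " ").split())
        present = [theme for theme, keys in themes.items() if words & keys]
        # Themes that appear together in one text hint at associations among ideas.
        print("text", i, "->", present)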

Content analysis can also be used to analyse ethnographic data. Ethnographic data can be used to prove or disprove a hypothesis. However, in this case validity might be suspect, primarily because a hypothesis should be proven or rejected on account of valid evidence. Quantitative analysis is often regarded as more “scientific”, and therefore more accurate, than qualitative analysis. This, however, is a perception that only holds true if the quantitative data can be shown to be objective, accurate and authentic. Qualitative data that is sufficiently corroborated is often more valid and accurate than quantitative data based on inaccurate or manipulated statistics.

Content analysis typically comprises three stages: stating the research problem; collecting and retrieving the text and employing sampling methods; and interpretation and analysis. Stating the problem will typically be done early in the thesis or dissertation. Collecting and retrieving text and employing sampling methods typically constitute the actual research process, which may include interviewing, literature study, etc.

It is a good idea to code your work as you write. Find one or more key words for every section and keep a record of them. In this manner you will be able to find arguments that belong together more easily, and you will be able to avoid duplicating the same content at different places in your thesis or dissertation. Most dedicated computer software enables you not only to keep content with the same code together, but also to access and even print it. This is especially valuable for structuring the contents of your thesis or dissertation in a logical narrative format and for coming to conclusions without contradicting yourself.
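Even without dedicated software, the mechanism can be mimicked in a few lines of Python. A minimal sketch, with hypothetical codes and section fragments, that groups and retrieves sections by code:

    from collections import defaultdict

    # Hypothetical (code, text segment) pairs recorded while writing.
    sections = [
        ("unemployment", "Unemployment in the region rose sharply after 2008."),
        ("health", "Clinics reported a steady increase in TB cases."),
        ("unemployment", "Respondents linked job losses to the mine closure."),
    ]

    index = defaultdict(list)
    for code, segment in sections:
        index[code].append(segment)

    # Retrieve all arguments that belong together, with a count per code.
    for code, segments in index.items():
        print(code, "-", len(segments), "segment(s)")
        for segment in segments:
            print("   ", segment)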

Content analysis sometimes incorporates a quantitative element. It is based on examining data for recurrent instances, i.e. patterns, of some kind. These instances are then systematically identified across the data set and grouped together. You should first decide on the unit of analysis: this could be the whole group, the group dynamics, the individual participants, or the participant’s utterances. The unit of analysis provides the basis for developing a coding system, and the codes are then applied systematically across a transcript. Once the data have been coded, a further issue is whether to quantify them via counting instances. Counting is an effective way in which to provide a summary or overview of the data set as a whole.

Interviewing is mostly used prior to doing content analysis, although literature study can also be used. Analysing data obtained through interviewing includes analysing data obtained from a focus group. This variation of content analysis usually begins by examining the text of similarly used words, themes, or answers to questions. Analysed data need to be arranged to fit the purpose of the research. This can, for example, be achieved by indexing data under certain topics or subjects or by using dedicated research software. In addition to individual ideas, the flow of ideas throughout the group should also be examined. It is, for example, important to determine which ideas enjoy the most support and agreement.

Paradigmatic approaches that fit well with content analysis include feminism, hermeneutics, interpretivism, modernism, post-colonialism and rationalism.

Summary

Comparative analysis:

  1. Analyses the conditions that lead to an outcome.
  2. Involves searching systematically for similarities and differences.
  3. Mostly uses secondary data sources.
  4. Is mostly used with qualitative research.

Theoretical abstractions can be used for comparative analysis.

Comparative analysis is used:

  1. To increase the diversity, variety and extent of an analysis.
  2. To analyse human activities.
  3. To analyse whole societies and systems within societies.

Content analysis:

  1. Can serve as the foundation for qualitative research.
  2. Can be used with qualitative and quantitative research.
  3. Extensively uses literature as data.
  4. Can also be used to analyse ethnographic data.

The purpose of content analysis is to identify patterns, themes, biases and meanings.

It typically comprises three stages: stating the research problem, collecting data, and analysing data.

Coding can be used with good effect in content analysis.

Close

You probably already noticed that the differences between data analysis methods are often just a matter of emphasis.

They share many elements.

For example, both comparative analysis and content analysis use literature as sources of data.

Both fit in better with qualitative research than with quantitative research.

This means that you can use more than one data analysis method to achieve the purpose of your research.

Enjoy your studies.

Thank you.


ARTICLE 88: Research Methods for Ph. D. and Master’s Degree Studies: Data Analysis Methods Part 1 of 7 Parts

Written by Dr. Hannes Nel

Isn’t life strange?

There are so many ways in which we can learn.

And the interrelatedness of events, phenomena and behaviour can be researched in so many ways.  

And we can discover truths and learn lessons by linking data, paradigms, research methods, and data collection and analysis methods.

And by changing the combination of research concepts, we can discover new lessons, knowledge and truths.

Research often deals with the analysis of data to discriminate between right and wrong, true and false.

Furthermore, people and life form a system with a multitude of links and correlations.

Consequently, we can learn by conducting research on even just an individual. 

I discuss the following two data analysis methods in this article:

  1. Analytical induction.
  2. Biographical analysis.

Analytical induction

Induction, in contrast to deduction, involves inferring general conclusions from particular instances. It is a way of gaining understanding of concepts and procedures by identifying and testing causal links between them. Analytical induction is, therefore, a procedure for analysing data which requires systematic analysis.

It aims to ensure that the analyst’s theoretical conclusions cover the entire range of the available data.

Analytical induction is a data analysis method that is often regarded as a research method. It uses inductive, as opposed to deductive, reasoning. Qualitative data can be analysed without making use of statistical methods. The process to be explained and the factors that explain the phenomenon are progressively redefined in an iterative process to maintain a perfect relationship between them.

The procedure of analytical induction means that you, as the researcher, form an initial hypothesis, a series of hypotheses, or a problem statement or question, and then search the data at your disposal for disconfirming evidence and formulate or modify your conclusions based on the available evidence. This is especially important if you work with a hypothesis, seeing that evidence can prove or refute a hypothesis.

Data are studied and analysed to generate or identify categories of phenomena; relationships between these categories are sought, and working typologies and summaries are written based on the data that you examined. These are then refined through subsequent cases and analysis. You should not only look for evidence that corroborates your premise but also for evidence that refutes it or calls for modification. Your original explanation or theory may be modified, accepted, enlarged or restricted, based on the conclusions to which the data lead you. Analytical induction typically follows the procedure below (a brief code sketch of the negative-case logic follows the list):

  1. A rough definition of the phenomenon to be explained is formulated.
  2. A hypothetical explanation of the phenomenon is formulated.
  3. A real-life case is studied in the light of the hypothesis, with the object of determining whether the hypothesis fits the facts in the case.
  4. If the hypothesis does not fit the facts, either the hypothesis is reformulated or the phenomenon to be explained is redefined, so that the case is excluded.
  5. Practical certainty may be attained after a small number of cases have been examined, but the discovery of negative evidence disproves the explanation and requires a reformulation.
  6. The procedure of examining cases, redefining the phenomenon, and reformulating the hypothesis is continued until a universal relationship is established, each negative case calling for a redefinition or a reformulation.
This procedure stands in contrast to theories generated by logical deduction from a priori assumptions.
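The following minimal sketch illustrates the negative-case logic of steps 3 to 6 with a fabricated hypothesis and fabricated cases; the attribute names are invented for illustration.

    # Fabricated cases: does attribute x always go together with outcome y?
    cases = [
        {"x": True, "y": True},
        {"x": True, "y": True},
        {"x": True, "y": False},   # a negative case
    ]

    # Working hypothesis: every case with x shows outcome y.
    def consistent(case):
        return (not case["x"]) or case["y"]

    negatives = [case for case in cases if not consistent(case)]

    if negatives:
        # Step 4: reformulate the hypothesis or redefine the phenomenon.
        print("Negative case(s) found, reformulate:", negatives)
    else:
        # Step 5: practical certainty, for now.
        print("No negative cases found.")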

Paradigmatic approaches that can be used with analytical induction include all paradigms where real-life case studies are conducted, for example transformative research, romanticism, relativism, rationalism, post-structuralism, neoliberalism and many more.

Biographical analysis

Biographical analysis focuses on an individual. It would mostly focus on a certain period in a person’s life when she or he did something or was somebody of note. Biographical analysis can include research on individual biographies, autobiographies, life histories and the history of somebody as told by those who know it. Data for a biographical analysis will mostly be archived documents, or at least documents that belong in an archive. Interviews can also be used if the person is still alive, or people who knew the individual well can be interviewed.

Although biographical analysis mostly deals with prominent individuals, it can also deal with humble people, people with tragic life experiences, people from whose life experiences lessons can be learned, etc. Regardless of whether the individual is or was a prominent person or not, you as the researcher will need to collect extensive information on the individual, develop a clear understanding of the historical and contextual background, and have the ability to write in a good narrative format.

You can approach a biographical analysis as a classical biography or as an interpretive biography. A classical biography is one in which you, as the researcher, would be concerned about the validity and criticism of primary sources so that you develop a factual base for explanations. An interpretive biography is a study in which your presence and your point of view are acknowledged in the narrative. Interpretive biographies recognise that, in a sense, the writer ‘creates’ the person in the narrative.

Summary

Analytical induction:

  1. Is a procedure for analysing data.
  2. Requires systematic analysis.
  3. Identifies and tests causal links between phenomena.
  4. Ensures complete coverage of data through theoretical conclusions.
  5. Is regarded as a research method by some.
  6. Progressively refines the explanation of phenomena.
  7. Searches for disconfirming evidence through hypothesis testing.
  8. Searches for relationships between phenomena.
  9. Modifies wrong conclusions.
  10. Identifies categories of phenomena.
  11. Enables the researcher to write and summarise working typologies.

Biographical analysis:

  1. Focuses on the individual.
  2. Can include research on individual biographies, autobiographies and life histories.
  3. Mostly falls back on archival documents.
  4. Can deal with anybody’s experiences from which others can gain value and learn lessons.
  5. Can be a classical or interpretive biography.

Close

In this article, we saw how we can gain knowledge by testing the validity, authenticity and accuracy of data.

We also saw that we can learn from the experiences of others.

There are many other ways in which we can discover knowledge by analysing existing data.

We will discuss them in the six articles that follow this one.

Enjoy your studies.

Thank you.
