Methodology

My approach to this investigation is multimodal, both theoretically and methodologically. If you have not yet read my breakdown of my theoretical frame, then I strongly suggest that you read “The Current State of Affairs” first. With respect to my study, I began with a general proposition: there is more behind the accusations leveled against Johnny Depp than what the public was being told. I do not consider the viability of this proposition dubious, and therefore it is not something that requires testing. Beginning from this proposition, I knew that this investigation would be what is known as “exploratory”, which essentially means that I did not begin with any kind of hypothesis to test; I began with an irrefutable proposition. From there, I needed to source data appropriately, which I discuss later in this section. First, I believe it is important to describe the sophisticated research instruments I employ throughout this investigation.

Research Instruments

NVivo

NVivo is a state-of-the-art qualitative data analysis and storage software suite. It allows content to be imported from virtually any source and offers complex data management, analysis, and storage options. It is a robust tool that can capture and analyze almost any kind of content, from transcribed interviews to published articles, and it offers diverse options for advanced data queries, visualizations, and more. NVivo is the primary tool I used for capturing, coding, and analyzing online publications and all other documentation. You can find out more about NVivo here.

SAS

I also use another software suite known as the Statistical Analysis System (SAS), one of the leading analytics packages currently available. SAS allows users to mine, alter, manage, retrieve, and perform highly advanced statistical analyses on almost any kind of data. Whether performing a simple chi-squared test to determine whether a difference between two groups is significant, or a more complex procedure such as hierarchical linear modeling, SAS is a must-have. You can learn more about SAS here.
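For the technically inclined reader, here is a minimal sketch of the kind of chi-squared test described above, written in Python with SciPy rather than SAS so the example is self-contained; the counts are hypothetical placeholders, not figures from this investigation.

```python
# A minimal sketch of a chi-squared test of independence.
# All counts below are hypothetical placeholders.
from scipy.stats import chi2_contingency

# Rows: two hypothetical publishers; columns: articles coded as
# negative vs. positive toward the subject.
observed = [[120, 30],   # publisher A
            [60,  90]]   # publisher B

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}, dof = {dof}")
# A small p-value (e.g., < 0.05) suggests the difference in coverage
# between the two groups is unlikely to be due to chance alone.
```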

Data Sources

Data Collection 1: Online Articles and Available Evidence

The first direction I took was to compile online articles from any publication that dealt with the US and/or UK trials, Depp’s relationship with Heard, either party individually, and any available evidence. Using various search engines and the appropriate advanced search operators (more on Boolean operators can be found here), I located every available article published on these subjects between 1990 and 2020. I vetted these articles based on the journalistic integrity and credibility of the publisher, the size of their audience, and the relevance of the content (n=1,244). Relevance was not determined arbitrarily; rather, it was defined in terms of whether the publication’s content was largely ‘click-bait’, that is, whether the outlet was designed primarily to drive traffic and ad revenue or to publish at least minimally valuable content. Using NVivo’s capture tool, all articles and documents were captured and converted into text-recognizable PDF documents.
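As an aside for readers unfamiliar with Boolean operators, the following is a minimal, hypothetical sketch of how such queries are composed; the terms shown are illustrative and are not the exact queries used in this investigation.

```python
# A minimal sketch of composing a Boolean search query.
# The subject and topic terms are hypothetical placeholders.
subjects = ['"Johnny Depp"', '"Amber Heard"']
topics = ["trial", "lawsuit", "defamation", "evidence"]

# AND joins required groups; OR joins interchangeable terms.
query = f'({" OR ".join(subjects)}) AND ({" OR ".join(topics)})'
print(query)
# -> ("Johnny Depp" OR "Amber Heard") AND (trial OR lawsuit OR defamation OR evidence)
```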

Using NVivo, this first set of data was thematically coded, word-occurrence frequencies were established, and cluster analyses were run. Thematic coding is a process by which paragraphs and sentences within any content are grouped by their implied sentiment, and a cluster analysis is a procedure that determines levels of similarity between themed content. Throughout this process it was necessary to eliminate a great many words due to irrelevance, questionable credibility, and so forth; again, this was not arbitrarily determined. As can be expected from any comprehensive research, these results merely led to more questions about who was involved and in what way. The names of persons or entities that began to appear with greater frequency and with greater similarity within and across the coded themes were: “Harvey Weinstein”, “Ryan Kavanaugh”, “Relativity Media”, “Proxima”, “Art of Elysium”, “ACLU”, “Elon Musk”, and more. Consequently, I needed to perform another round of data collection.
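To make the frequency step concrete, here is a minimal sketch of establishing word-occurrence frequencies per coded theme, written in Python for illustration (NVivo performs the equivalent internally); the documents, theme labels, and stop-word list are hypothetical placeholders.

```python
# A minimal sketch of word-occurrence frequencies per coded theme.
# Documents, theme labels, and stop words are hypothetical placeholders.
from collections import Counter
import re

coded_docs = [
    ("finance", "Relativity Media filed for Chapter 11 protection last year."),
    ("finance", "The hedge fund backing the studio faced heavy losses."),
    ("media",   "Coverage of the trial intensified across major outlets."),
]

STOP_WORDS = {"the", "for", "of", "and", "last", "across"}

freq_by_theme = {}
for theme, text in coded_docs:
    words = re.findall(r"[a-z']+", text.lower())
    words = [w for w in words if w not in STOP_WORDS]  # manual elimination step
    freq_by_theme.setdefault(theme, Counter()).update(words)

for theme, counts in freq_by_theme.items():
    print(theme, counts.most_common(3))
```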

Data Collection 2: Online Articles, Chapter 11 Filings, and Securities and Exchange Commission (SEC) Documents

After interpreting the results from the first set of data, I began collecting a second set of documents to compile with the first. Using the same Boolean operators to refine searches even further, I used NVivo to capture every available online publication, Chapter 11 filing, SEC document, and so forth, related to the growing list of persons and entities attached to this case. These captures were then converted into text-recognizable PDF documents (n=3,376). Once again these data were thematically coded, word-occurrence frequencies were established, and cluster analyses were run. As with the first data collection and analyses, certain words needed to be manually removed from these analyses due to irrelevance. Having reached a satisfying level of data saturation, which is a fancy science term for the point in an investigation when no new information is discovered, I was certain that the data collection process was complete.
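For readers curious about how saturation can be judged in practice, here is a minimal, hypothetical sketch of a saturation check: collection stops once a new batch of documents contributes no previously unseen codes.

```python
# A minimal sketch of a data-saturation check.
# The batches and their codes are hypothetical placeholders.
batches = [
    {"bankruptcy", "settlement", "defamation"},
    {"defamation", "hedge fund", "bankruptcy"},
    {"settlement", "hedge fund"},          # nothing new -> saturation
]

seen = set()
for i, batch_codes in enumerate(batches, start=1):
    new = batch_codes - seen
    print(f"batch {i}: {len(new)} new code(s)")
    if not new:
        print("saturation reached; collection complete")
        break
    seen |= new
```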

Data Analysis and Summary

Because I established word-occurrence frequencies and cluster-analysis results for myriad combinations of articles, themes, names, entities, and thematic codes, it is best that I remain true to the scientific nature of this investigation and make transparent my reliance on my theoretical frameworks for analyzing these data and interpreting their results. Rather than present you, dear reader, with an endless cascade of tables and graphs, I’ve chosen to offer only those that pertain to the threads I followed based on the theoretical foundations I’ve already laid out for you in “The Current State of Affairs”.

First, I will present three examples of the cluster trees that illustrate the branches of thematically similar words across articles.

Cluster Tree 1
Cluster Tree 2
Cluster Tree 3

In layman’s terms, what these cluster trees illustrate is how similar words are, both in their frequency of occurrence and in the sentiment within which they are embedded. In this case, ‘sentiment’ is understood and coded in terms of the adjectives and verbs in the sentence and paragraph structure that scaffold a given word. Again, the above graphs are merely examples of the multitude of graphs I’ve created from these data. They are presented here to educate you, dear reader, and to ensure comprehension of my methods.
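For illustration, here is a minimal sketch of how a cluster tree (dendrogram) like those above can be produced from word-frequency vectors, written in Python with SciPy rather than NVivo; the words and counts are hypothetical placeholders.

```python
# A minimal sketch of building a cluster tree (dendrogram) from
# word-frequency vectors. All words and counts are hypothetical.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
import matplotlib.pyplot as plt

words = ["lawsuit", "settlement", "charity", "donation"]
# Rows: words; columns: frequency of occurrence in four article groups.
freq_vectors = np.array([
    [12, 3, 0, 1],
    [10, 4, 1, 0],
    [ 1, 0, 9, 7],
    [ 0, 1, 8, 9],
])

# Ward linkage groups words with similar occurrence patterns.
tree = linkage(freq_vectors, method="ward")
dendrogram(tree, labels=words)
plt.title("Illustrative cluster tree of word-occurrence similarity")
plt.show()
```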

Similarly, these cluster trees can be organized into a ‘Circle Graph’ that more accurately illustrates the strength of the similarities in thematic sentiment. I present to you, dear reader, the following Circle Graphs outlining weak and strong levels of thematic similarity:

Circle Graph Representing Weak Thematic Similarity
Circle Graph Representing Strong Thematic Similarity

Following from this, I present two Circle Graphs representing weak and strong levels of thematic dissimilarity. A good way to think of ‘dissimilarity’ is as a description of how the patterns of word occurrences and their sentiment are distinct from one another.

Circle Graph Representing Weak Dissimilarity
Circle Graph Representing Strong Dissimilarity

The most important consideration when reading these graphs is the strength of the relationship. Strongly dissimilar patterns and sentiments do not necessarily indicate a lack of involvement. On the contrary, the more strongly distinct the sentiment, the more likely it is that the data closely overlap but in different sentimental tones. In layman’s terms, they are distinct in sentiment but strongly related in occurrence and theme. This is a very important point for my readers to understand clearly and, consequently, it requires further explanation.
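Before turning to that explanation, here is a minimal numerical sketch of the point, with entirely hypothetical numbers: two terms can overlap almost completely in where they occur while carrying opposite sentiment.

```python
# A minimal sketch: high occurrence similarity, strong sentiment
# dissimilarity. All numbers are hypothetical placeholders.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Occurrence counts of two terms across five article groups.
occ_a = np.array([9, 7, 0, 1, 8])
occ_b = np.array([8, 7, 1, 0, 9])

# Average sentiment of the sentences containing each term (-1..+1).
sent_a = np.array([+0.8, +0.6, 0.0, +0.2, +0.7])
sent_b = np.array([-0.7, -0.6, 0.0, -0.1, -0.8])

print("occurrence similarity:", round(cosine(occ_a, occ_b), 2))   # near +1: same places
print("sentiment similarity: ", round(cosine(sent_a, sent_b), 2)) # near -1: opposite tone
```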

In several seminal publications by Granovetter, social network theory was expanded to consider the differences between strong and weak ties. To crudely summarize this great work, he concluded that social, financial, symbolic, and ideological exchanges between acquaintances have a broader and more profound societal effect than those between family members or close friends. The sound reasoning behind this argument is that acquaintances are much more likely to serve as a bridge between two distinct broader networks of people. That is to say, one broad network of individuals is far more likely to gain access to and be affected by information, values, norms, and so on shared between two acquaintances than between two close friends. Although this theory has received a great deal of empirical support, it is a notion most people can easily agree with anecdotally: it is far more likely that two disparate communities will be connected through acquaintances than through close friends. Following from that, corruption has never been succinctly reducible to whom a social or political figurehead knows best but, rather, to whom they have in their ‘back pocket’. Almost exclusively, these relationships are colloquially described as ‘associates’ or ‘acquaintances’.
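To make the ‘bridge’ idea concrete, here is a minimal sketch using the NetworkX library; the network and names are hypothetical placeholders, not actors in this case.

```python
# A minimal sketch of Granovetter's bridge idea: a single weak tie
# connecting two otherwise separate clusters of close friends.
# All nodes are hypothetical placeholders.
import networkx as nx

G = nx.Graph()
# Two tight clusters of close friends.
G.add_edges_from([("A1", "A2"), ("A2", "A3"), ("A1", "A3")])
G.add_edges_from([("B1", "B2"), ("B2", "B3"), ("B1", "B3")])
# A single weak tie between acquaintances in different clusters.
G.add_edge("A3", "B1")

# A bridge is an edge whose removal disconnects part of the network --
# exactly the role weak ties play between communities.
print(list(nx.bridges(G)))  # [('A3', 'B1')]
```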

Having summarized my research methodology, I now turn to a discussion of how these methods, in tandem with my theoretical frame, enable a clearer view of the nefarious corporate actors that stood to profit from the persecution of Johnny Depp and any other men who might be targeted by false allegations of sexual assault.

Data Collection by Theoretical Direction

July 1, 2022

There are many approaches to research. Qualitative methods might consist of fieldwork whereby researchers engage with participants to perform interviews, observe behaviors, administer tests, and so forth. Quantitative methods might consist of gathering large data sets through surveys, accessing secondary data sets for analysis, or some other super-geek awesomeness (which, dear reader, if you haven’t noticed yet, geeky shit is my bag). Just like the multimodal theoretical framework I’ve developed, in my research I’ve almost exclusively used mixed-methods approaches. The current investigation of the case of Johnny Depp demanded that I perform what academics and professional researchers call content analysis, among many other variations of the term. Essentially, this process is exactly what it sounds like: analyzing content, the results of which can then be analyzed using quantitative statistical methods.

With the demandingly large volume of content to collect and analyze, it was necessary to refine the scope of the data by way of my theoretical frame. Any valid data collection and analysis requires the formulation of theoretically informed inquiry. The difference between ‘research questions’ and the concept of inquiry is that the latter places greater demand on a researcher to (1) adhere to group consensus on the ethics of research, that is, the treatment and effects of the involvement of participants and investigators; and (2) employ rigorous methodology that avoids bias, examples of which include: (a) confirmation bias, where one finds what one is looking for; (b) selection bias, which occurs when a research sample is not representative of the wider population; and (c), perhaps most applicable in this investigation, framing bias, which occurs when the way data are presented affects decision-making (i.e., the way in which MSM ‘frames’ its stories with positive or negative connotations) and also, inversely, when the decision-making process for acquiring the data is affected by positive or negative beliefs held by researchers.

As you can imagine, my friends, avoiding bias is a near-impossible task. The most effective means of doing so is to remain as true to one’s theoretical framework as possible. As I have asserted, I am very confident that I began this exploratory investigation with an irrefutable proposition: there is more to this than what we are being told. I am very confident that my inquiry into this matter followed the meticulously designed theoretical elements I’ve previously outlined. Another important thing you must know, dear reader, is that these concepts are used ubiquitously among philosophers, academics, and laypersons alike, just under different nomenclature. Pierre Bourdieu would refer to much of what I’ve outlined as fields, that is, the social formations within which actors operate and through which they affect and are affected. Foucault would have called them regimes or epistemes. Regardless of what we call them, these social formations hold much sway over individual decision-making processes. This brings me to another question, one which must be burning in everyone’s mind.

How can you reject mainstream media because you claim it’s heavily biased and yet use it for the content of an analysis without being biased yourself?

My answer to this question is simple. The reliability of my research methods and instruments has been tested and refined over the years. In this case, for example, analyzing the frequency of word occurrences and their thematic (dis)similarities allows us to empirically measure the negative or positive sentiment within which those words occur. Thus, we are protected from the emotional or psychological effects intended by publishers and can instead evaluate the ways in which the information is presented and how these presentations vary over time.
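As an illustration of how sentiment can be measured rather than merely felt, here is a minimal, hypothetical sketch of lexicon-based sentence scoring; the lexicon and sentences are placeholders and do not reflect the actual codebook.

```python
# A minimal sketch of lexicon-based sentiment scoring.
# The lexicon and sentences are hypothetical placeholders.
POSITIVE = {"acclaimed", "generous", "credible"}
NEGATIVE = {"abusive", "troubled", "disgraced"}

def sentence_sentiment(sentence: str) -> int:
    words = set(sentence.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

sentences = [
    "The acclaimed actor was described as generous by colleagues.",
    "Reports painted a troubled and allegedly abusive figure.",
]
for s in sentences:
    print(sentence_sentiment(s), "->", s)
# Aggregating such scores per word, article, and year is one way to
# track how the tone of coverage varies over time.
```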

Consequently, faithful reader, I was faced with very serious and complex decisions about how to collect, perceive, analyze, process, and present enormous amounts of data. By capturing thousands of articles, observing word-frequency tables, following thematic sentiment (dis)similarity, and applying advanced statistical tests, I was faced with choices about how to refine my exploration of these data. I’ve refined my work to present it publicly, to assert my credibility, and to allow any other researcher to follow my methods and thus test my results. Through creating and interpreting hundreds of cluster trees and frequency tables, I applied my theoretical lens and have now refined for you, my friends, the first subset of data in accordance with my theoretical frame and methodology (n=232). There is much more data I’ve worked with and will continue working with (n=3,376), but this subset ought to satisfy the public, anyone critical of my sources, journalists, and Johnny supporters alike. In short, the sheer volume of these data demands that I divide this investigation into smaller, more manageable subsets.

The data set I’ve chosen to share primarily surrounds certain persons of interest who were selected on the basis of the analyses’ results. There are numerous other directions I will take. You can download my selected data set of content here.

In the name of even greater transparency, I am working on a codebook to illustrate the ways in which the words and sentences were thematically coded. This should be ready in a few weeks’ time. In the meantime, I am working on data visualizations to present to you.

Data Collection and Analysis

July 2, 2022

The results of the abovementioned processes revealed many names of interest: producers, directors, hedge-fund billionaires, and others whose social fields and interaction milieus would overlap with Johnny, Amber, and others involved in this case. Based on the coded sentiments and thematic (dis)similarities, and within the theoretical frame I have described, I have narrowed the data and subsequent analyses into subsets, constituent smaller investigations that will eventually operate in chorus. I hope this makes sense to you, dear reader. In sum, many smaller investigations, with their own data collection, analyses, and so forth, will combine into a final, broader report.

You can find my first presentation of analyses of these selected and refined data here. To help grasp the information through comprehensive infographics, you can find a field map and an interactive visual module HERE.

More coming soon…
