While studying for my MSc in Knowledge Discovery and Data Mining, I was fortunate to work on a project for the Office of the Police and Crime Commissioner, Norfolk, UK. The project involved analysis of textual data, which was generated from a local policing survey from the relatively serene county of Norfolk.
Towards the end of the project, I considered how I could put the power of R and qualitative analysis into the hands of the OPCC, Norfolk. I realised that the OPCC staff were neither programmers nor statisticians, so expecting them to master R and statistical concepts was unfair. At this point, I stumbled upon the Shiny package for R, and decided to use it to transform most of my analyses into an interactive web application for use in future surveys.
The present application is a modified version of the original developed for the OPCC, Norfolk. It contains the following functionalities:
1. Importing custom corpus
Users may import their own corpus, provided it consists of plain-text (.txt) files: either a single .txt file, or a directory containing several .txt files.
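To illustrate the idea, here is a minimal base-R sketch of reading such a corpus from a directory of .txt files (the application itself handles this through the tm package's source readers; `read_corpus` is a hypothetical helper, not part of the app):

```r
# Read every .txt file in a directory into a named character vector,
# one string per document. A base-R sketch of corpus import.
read_corpus <- function(path) {
  files <- list.files(path, pattern = "\\.txt$", full.names = TRUE)
  sapply(files, function(f) paste(readLines(f, warn = FALSE), collapse = " "))
}
```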
2. Importing sample corpus
Users without a corpus of their own may use any of the three corpora included with the application: i) the UAE Expat forum corpus, ii) the UAE TripAdvisor corpus, and iii) the Middle East Politics forum corpus. The code used to scrape these can be accessed by clicking here.
3. Importing custom stopwords and thesauri
In addition to the stopwords shipped with the tm package in R, users may incorporate their own corpus-specific stopwords or thesauri into the pre-processing phase.
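The effect of a user-supplied stopword list can be sketched in base R as follows (the app applies such lists during pre-processing via tm; `remove_stopwords` and `extra_stops` are illustrative names, not the app's actual functions):

```r
# Drop user-supplied stopwords from a text, a base-R sketch of the
# pre-processing step. 'extra_stops' stands in for a corpus-specific list.
remove_stopwords <- function(text, extra_stops) {
  words <- unlist(strsplit(tolower(text), "[^a-z']+"))
  paste(words[!words %in% extra_stops & nchar(words) > 0], collapse = " ")
}

remove_stopwords("The survey responses mention the police", c("the"))
# -> "survey responses mention police"
```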
4. SMART term-weighting scheme
The SMART notation for assigning importance to words in a corpus, as provided in the tm package, can be utilised. For example, users may wish to weight words simply by their frequency of occurrence, so that the terms occurring most often in the corpus carry the highest importance. Alternatively, users may wish to assign greater importance to words that occur rarely in the corpus. The SMART weighting notation accommodates both objectives.
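To show the idea behind weighting rare terms more heavily, here is a hand-rolled tf-idf in base R (in the app this is delegated to tm's SMART weighting; `tf_idf` and the list-of-frequency-tables input are purely illustrative):

```r
# tf-idf sketch: 'freqs' is a list of named numeric vectors, one per
# document, mapping terms to raw counts. Terms appearing in few documents
# receive a larger idf boost.
tf_idf <- function(freqs, term) {
  n  <- length(freqs)                                        # corpus size
  df <- sum(sapply(freqs, function(f) term %in% names(f)))   # document frequency
  sapply(freqs, function(f) {
    tf <- if (term %in% names(f)) f[[term]] else 0
    tf * log(n / df)                                         # rare -> heavier
  })
}
```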
5. Calculating Word Frequency Distributions
Users may wish to see whether the word frequency distribution resulting from their corpus follows Zipf's law, or simply to record the most frequent words in the corpus. For these purposes, the Rank-Frequency and Word-Frequency plots are provided, which may also be downloaded by users.
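The rank-frequency table behind such a plot can be computed in a few lines of base R (the app draws the actual plots; `zipf_table` is a hypothetical helper). Under Zipf's law, log(frequency) falls roughly linearly with log(rank):

```r
# Build a rank-frequency table for the words in a text: rank 1 is the
# most frequent word. Plotting log(rank) against log(freq) then gives a
# visual check of Zipf's law.
zipf_table <- function(text) {
  words <- unlist(strsplit(tolower(text), "[^a-z']+"))
  freq  <- sort(table(words[nchar(words) > 0]), decreasing = TRUE)
  data.frame(rank = seq_along(freq), freq = as.integer(freq))
}
```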
6. Clustering Documents
It is often the case that certain documents in a corpus can be identified as belonging to the same topic, while others belong to a different one. Identifying such groups of documents is quite helpful, as we can then assign the documents to their respective groups and get a rough idea of our corpus.
In the present application, such clustering is performed using words and their frequencies. That is, it is assumed that documents containing the same words belong to the same group; the presence, absence, and frequency of words are taken into account to form clusters of documents.
The resulting plot is downloadable.
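The underlying computation can be sketched with base R's `dist` and `hclust` on a toy document-term matrix (the app builds this matrix from the corpus via tm; the documents and terms below are invented for illustration):

```r
# Toy document-term matrix: rows are documents, columns are word counts.
dtm <- rbind(doc1 = c(police = 4, survey = 2, beach = 0, hotel = 0),
             doc2 = c(police = 3, survey = 1, beach = 0, hotel = 1),
             doc3 = c(police = 0, survey = 0, beach = 5, hotel = 3))

hc  <- hclust(dist(dtm))   # hierarchy from inter-document distances
grp <- cutree(hc, k = 2)   # cut into two groups: docs sharing words cluster
```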
7. Clustering Words
Similar to clustering documents, we may also wish to identify groups of words by assuming that words which occur together — that is, occur in similar contexts — belong to the same group.
This application supports two forms of clustering, i) Partitional and ii) Hierarchical, where the former identifies 'flat' groups and the latter identifies hierarchies of groups.
The graphs produced for these purposes (Associative Word cloud and the Dendrogram) are downloadable.
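Both forms can be sketched in base R by transposing the document-term matrix so that rows are words (the app derives its matrices via tm; the toy data below is invented, and `kmeans`/`hclust` stand in for whatever the app uses internally):

```r
# Toy document-term matrix; transposing it makes rows words, so words
# that co-occur in the same documents sit close together.
dtm <- rbind(doc1 = c(police = 4, survey = 2, beach = 0, hotel = 0),
             doc2 = c(police = 3, survey = 1, beach = 0, hotel = 1),
             doc3 = c(police = 0, survey = 0, beach = 5, hotel = 3))
tdm <- t(dtm)

set.seed(1)
flat <- kmeans(tdm, centers = 2)$cluster   # partitional: 'flat' groups
tree <- hclust(dist(tdm))                  # hierarchical: dendrogram
```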
8. A Words’ network: an experimental plot
As an experiment, I have included another graph in the application — the Words' network. This graph only seeks to complement the Associative word cloud from the 'Clustering Words' section, in that it links words to each other and assigns colours to the links based on their weights. The more often a word co-occurs with another, the darker the link between them.
I realise that such graphs can sometimes be confusing, since nearly every word has some link to another. However, in my own analyses I have seen cases where disparate groups of words were clearly identified, which is why I have taken the opportunity to include the Words' network graph in this application.
The Words’ network graph is also downloadable.
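The co-occurrence counts that such link weights are based on can be sketched in base R (the app computes them from the term-document matrix; the toy `docs` corpus below is invented for illustration):

```r
# Count, for each pair of words, the documents in which both appear;
# heavier pairs would be drawn with darker links in the network.
docs <- list(c("police", "survey"),
             c("police", "survey", "hotel"),
             c("beach", "hotel"))

words <- sort(unique(unlist(docs)))
co <- matrix(0, length(words), length(words),
             dimnames = list(words, words))
for (d in docs) {
  for (a in d) for (b in d) if (a != b) co[a, b] <- co[a, b] + 1
}
```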
Presently, I am endeavouring to push the Shiny App to the Shiny Server, a great initiative by the RStudio team to enable useRs to host their Shiny Apps.
As soon as I succeed, I shall share the link to the online App here. In the meantime, I am sharing the GitHub page for the App, and hope that useRs find it interesting.