The proliferation of fake news has severe effects on society and individuals on multiple fronts. Fast-paced online content generation has brought with it the challenging problem of fake news, and automated systems that can make timely judgments about fake news have become the need of the hour. The performance of such systems relies heavily on feature engineering and requires an appropriate feature set to increase performance and robustness. In this context, this study employs two methods for reducing the number of feature dimensions, Chi-square and principal component analysis (PCA). These methods are employed with FakeNET, a hybrid neural network architecture that combines a convolutional neural network (CNN) with a long short-term memory (LSTM) model. The use of PCA and Chi-square aims to obtain appropriate feature vectors for better performance and lower computational complexity. A multi-class dataset comprising the 'agree', 'disagree', 'discuss', and 'unrelated' classes, obtained from the Fake News Challenge (FNC) website, is used. Further contextual features for identifying bogus news are obtained through PCA and Chi-square, to which nonlinear characteristics are added. The purpose of this study is to determine the article's stance with respect to its headline. The proposed approach yields gains of 0.04 in accuracy and 0.20 in F1 score. As per the experimental results, PCA achieves a higher accuracy of 0.978 than both Chi-square and state-of-the-art approaches.
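The pipeline described above can be sketched in code. The snippet below is an illustrative sketch only, not the authors' FakeNET implementation: it assumes TF-IDF features over headline/body text, scikit-learn for Chi-square selection and PCA, and Keras for the CNN-LSTM hybrid; all function names, layer sizes, and the reduced dimensionality k are hypothetical choices for illustration.

```python
# Illustrative sketch (not the authors' FakeNET code): TF-IDF features,
# dimensionality reduction via chi-square selection or PCA, then a hybrid
# Conv1D + LSTM classifier over the reduced vectors. Labels follow FNC stances.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.decomposition import PCA
from tensorflow.keras import layers, models

LABELS = ["agree", "disagree", "discuss", "unrelated"]

def reduce_features(X_tfidf, y, method="pca", k=300):
    """Reduce TF-IDF dimensionality with either chi-square selection or PCA."""
    if method == "chi2":
        # Chi-square keeps the k terms most dependent on the stance label.
        return SelectKBest(chi2, k=k).fit_transform(X_tfidf, y)
    # PCA projects onto the k directions of highest variance (needs dense input).
    return PCA(n_components=k).fit_transform(X_tfidf.toarray())

def build_cnn_lstm(k=300, timesteps=10, n_classes=4):
    """Hybrid CNN-LSTM: the k reduced features are reshaped into a short
    pseudo-sequence so convolution and recurrence can both be applied."""
    model = models.Sequential([
        layers.Input(shape=(k,)),
        layers.Reshape((timesteps, k // timesteps)),
        layers.Conv1D(64, kernel_size=3, padding="same", activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.LSTM(64),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Example usage with random stand-in features (real work would build X_red from
# FNC headline/body text via TfidfVectorizer and reduce_features):
rng = np.random.default_rng(0)
X_red = rng.random((200, 300))        # pretend these are PCA-reduced vectors
y = rng.integers(0, 4, size=200)      # stance label indices into LABELS
model = build_cnn_lstm(k=300)
model.fit(X_red, y, epochs=1, batch_size=32, verbose=0)
```

Reshaping the reduced feature vector into a short pseudo-sequence is one simple way to let a Conv1D/LSTM stack consume non-sequential input; the actual FakeNET architecture may arrange its CNN and LSTM components differently.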