
dc.contributor.author Chai, Kevin Eng Kwong
dc.contributor.supervisor Dr. Vidyasagar Potdar

Web 2.0 platforms such as forums, blogs and wikis allow users from their communities to contribute content. However, users often receive little, if any, professional training in content creation, and content is commonly published without peer review. Excessive low quality user contributions can lead to information overload, the situation in which a user feels overwhelmed by unwanted information. Information overload can cause users to withdraw from using a website, decreasing the website's overall sustainability through the loss of users from its community.

Many Web 2.0 websites have relied on their users to manually rate the quality of User Generated Content (UGC) to deal with this problem. However, the major problems with this approach are that rating is voluntary, so a large percentage of content receives few or no ratings, and that UGC is often created faster than it can be sufficiently rated. Automated content quality assessment models are therefore required to address the problems caused by manual user rating.

A number of automated models have been proposed in recent years for Web 2.0 platforms. However, we identified many limitations with these existing models in our literature review. For example, the majority of models are only suitable for a specific language, such as English, and have not effectively considered how content is used by the user community in the assessment process. To address these limitations, we propose a novel, language-independent model that evaluates the content, usage, reputation, temporal and structural dimensions of UGC for quality assessment.

We developed our model using Web technologies and a supervised machine learning approach. More specifically, we employed a rule learner, a fuzzy logic classifier and Support Vector Machines. We validated our model on three operational Web forums, where it outperformed existing models from the literature in our experiments.
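The supervised approach described above can be illustrated with a minimal sketch. This is not the thesis's actual pipeline: the feature names, synthetic data and labelling rule are all hypothetical, chosen only to mirror the content, usage, reputation, temporal and structural dimensions named in the abstract, with an SVM as the classifier.

```python
# Hypothetical sketch: an SVM quality classifier over per-post features.
# Feature columns (all invented for illustration):
#   content length, view count, author reputation, post age, reply depth
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.random((200, 5))  # 200 synthetic posts, 5 normalised features

# Synthetic ground truth: call a post "high quality" when its content
# and reputation features are jointly large (purely illustrative rule).
y = (X[:, 0] + X[:, 2] > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)  # train the SVM
acc = accuracy_score(y_test, clf.predict(X_test))
print(f"held-out accuracy: {acc:.2f}")
```

In a real deployment the features would be extracted from forum posts and the labels would come from human quality judgements, as in the manual-rating data the thesis seeks to replace.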
We used the Friedman test and the Nemenyi test to verify our results and found that the performance improvements achieved by our model are statistically significant relative to the existing models.
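The statistical procedure mentioned here can be sketched as follows. The scores below are invented for illustration (they are not the thesis's results): three hypothetical models are compared across six datasets with the Friedman test, followed by the Nemenyi critical distance computed from its standard formula.

```python
# Hypothetical sketch: Friedman test + Nemenyi critical distance for
# comparing k models over n datasets. All scores are made up.
import math
from scipy.stats import friedmanchisquare

model_a = [0.81, 0.79, 0.84, 0.76, 0.88, 0.82]  # e.g. the proposed model
model_b = [0.74, 0.71, 0.78, 0.70, 0.80, 0.75]  # baseline 1
model_c = [0.69, 0.72, 0.70, 0.68, 0.74, 0.71]  # baseline 2

# Friedman test: do the models' rankings differ across datasets?
stat, p = friedmanchisquare(model_a, model_b, model_c)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")

# Nemenyi post-hoc: two models differ significantly when their mean
# ranks differ by more than the critical distance CD.
k, n = 3, 6
q_alpha = 2.343  # studentized-range-based constant for k = 3, alpha = 0.05
cd = q_alpha * math.sqrt(k * (k + 1) / (6 * n))
print(f"Nemenyi critical distance = {cd:.2f}")
```

If the Friedman test rejects the null hypothesis of equal performance, the Nemenyi comparison then identifies which pairs of models differ significantly, which is the role these tests play in the evaluation described above.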

dc.publisher Curtin University
dc.subject user generated content
dc.subject automated quality assessment
dc.subject machine learning-based approach
dc.subject web forums
dc.title A machine learning-based approach for automated quality assessment of user generated content in web forums
curtin.department Digital Ecosystems and Business Intelligence Institute
curtin.accessStatus Open access
