Article in journal (Refereed) Published
2016 (English) In: User modeling and user-adapted interaction, ISSN 0924-1868, E-ISSN 1573-1391, Vol. , p. 69-101
Place, publisher, year, edition, pages: 2016. © 2016, Springer Science+Business Media Dordrecht.
Keywords: Evaluation, Experimentation, Recommender systems, Reproducibility
National Category: Computer Systems
Research subject: Computer and Information Sciences Computer Science, Computer Science
Identifiers: URN: urn:nbn:se:lnu:diva-56225; DOI: 10.1007/s1125-x; ISI: 000373021900003; Scopus ID: 2-s2.0-84960395171; OAI: oai:DiVA.

Abstract

Numerous recommendation approaches are in use today. However, comparing their effectiveness is a challenging task because evaluation results are rarely reproducible. In this article, we examine the challenge of reproducibility in recommender-system research. We conduct experiments using Plista's news recommender system and Docear's research-paper recommender system. The experiments show that there are large discrepancies in the effectiveness of identical recommendation approaches in only slightly different scenarios, as well as large discrepancies for slightly different approaches in identical scenarios. For example, in one news-recommendation scenario, the performance of a content-based filtering approach was twice as high as the second-best approach, while in another scenario the same content-based filtering approach was the worst-performing approach. We found several determinants that may contribute to the large discrepancies observed in recommendation effectiveness. Determinants we examined include user characteristics (gender and age), datasets, weighting schemes, the time at which recommendations were shown, and user-model size. Some of the determinants have interdependencies; for instance, the optimal size of an algorithm's user model depended on users' age. Since minor variations in approaches and scenarios can lead to significant changes in a recommendation approach's performance, ensuring reproducibility of experimental results is difficult. We discuss these findings and conclude that to ensure reproducibility, the recommender-system community needs to (1) survey other research fields and learn from them, (2) find a common understanding of reproducibility, (3) identify and understand the determinants that affect reproducibility, (4) conduct more comprehensive experiments, (5) modernize publication practices, (6) foster the development and use of recommendation frameworks, and (7) establish best-practice guidelines for recommender-systems research.
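The kind of determinant interdependency the abstract describes (e.g., the best user-model size depending on users' age) can be probed with a simple offline evaluation sweep. The sketch below is purely illustrative and is not the paper's methodology: the `precision_at_k` metric is standard, but the rankings, relevance sets, and the determinant values swept are fabricated toy data.

```python
# Illustrative sketch: sweeping evaluation determinants (user age group
# and user-model size) and measuring effectiveness per combination.
# All data below is fabricated for illustration, not from the paper.

from itertools import product

def precision_at_k(recommended, relevant, k=3):
    """Fraction of the top-k recommended items that are relevant."""
    top_k = recommended[:k]
    hits = sum(1 for item in top_k if item in relevant)
    return hits / k

# Toy rankings produced under different (age group, user-model size)
# conditions, and toy per-group relevance judgments.
rankings = {
    ("young", 10): ["a", "b", "c"],
    ("young", 50): ["a", "x", "y"],
    ("old", 10):   ["x", "y", "a"],
    ("old", 50):   ["a", "b", "y"],
}
relevant = {"young": {"a", "b", "c"}, "old": {"a", "b"}}

# Evaluate every combination of the two determinants.
results = {}
for age_group, model_size in product(["young", "old"], [10, 50]):
    ranked = rankings[(age_group, model_size)]
    results[(age_group, model_size)] = precision_at_k(ranked, relevant[age_group])

# The "optimal" user-model size can differ per age group, mirroring
# the interdependency between determinants described in the abstract.
best_size = {g: max([10, 50], key=lambda s: results[(g, s)])
             for g in ["young", "old"]}
```

In this toy data the best model size is 10 for the "young" group and 50 for the "old" group, which is exactly the sort of interaction that makes a single reported result hard to reproduce under slightly different conditions.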