Recommender Systems

New Paper: Utilizing Mind-Maps for Information Retrieval and User Modelling

We recently submitted a paper to UMAP (the Conference on User Modeling, Adaptation, and Personalization). The paper is about how mind-maps could be utilized by information retrieval applications such as recommender systems. The paper was accepted, which means we will be in Aalborg, Denmark, from July 7 until July 11 to present it. If you are a researcher in the field of information retrieval, user modelling, or mind-mapping, you might be interested in the pre-print. By the way, if you find any errors in it, we would highly appreciate it if you told us (ideally today). Similarly, if you are interested in a research partnership, or if you are also at UMAP 2014 and would like to discuss our research, please contact us. Abstract. Mind-maps Read more…
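If you are curious how a mind-map can drive a recommender system at all, the basic idea is to treat the text of the mind-map's nodes as a description of the user's interests. The following is a minimal, hypothetical sketch in Python (not the approach evaluated in the paper; the mind-map structure and the simple term counting are made up for illustration):

from collections import Counter
import re

# Hypothetical mind-map: each node has a text label and child nodes.
mind_map = {
    "text": "recommender systems",
    "children": [
        {"text": "content-based filtering", "children": []},
        {"text": "collaborative filtering", "children": [
            {"text": "matrix factorization", "children": []},
        ]},
    ],
}

def collect_terms(node):
    """Recursively gather the words from all node labels."""
    terms = re.findall(r"[a-z]+", node["text"].lower())
    for child in node["children"]:
        terms.extend(collect_terms(child))
    return terms

# A simple term-frequency user model; an IR application could match it
# against candidate papers, e.g. as a weighted search-engine query.
user_model = Counter(collect_terms(mind_map))
print(user_model.most_common(3))

An actual system would likely apply more sophisticated weighting than raw term frequency before matching the resulting model against candidate papers.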

By Joeran Beel
Help Wanted

Wanted: Participants for a User Study about Docear’s Recommender System

We kindly ask you to participate in a brief study about Docear’s recommender system. Your participation will help us improve the recommender system and secure long-term funding for the development of Docear in general! If you are willing to invest 15 minutes of your time, please continue reading.

Participate in the Study

  1. Start Docear.
  2. Click on the “Show Recommendations” button.
  3. Click on all recommendations, so they open in your web browser. Click on them even if you already know a paper, or if a paper was recommended previously.
  4. For each recommended paper, please read at least the abstract. You may also read the entire paper if you like, or at least skim through it.
  5. Rate the recommendations: The better the current recommendations are, the more stars Read more…

By Joeran Beel
Jobs & Internships

Do a paid internship at Docear in Germany, summer 2014 (US, UK, and CA Bachelor students only)

As in previous years, we offer Bachelor students from the US, UK, or Canada the opportunity to do a paid internship at Docear in summer 2014 (if you are from Germany, please read here). The internship should last 8-12 weeks, with the earliest start date being May and the latest being August. You will receive 650 Euros a month, a 160 Euro travel allowance, and health insurance. International travel costs will not be covered. You will be placed with Docear’s core team in Magdeburg, close to Berlin. So, if you love Docear as much as we do, are a passionate software developer or statistician (or want to become one), and are interested in spending your next summer Read more…

By Joeran Beel
Recommender Systems

New paper: “A Comparative Analysis of Offline and Online Evaluations and Discussion of Research Paper Recommender System Evaluation”

Yesterday, we published a pre-print on the shortcomings of current research-paper recommender system evaluations. One of the findings was that results of offline and online experiments sometimes contradict each other. We analyzed this issue in more detail and wrote a new paper about it. More specifically, we conducted a comprehensive evaluation of a set of recommendation algorithms using (a) an offline evaluation and (b) an online evaluation. We compared the results of the two evaluation methods to determine whether and when they contradict each other. Subsequently, we discuss the differences and the validity of the evaluation methods, focusing on research-paper recommender systems. The goal was to identify which of the evaluation methods is most authoritative, or whether some methods are unsuitable in general. By ‘authoritative’, we mean which evaluation method one should trust when the results of different methods contradict each other.

Bibliographic data: Beel, J., Langer, S., Genzmehr, M., Gipp, B. and Nürnberger, A. 2013. A Comparative Analysis of Offline and Online Evaluations and Discussion of Research Paper Recommender System Evaluation. Proceedings of the Workshop on Reproducibility and Replication in Recommender Systems Evaluation (RepSys) at the ACM Recommender System Conference (RecSys) (2013), 7–14.
Our current results cast doubt on the meaningfulness of offline evaluations. We showed that offline evaluations often could not predict the results of online experiments (measured by click-through rate, CTR), and we identified two possible reasons. The first reason for the lacking predictive power of offline evaluations is that they ignore human factors. These factors may strongly influence whether users are satisfied with recommendations, regardless of the recommendations’ relevance. We argue that it probably will never be possible to determine when and how influential human factors are in practice. Thus, it is impossible to determine when offline evaluations have predictive power and when they do not. Assuming that the only purpose of offline evaluations is to predict results in real-world settings, the plausible consequence is to abandon offline evaluations entirely. (more…)
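To illustrate what “contradicting results” means in practice, here is a minimal sketch, with made-up algorithm names and numbers (not data from the paper), of how an offline ranking based on precision can disagree with an online ranking based on CTR:

# Minimal sketch: compare algorithm rankings from an offline metric
# (e.g. precision) with rankings from the online metric CTR.
# All names and numbers below are illustrative, not results from the paper.

offline_precision = {"stereotype": 0.12, "cbf_terms": 0.35, "cbf_citations": 0.28}

# Online experiment: clicks and displayed recommendations per algorithm.
online_stats = {
    "stereotype":    {"clicks": 120, "displays": 2000},
    "cbf_terms":     {"clicks": 150, "displays": 3000},
    "cbf_citations": {"clicks": 180, "displays": 2500},
}

def ctr(stats):
    """Click-through rate: fraction of displayed recommendations that were clicked."""
    return stats["clicks"] / stats["displays"]

offline_ranking = sorted(offline_precision, key=offline_precision.get, reverse=True)
online_ranking = sorted(online_stats, key=lambda a: ctr(online_stats[a]), reverse=True)

print("offline:", offline_ranking)  # ['cbf_terms', 'cbf_citations', 'stereotype']
print("online: ", online_ranking)   # ['cbf_citations', 'stereotype', 'cbf_terms']

# If the two orderings disagree, the offline metric failed to predict
# which algorithm users actually preferred.

When such disagreements occur regularly, the offline metric cannot serve as a proxy for user satisfaction, which is exactly the concern raised in the paper.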

By Joeran Beel
Recommender Systems

New pre-print: “Research Paper Recommender System Evaluation: A Quantitative Literature Survey”

As you might know, Docear has a recommender system for research papers, and we are putting a lot of effort into improving it. In fact, the development of the recommender system is part of my PhD research. When I began working on the recommender system some years ago, I became quite frustrated: there were so many different approaches for recommending research papers, but I had no clue which one would be most promising for Docear. I read many, many papers (far more than 100), and although the papers presented many interesting ideas, the evaluations... well, most of them were poor. Consequently, I just did not know which approaches to use in Docear. Since then, we have reviewed all these papers more carefully and analyzed how exactly the authors conducted their evaluations. More precisely, we analyzed the papers with respect to the following questions.

  1. To what extent do authors perform user studies, online evaluations, and offline evaluations?
  2. How many participants do user studies have?
  3. Against which baselines are approaches compared?
  4. Do authors provide information about the algorithms’ runtime and computational complexity?
  5. Which metrics are used for algorithm evaluation, and do different metrics provide similar rankings of the algorithms?
  6. Which datasets are used for offline evaluations?
  7. Are results comparable among different evaluations based on different datasets?
  8. How consistent are online and offline evaluations? Do they provide the same, or at least similar, rankings of the evaluated approaches?
  9. Do authors provide sufficient information to re-implement their algorithms or replicate their experiments?
(more…)

By Joeran Beel
Docear

Docear at JCDL 2013 in Indianapolis (USA), three demo papers, proof-reading wanted

Three of our submissions to the ACM/IEEE Joint Conference on Digital Libraries (JCDL) were accepted. They relate to recommender systems, reference management, and PDF metadata extraction:

Docear4Word: Reference Management for Microsoft Word based on BibTeX and the Citation Style Language (CSL)

In this demo paper, we introduce Docear4Word. Docear4Word enables researchers to insert and format their references and bibliographies in Microsoft Word, based on BibTeX and the Citation Style Language (CSL). Docear4Word features over 1,700 citation styles (Harvard, IEEE, ACM, etc.), is published as an open-source tool on http://docear.org, and runs with Microsoft Word 2002 and later on Windows XP and later. Docear4Word is similar to the MS Word add-ons that reference managers like Endnote, Zotero, or Citavi offer, with the difference that it is developed to work with the de facto standard BibTeX and hence with almost any reference manager.
(more…)
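Docear4Word itself is a Word add-in, but the core idea of rendering one BibTeX entry in different citation styles can be illustrated with a small, hypothetical sketch (this is not Docear4Word’s code, the entry is dummy data, and a real implementation would use a full CSL processor to support the 1,700+ styles):

# Toy illustration of rendering a BibTeX-like entry in different citation
# styles. Not Docear4Word's actual code; real CSL processing is far more
# involved, and style definitions come from CSL files.

entry = {  # dummy data mimicking the fields of a BibTeX @inproceedings entry
    "author": "Doe, J. and Roe, R.",
    "title": "An Example Paper",
    "booktitle": "Proceedings of an Example Conference",
    "year": "2013",
}

def ieee_like(e):
    """Render the entry in a simplified IEEE-like bibliography style."""
    return f'{e["author"]}, "{e["title"]}," in {e["booktitle"]}, {e["year"]}.'

def harvard_like(e):
    """Render the same entry in a simplified Harvard-like style."""
    return f'{e["author"]} ({e["year"]}). {e["title"]}. {e["booktitle"]}.'

print(ieee_like(entry))
print(harvard_like(entry))

The point of building on BibTeX and CSL is exactly this separation: the reference data stays in one format, while the style layer decides how each citation and bibliography entry is rendered.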

By Joeran Beel
Information Extraction

Metadata retrieval and recommendations deactivated due to heavy server load

We are experiencing a very high server load for several reasons (many people are using our services, we are running some extensive research analyses, etc.). Therefore, we have decided to deactivate the metadata retrieval and recommendations for a while, hopefully only a few days. We will let you know as soon as the services are available again.

UPDATE (April 15th): Service is online again!

By Joeran Beel