Docear 1.1.1 Beta with Academic Search Feature

As you may know, Docear features a recommender system for academic literature. To find out which papers you might be interested in, the recommender system parses your mind maps and compares them against our digital library, which currently contains about 1.8 million academic articles. While this is helpful and might point you to papers relevant for your general research goals, sometimes you need information on a specific topic and hence want to search for it directly.
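For the technically curious: conceptually, such matching can be done by turning the text of your mind maps into a term vector and ranking library documents by their similarity to it. The following minimal sketch illustrates this with TF-IDF and cosine similarity; the tiny corpus, the profile text, and the TF-IDF choice are illustrative assumptions, not a description of Docear's actual pipeline.

# Minimal sketch of content-based matching (illustrative only, not Docear's code):
# rank library documents by TF-IDF cosine similarity to the user's mind-map text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "recommender systems for research papers",
    "mind mapping software for academics",
    "deep learning for image recognition",
]  # stand-in for the digital library (~1.8 million articles in reality)

user_profile = "research paper recommender system evaluation"  # text harvested from mind maps

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(corpus)          # one row per library document
profile_vector = vectorizer.transform([user_profile])   # the user's "interest" vector

scores = cosine_similarity(profile_vector, doc_vectors).ravel()
for i in scores.argsort()[::-1]:                        # highest similarity first
    print(f"{scores[i]:.3f}  {corpus[i]}")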

Based on our knowledge about recommender systems and some user requests, we decided to implement a direct search feature for our digital library. I am very grateful to Keystone, who supported me in visiting Dr. Georgia Kapitsaki at the University of Cyprus (UCY) in Nicosia for a full month to work on this idea. Dr. Kapitsaki already supported us in our work on Docear’s recommender system in July 2013. Her knowledge of the inner mechanics and her ideas on the search engine were essential for the implementation and the research part of the project.

How to use it

You can access the search feature from Docear’s ribbon bar (“Search and Filter > Documents > Online search”) or by double-clicking the “Online search” entry in Docear’s workspace panel. Since both the recommender system and the personalized search engine make use of your mind maps, you need to enable the recommendation service in Docear.

After opening the search page, you will see

  • a text box for your search query,
  • a “Search” button, and
  • several buttons below the text box reflecting search terms you might be interested in; if Docear does not have enough data to infer your interests, this part remains empty (a sketch of how such suggestions could be derived follows below the screenshot).

[Screenshot: Docear’s online search interface]
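How might those suggested search terms be computed? A plausible (and purely hypothetical) approach is to count the most frequent terms across your mind-map nodes, which also explains why the area stays empty when there is too little data. The node texts, stopword list, and function name below are made up for illustration; this is not Docear's implementation.

# Hypothetical sketch: deriving suggested search terms from mind-map node texts
# by simple term frequency. Not Docear's actual implementation.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "for", "and", "to", "in", "on"}

def suggest_terms(node_texts, k=5):
    """Return the k most frequent non-stopword terms across all node texts."""
    counts = Counter()
    for text in node_texts:
        for term in re.findall(r"[a-z]+", text.lower()):
            if term not in STOPWORDS and len(term) > 2:
                counts[term] += 1
    return [term for term, _ in counts.most_common(k)]

nodes = ["Recommender systems", "Evaluation of recommender systems",
         "Click-through rate", "Offline evaluation"]
print(suggest_terms(nodes))  # with too few nodes, the list may simply be empty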

Docear 1.0.3 Beta: rate recommendations, new web interface, bug fixes, …

Update, February 18, 2014: No bugs were reported, so we declare Docear 1.0.3 with its recommender system stable. It can be downloaded from the normal download page.


With Docear 1.0.3 beta we have improved PDF handling and the recommender system, provided some help for new users, and enhanced the way you can access your mind maps online.

PDF Handling

We fixed several minor bugs with regard to PDF handling. In previous versions of Docear, nested PDF bookmarks were imported twice when you dragged & dropped a PDF file onto a mind map. Renaming PDF files from within Docear changed the file links in your mind maps but did not change them in your BibTeX file. Both issues are fixed now. To rename a PDF file from within Docear, right-click it in Docear’s workspace panel on the left-hand side; note that the mind maps in which the file is linked must be open. We know this is still not ideal, and we will improve it in future versions of Docear.
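To see why the old behavior was a bug: a rename has to update every file that references the PDF, i.e. the mind maps (.mm files) as well as the BibTeX file. The sketch below illustrates the idea with a naive text replacement; the paths are made up and the strategy is a simplification, not Docear's actual code.

# Simplified sketch (not Docear's code): rename a PDF and update every file
# that references it, covering both mind maps (.mm) and the BibTeX file (.bib).
import os

def rename_pdf(old_path, new_path, referencing_files):
    os.rename(old_path, new_path)
    old_name, new_name = os.path.basename(old_path), os.path.basename(new_path)
    for path in referencing_files:
        with open(path, encoding="utf-8") as f:
            content = f.read()
        if old_name in content:                      # update links via naive replacement
            with open(path, "w", encoding="utf-8") as f:
                f.write(content.replace(old_name, new_name))

# The bug fixed in 1.0.3 amounted to updating the .mm files but not the .bib file:
rename_pdf("papers/old.pdf", "papers/new.pdf", ["literature.mm", "references.bib"])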

Rate Your Recommendations

You already know about our recommender system for academic literature. If you want to help us improve it, you can now rate how well a specific set of recommendations reflects your personal field of interest. By the way, please do not rate a set of recommendations negatively only because it contains some recommendations you have received previously: currently, we have no mechanism to detect duplicate recommendations.

[Screenshot: rating a set of literature recommendations]
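For illustration, such duplicate detection could be as simple as remembering the IDs of previously delivered papers and filtering new candidates against them. This is a hypothetical sketch, not Docear's code; the IDs and data structures are invented.

# Hypothetical sketch of duplicate detection: drop candidate papers whose IDs
# appeared in earlier recommendation sets. Not Docear's actual code.

def filter_duplicates(candidates, already_shown):
    """Keep only papers that have not been delivered before."""
    return [paper for paper in candidates if paper["id"] not in already_shown]

shown = {"doc-17", "doc-42"}                        # IDs from earlier sets
candidates = [{"id": "doc-42", "title": "Already seen"},
              {"id": "doc-99", "title": "Genuinely new"}]
print(filter_duplicates(candidates, shown))         # only "doc-99" survives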

New paper: “A Comparative Analysis of Offline and Online Evaluations and Discussion of Research Paper Recommender System Evaluation”

Yesterday, we published a pre-print on the shortcomings of current research-paper recommender system evaluations. One of the findings was that results of offline and online experiments sometimes contradict each other. We analyzed this issue in more detail and wrote a new paper about it. More specifically, we conducted a comprehensive evaluation of a set of recommendation algorithms using (a) an offline evaluation and (b) an online evaluation. We compared the results of the two evaluation methods to determine whether and when they contradict each other. Subsequently, we discuss the differences and validity of evaluation methods, focusing on research-paper recommender systems. The goal was to identify which of the evaluation methods is most authoritative, or whether some methods are unsuitable in general. By ‘authoritative’, we mean which evaluation method one should trust when results of different methods contradict each other.

Bibliographic data: Beel, J., Langer, S., Genzmehr, M., Gipp, B. and Nürnberger, A. 2013. A Comparative Analysis of Offline and Online Evaluations and Discussion of Research Paper Recommender System Evaluation. Proceedings of the Workshop on Reproducibility and Replication in Recommender Systems Evaluation (RepSys) at the ACM Recommender System Conference (RecSys) (2013), 7–14.

Our current results cast doubt on the meaningfulness of offline evaluations. We showed that offline evaluations often could not predict the results of online experiments (measured by click-through rate, CTR), and we identified two possible reasons.
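To make the comparison concrete: online, CTR is simply clicks divided by the number of times recommendations were displayed, and one can check how well the offline ranking of algorithms agrees with the online (CTR) ranking, for example via a rank correlation. The sketch below uses made-up numbers purely for illustration; they are not results from the paper.

# Illustrative comparison of offline vs. online evaluation results.
# All numbers are invented for the example, not results from the paper.
from scipy.stats import kendalltau

algorithms = ["A", "B", "C", "D"]
offline_score = [0.40, 0.35, 0.30, 0.20]       # e.g. offline precision per algorithm
clicks        = [120,  96,  200,  60]          # from an online experiment
displayed     = [4000, 3000, 5000, 2500]
ctr = [c / d for c, d in zip(clicks, displayed)]   # CTR = clicks / impressions

tau, p = kendalltau(offline_score, ctr)
print({a: round(r, 4) for a, r in zip(algorithms, ctr)})
print(f"Kendall tau between offline and online rankings: {tau:.2f} (p={p:.2f})")
# A tau near zero or negative means the two evaluation methods rank the
# algorithms differently, i.e. their results contradict each other.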

The first reason for the lacking predictive power of offline evaluations is that they ignore human factors. These factors may strongly influence whether users are satisfied with recommendations, regardless of the recommendations’ relevance. We argue that it will probably never be possible to determine when and how influential human factors are in practice. Thus, it is impossible to determine when offline evaluations have predictive power and when they do not. Assuming that the only purpose of offline evaluations is to predict results in real-world settings, the plausible consequence is to abandon offline evaluations entirely.

New pre-print: “Research Paper Recommender System Evaluation: A Quantitative Literature Survey”

As you might know, Docear has a recommender system for research papers, and we are putting a lot of effort into improving it. Actually, the development of the recommender system is part of my PhD research. When I began my work on the recommender system some years ago, I became quite frustrated: there were so many different approaches for recommending research papers, but I had no clue which one would be most promising for Docear. I read many, many papers (far more than 100), and although the papers presented many interesting ideas, the evaluations… well, most of them were poor. Consequently, I just did not know which approaches to use in Docear.

In the meantime, we have reviewed all these papers more carefully and analyzed exactly how the authors conducted their evaluations. More precisely, we analyzed the papers with regard to the following questions:

  1. To what extent do authors perform user studies, online evaluations, and offline evaluations?
  2. How many participants do user studies have?
  3. Against which baselines are approaches compared?
  4. Do authors provide information about the algorithms’ runtime and computational complexity?
  5. Which metrics are used for algorithm evaluation, and do different metrics provide similar rankings of the algorithms?
  6. Which datasets are used for offline evaluations?
  7. Are results comparable among different evaluations based on different datasets?
  8. How consistent are online and offline evaluations? Do they provide the same, or at least similar, rankings of the evaluated approaches?
  9. Do authors provide sufficient information to re-implement their algorithms or replicate their experiments?

Docear welcomes Georgia M. Kapitsaki, a visiting researcher from Cyprus, who will be supporting us with our recommender system

Yesterday we welcomed Dr. Georgia M. Kapitsaki here in our main office in Magdeburg, Germany. Georgia is from the University of Cyprus and will stay with us for one month. Her main interest lies in our research-paper recommender system, and she will support us in improving it and in performing some research.