Research discovery

Has someone answered that question I have not worked out how to ask yet?



Recommender systems for academics are hard, and I suspect harder than usual, because by definition the content should be new and therefore hard to relate to existing work. Indeed, finding connections is a publishable result in itself. This is a hard-nosed, applied version of the idea of knowledge topology.

There is a complicated interaction with systems of peer review. Could this integrate with peer review in some useful way? Can we have services like Canopy, Pinterest or Keen for scientific knowledge? How can we trade off recall and precision for the needs of academics?

Moreover, the information environment is challenging. I am fond of Elizabeth Aceso’s summary:

assessing a work often requires the same skills/knowledge you were hoping to get from said work. You can’t identify a good book in a field until you’ve read several. But improving your starting place does save time, so I should talk about how to choose a starting place.

One difficulty is that this process is heavily adversarial. A lot of people want you to believe a particular thing, and a larger set don’t care what you believe as long as you find your truth via their amazon affiliate link […] The latter group fills me with anger and sadness; at least the people trying to convert you believe in something (maybe even the thing they’re trying to convince you of). The link farmers are just polluting the commons.

My paraphrase: knowledge discovery would likely be intrinsically difficult in a hypothetical beneficent world with great sharing mechanisms, but the economics of attention, advertising and weaponised media mean that we should be suspicious of the mechanisms that we can currently access.

Theory

In particular, Aceso makes me worry that my scattershot approach to link sharing may be detracting from the value of this blog to the wider world.

José Luis Ricón, a.k.a. Nintil, wonders about A better Google Scholar, based on his experience of trying to build a better meta scholar for syntopic reading. Robin Hanson, of course, has much to say on potentially better mechanism design for scientific discovery. I have qualms about his implied cash-rewards system crowding out reputational awards; I think there is something to be said for that particular economy running on non-cash currency; but yes, why not try it out?

Projects

connected papers

Connected Papers in action

Connected Papers | Find and explore academic papers

  • To create each graph, we analyze an order of ~50,000 papers and select the few dozen with the strongest connections to the origin paper.
  • In the graph, papers are arranged according to their similarity. That means that even papers that do not directly cite each other can be strongly connected and very closely positioned. Connected Papers is not a citation tree.
  • Our similarity metric is based on the concepts of Co-citation and Bibliographic Coupling. According to this measure, two papers that have highly overlapping citations and references are presumed to have a higher chance of treating a related subject matter.
  • Our algorithm then builds a Force Directed Graph to distribute the papers in a way that visually clusters similar papers together and pushes less similar papers away from each other. Upon node selection we highlight the shortest path from each node to the origin paper in similarity space.
  • Our database is connected to the Semantic Scholar Paper Corpus (licensed under ODC-BY). Their team has done an amazing job of compiling hundreds of millions of published papers across many scientific fields.
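For the curious, here is a toy sketch of the two similarity notions named above, plus the force-directed layout. This is not Connected Papers’ actual code: the paper data, the Jaccard overlap, and the equal weighting of the two signals are all my own illustrative assumptions, and I lean on networkx for the layout.

```python
# Toy sketch of similarity-graph construction as the bullets describe it.
# Illustrative assumptions throughout, not Connected Papers' pipeline.
import itertools
import networkx as nx

# paper id -> (set of reference ids, set of ids of papers citing it)
papers = {
    "origin": ({"r1", "r2", "r3"}, {"c1", "c2"}),
    "close":  ({"r2", "r3", "r4"}, {"c1", "c3"}),
    "far":    ({"r7", "r8"},       {"c9"}),
}

def jaccard(a: set, b: set) -> float:
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def similarity(p: str, q: str) -> float:
    refs_p, citers_p = papers[p]
    refs_q, citers_q = papers[q]
    coupling = jaccard(refs_p, refs_q)        # bibliographic coupling
    cocitation = jaccard(citers_p, citers_q)  # co-citation
    return 0.5 * coupling + 0.5 * cocitation  # arbitrary equal weights

G = nx.Graph()
G.add_nodes_from(papers)
for p, q in itertools.combinations(papers, 2):
    w = similarity(p, q)
    if w > 0:
        G.add_edge(p, q, weight=w)

# spring_layout is a force-directed layout: strongly connected papers are
# pulled together, weakly connected ones drift apart.
pos = nx.spring_layout(G, weight="weight", seed=0)
print(pos)
```

In this toy example “origin” and “close” share most of their references and one citing paper, so the layout pulls them together; “far” has no overlap and floats free, never having directly cited anything in sight.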

papr

papr — “tinder for preprints”

We all know the peer review system is hopelessly overmatched by the deluge of papers coming out. papr reviews use the wisdom of the crowd to quickly filter papers that are considered interesting and accurate. Add your quick judgements about papers to those of thousands of scientists around the world.

You can use the app to keep track of interesting papers and share them with your friends. Spend 30 min quickly sorting through the latest literature and papr will keep track of the papers you want to come back to.

With papr you can filter to only see papers that match areas that interest you, keywords that match your interest, or papers that others have rated as interesting or high quality. Make sure your literature review is productive and efficient.
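papr does not say how it aggregates those quick judgements. Purely as an illustration of how such crowd filtering can work, here is the Wilson score lower bound, a common baseline for ranking items by up/down votes that discounts small sample sizes; nothing in this sketch is taken from papr itself.

```python
import math

def wilson_lower_bound(ups: int, total: int, z: float = 1.96) -> float:
    """Lower bound of the 95% confidence interval on the true upvote rate."""
    if total == 0:
        return 0.0
    p = ups / total
    denom = 1 + z * z / total
    centre = p + z * z / (2 * total)
    margin = z * math.sqrt((p * (1 - p) + z * z / (4 * total)) / total)
    return (centre - margin) / denom

# A paper with 8/10 "interesting" swipes outranks one with a single swipe.
print(wilson_lower_bound(8, 10))  # ≈ 0.49
print(wilson_lower_bound(1, 1))   # ≈ 0.21
```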

I appreciate that the quality problem is important, but I am unconvinced by their topic-keywords idea. Quality is only half the problem for me, and the topic-filtering problem looks harder.

Daily papers

Daily Papers seems to be similar to arxiv-sanity, but it is more actively maintained and less coherently explained. Its paper rankings seem to incorporate… twitter hype?

arxiv sanity

Arxiv-sanity

Aims (aimed?) to prioritise the arxiv paper-publishing firehose so that you can discover papers of relevance to your own interests, at least if those interests are in machine learning.

Arxiv Sanity Preserver

Built by @karpathy to accelerate research. Serving last 26179 papers from cs.[CV|CL|LG|AI|NE]/stat.ML

Includes twitter-hype sorting, TF-IDF clustering, and other such basic but important baby steps towards web2.0 style information consumption.
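The TF-IDF part is easy to replicate. Here is a minimal sketch with scikit-learn, using made-up abstracts, of how an arxiv-sanity-style “similar papers” lookup can work; this is the general technique, not Karpathy’s exact pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Made-up abstracts standing in for the arxiv firehose.
abstracts = [
    "We train a convolutional network for image classification.",
    "A recurrent model for language modelling with attention.",
    "Convolutional networks applied to object detection benchmarks.",
]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(abstracts)   # sparse (n_papers, n_terms) matrix

# Rank every paper by cosine similarity to paper 0; the third abstract,
# which shares the "convolutional" vocabulary, should score highest
# after paper 0 itself.
print(cosine_similarity(X[0], X).ravel())
```

As I understand it, arxiv-sanity then goes a step further, training a per-user linear classifier over these features to produce a personal ranking; that is where the SVMs mentioned below come in.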

The servers have been overloaded of late, possibly because of the unfavourable scaling of all the SVMs that it uses, or the continued growth of Arxiv, or epidemic addiction to intermittent variable rewards amongst machine learning researchers. That last reason is also why I have opted out of checking it for papers.

I could run my own installation — it is open source — but the download and processing requirements are prohibitive. Arxiv is big and fast.

trendingarxiv

trendingarxiv (source):

Keep track of arXiv papers and the tweet mini-commentaries that your friends are discussing on Twitter.

Because somehow some researchers have time for twitter, and the opinions of such multitasking prodigies are probably worthy of note. That is sadly beyond my own modest capacities. Anyway, great hack, good luck.

The syllabus

The Syllabus:

I wonder if this techno-editorial system works.

Each week we publish curated syllabi featuring pieces that cut across text, video and audio. The curation runs either along thematic lines — e.g. technology, political economy, arts & culture — or by media type such as Best of Academic Papers, Podcasts, Videos. You can also build your own personalised syllabus centered around your interests.

Our approach rests on a mix of algorithmic and human curation: each week, our algorithms detect tens of thousands of potential candidates — and not just in English. Our human editors, led by Evgeny Morozov, then select a few hundred worthy items.

It is run by a slightly crazy-sounding guy, Evgeny Morozov.

The way in which Morozov collects and analyses information is secret, he says. He doesn’t want to expand on how he compares his taxonomies with the actual content of videos, podcasts, books and articles. "That’s where our cutting-edge innovation lies."

… The categorisation and scoring of all information is an initial screening. Everything is then assessed by Morozov and his assistants, several times, ultimately resulting in a selection of the very best and most relevant information that appears during a week, sorted by theme.

I do not believe this solves a problem I personally face, but perhaps it solves a useful problem generally? I am somewhat bemused by the type of scientific knowledge diffusion process that this implies. Is converging on a canonical list of this week’s hottest think pieces what we can and/or should be doing?

Reading groups and co-learning

The Journal Club is a web-based tool designed to help organize journal clubs, aka reading groups. A journal club is a group of people coming together at regular intervals, e.g., weekly, to critically discuss research papers. The Journal Club makes it easy to keep track of information about the club’s meeting time and place as well as the list of papers coming up for discussion, papers that have been discussed in previous meetings, and papers proposed by club members for future discussion.

Paper analysis/annotation

Academic reading workflow problem?

Finding copies

Unpaywall and oaDOI seem to be indices of non-paywalled preprints of paywalled articles: oaDOI is a website, Unpaywall a browser extension. CiteSeer does something similar.
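Unpaywall also exposes a REST API for this lookup. A minimal sketch, assuming the documented v2 endpoint and response fields; the DOI comes from their docs and the email is a placeholder you must replace:

```python
import requests

def find_oa_copy(doi: str, email: str) -> str | None:
    """Return a URL for an open-access copy of `doi`, if Unpaywall knows one."""
    resp = requests.get(
        f"https://api.unpaywall.org/v2/{doi}",
        params={"email": email},  # Unpaywall asks for an email with each request
        timeout=10,
    )
    resp.raise_for_status()
    loc = resp.json().get("best_oa_location")  # None when no OA copy is known
    return (loc.get("url_for_pdf") or loc.get("url")) if loc else None

print(find_oa_copy("10.1038/nature12373", "you@example.com"))
```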

