We investigate whether it is possible to infer from implicit
feedback what is relevant to a user in an information retrieval task.
Eye movement signals are measured; they are very noisy but potentially
contain rich hints about the user's current state and focus of
attention. In the experimental setting, relevance is controlled by
giving the user a specific search task, and the modeling goal is to
predict from eye movements which of the displayed document titles are
relevant. We extract a
set of standard features from the signal and explore the data with
statistical information visualization methods, including standard
self-organizing maps (SOMs) and SOMs that learn metrics. Relevance of
document titles to the processing task can be predicted with reasonable
accuracy from only a few features, whereas predicting the relevance of
specific words will require new features and methods.
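
To illustrate the kind of standard features involved, the sketch below computes a few common fixation-based statistics per title (fixation count, total and mean fixation duration, regression count) from a fixation log. The input format, the `Fixation` record, and the chosen feature set are assumptions for illustration only, not the feature set actually used here.

```python
# Minimal sketch of per-title feature extraction from a fixation log.
# The record layout and features are illustrative assumptions.
from collections import defaultdict
from typing import NamedTuple

class Fixation(NamedTuple):
    title_id: int        # which on-screen title was fixated
    duration_ms: float   # fixation duration in milliseconds
    word_index: int      # position of the fixated word within the title

def title_features(fixations: list[Fixation]) -> dict[int, dict[str, float]]:
    """Aggregate per-title eye movement features from a fixation sequence."""
    per_title: dict[int, list[Fixation]] = defaultdict(list)
    for fix in fixations:
        per_title[fix.title_id].append(fix)

    features = {}
    for title_id, fixs in per_title.items():
        durations = [f.duration_ms for f in fixs]
        # A regression here means a fixation landing on an earlier word
        # than the previous fixation within the same title.
        regressions = sum(
            1 for a, b in zip(fixs, fixs[1:]) if b.word_index < a.word_index
        )
        features[title_id] = {
            "n_fixations": float(len(fixs)),
            "total_duration_ms": sum(durations),
            "mean_duration_ms": sum(durations) / len(durations),
            "n_regressions": float(regressions),
        }
    return features
```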
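
A minimal sketch of the standard SOM used for data exploration follows, assuming the extracted feature vectors have already been standardized. The grid size, learning rate, and neighborhood schedule are arbitrary illustrative choices, and the metric-learning SOM variant is not shown.

```python
import numpy as np

def train_som(data, grid=(10, 10), n_iter=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Train a standard SOM on row vectors in `data` (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    # Codebook vectors, one per map unit, initialized randomly.
    weights = rng.normal(size=(rows * cols, data.shape[1]))
    # 2-D grid coordinates of each unit, used by the neighborhood function.
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)

    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        # Best-matching unit: the unit whose codebook vector is closest to x.
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
        # Learning rate and neighborhood width decay linearly over time.
        frac = t / n_iter
        lr = lr0 * (1.0 - frac)
        sigma = sigma0 * (1.0 - frac) + 0.5
        # Gaussian neighborhood on the grid, centered at the BMU.
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        h = np.exp(-d2 / (2.0 * sigma ** 2))
        # Move all codebook vectors toward x, weighted by the neighborhood.
        weights += lr * h[:, None] * (x - weights)
    return weights.reshape(rows, cols, -1)

if __name__ == "__main__":
    demo = np.random.default_rng(1).normal(size=(200, 4))  # stand-in features
    som = train_som(demo)
    print(som.shape)  # (10, 10, 4): one codebook vector per map unit
```

After training, each sample can be mapped to its best-matching unit, so that samples with similar feature vectors land on nearby map units; this is what makes the SOM useful for visual exploration of the feature space.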