Because items in the stream vary widely, the important dimensions may themselves shift over time, and only an extremely small sample of the whole item space is seen at any one moment, item classification must be probabilistic. To accomplish anything of significance, the helper must continually generate and test hypotheses about which dimensions of variation matter most for classifying the current set of items, while simultaneously identifying what ranges of values along each of those dimensions separate acceptable items from unacceptable ones.
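This generate-and-test loop can be sketched minimally. The sketch below is an illustration, not a prescribed design: each hypothesis pairs one dimension with a believed importance and an acceptable value range, both revised as the user's accept/reject decisions come in. All class and function names, and the particular update rules (multiplicative reweighting, range widening/narrowing), are assumptions of this sketch.

```python
class DimensionHypothesis:
    """One candidate dimension of variation (names are illustrative):
    how strongly it seems to drive the user's accept/reject decisions,
    and the range of values currently believed acceptable."""

    def __init__(self, name, lo=0.0, hi=1.0):
        self.name = name
        self.weight = 1.0            # believed importance of this dimension
        self.lo, self.hi = lo, hi    # acceptable value range

    def predicts_accept(self, value):
        return self.lo <= value <= self.hi

    def update(self, value, accepted):
        # Test the hypothesis: reinforce it when its prediction matches
        # the user's actual decision, weaken it otherwise.
        correct = self.predicts_accept(value) == accepted
        self.weight *= 1.1 if correct else 0.9
        if accepted:
            # Widen the acceptable range to cover this item.
            self.lo, self.hi = min(self.lo, value), max(self.hi, value)
        elif self.lo <= value <= self.hi:
            # A rejected item inside the range: pull in the nearer bound.
            mid = (self.lo + self.hi) / 2
            if value < mid:
                self.lo = value + 1e-6
            else:
                self.hi = value - 1e-6


def classify(item, hypotheses):
    """Probability-like score that the user would accept `item`:
    a vote over the hypotheses, weighted by believed importance."""
    total = sum(h.weight for h in hypotheses)
    hits = sum(h.weight for h in hypotheses
               if h.predicts_accept(item[h.name]))
    return hits / total
```

The point of the sketch is the simultaneity the text describes: a single piece of feedback both reweights the dimension (is it important?) and reshapes its range (which values are acceptable?).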
(Note: Obviously helpers cannot solve the classification problem if they can't deduce the true dimensions of variation used to classify items. Classifying paintings based on other paintings you like might be a plausible helper problem, but classifying them based on what childhood events they remind you of isn't.)
To be widely applicable, helpers should be developed in three stages. The first, and longest, stage would be to develop a general-purpose helper shell, which is then adapted in the second stage to particular instances of the general problem. Only in the third stage is it used by a particular user and allowed to adapt to that user. There should therefore be a distinction between the people who develop such shells and the people who modify the front ends of such shells to build application systems on top of that basic level of functionality.
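One way to picture the division of labor among the three stages is as layers of instantiation. This is a hypothetical sketch, not a design from the text: the class name, method, and dimension names are all invented for illustration.

```python
class HelperShell:
    """Stage 1 (general-purpose shell): knows how to track the relative
    importance of whatever dimensions it is given, but ships with no
    dimensions of its own."""

    def __init__(self, dimensions):
        # The dimension set is fixed at instantiation (stage 2);
        # only the weights within it change afterwards (stage 3).
        self.weights = {d: 1.0 for d in dimensions}

    def record_feedback(self, judged_relevant):
        # Reinforce the dimensions the user's decision turned on,
        # and slightly decay the rest.
        for d in self.weights:
            self.weights[d] *= 1.1 if d in judged_relevant else 0.95


# Stage 2: a front-end developer adapts the shell to one instance of
# the general problem by choosing its dimension set.
painting_helper = HelperShell(["hue", "brushwork", "subject"])

# Stage 3: an individual user's behavior reshapes the weights, but
# never adds dimensions outside the preset space.
painting_helper.record_feedback(judged_relevant={"hue"})
```

The shell developer writes the class; the application developer chooses the constructor arguments; the end user supplies only feedback.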
At least initially, the total space of dimensions of variation the system can explore to explain user behavior is preset by the second-stage developer; the system cannot work outside that set of dimensions. It can, however, modify the relations among those dimensions as it gains more information about their current relative importance. Such a system is midway between a rigid program as written today and a formless blob that shifts with every tide.
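The constraint described above can be made concrete with a small sketch, under assumptions of my own (the dimension names, the additive-then-normalize update, and the error behavior are all illustrative): the preset space is a hard boundary, while relative importance within it stays fluid.

```python
# Chosen once by the second-stage developer; the system never adds to it.
PRESET_DIMENSIONS = ("hue", "brushwork", "subject")

weights = {d: 1.0 / len(PRESET_DIMENSIONS) for d in PRESET_DIMENSIONS}

def reweight(observed_importance):
    """Shift relative importance within the preset space only."""
    unknown = set(observed_importance) - set(PRESET_DIMENSIONS)
    if unknown:
        # The system cannot work outside the preset dimension set.
        raise ValueError(f"outside preset dimension space: {unknown}")
    for d, score in observed_importance.items():
        weights[d] += score
    total = sum(weights.values())     # renormalize so the weights
    for d in weights:                 # express relative, not absolute,
        weights[d] /= total           # importance
```

Midway between rigid and formless: the program's shape (the dimension set) is fixed, while its emphasis (the weights) tracks the user.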