Flexible Collaborative Filtering In JAVA With Mahout Taste

I recently had to quickly build a prototype recommendation engine for a promising start-up company. I wanted to first test state-of-the-art collaborative filtering algorithms before building a customized solution (potentially on top of those algorithms). Most importantly, I wanted to be able to quickly compare all the different algorithm configurations I would come up with. Mahout Taste (previously a SourceForge project, recently promoted to the Apache Mahout project) simply answered all those needs in one place.

I describe below how, in a few easy steps, I was set up to express my creativity without having to reinvent the wheel. This tutorial is based on the 0.2 release of Mahout.

Step 1: Set up your environment with Mahout Taste

I usually use Eclipse with Maven to simply add a dependency, but the Mahout POM had some repository issues at the time I tried, so I worked around it by adding the required libraries to Eclipse manually (all the libraries found in the directory lucene/mahout/trunk/core/target/dependency of their latest release).

Step 2: Familiarize yourself by building a simple recommendation engine based on the MovieLens data

To see a recommender engine in action, you can for instance download one of the MovieLens ratings data sets (I chose the one with one million ratings). Unzip the archive somewhere. The file that will interest you is ratings.dat. Its format is as follows:

userId::movieId::rating::timestamp

The basic Mahout Taste FileDataModel only accepts the following simple format:

userId,movieId,rating

There are many ways to transform your original file into that format; I used the following simple Perl command:

perl -F"::" -alne 'print "@F[0],@F[1],@F[2]"' ratings.dat > ratingsForMahout.dat
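
If you prefer to stay in Java, the same conversion can be done with a small throwaway program. Here is a minimal sketch (the class name and file paths are just examples, adapt them to your setup):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.PrintWriter;

public class RatingsConverter {
	public static void main(String[] args) throws Exception {
		BufferedReader in = new BufferedReader(new FileReader("ratings.dat"));
		PrintWriter out = new PrintWriter(new FileWriter("ratingsForMahout.dat"));
		String line;
		while ((line = in.readLine()) != null) {
			// userId::movieId::rating::timestamp -> userId,movieId,rating
			String[] fields = line.split("::");
			out.println(fields[0] + "," + fields[1] + "," + fields[2]);
		}
		out.close();
		in.close();
	}
}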

You may be thinking: “what about the timestamp information???” Yes, you're right, it is pretty crucial information, given that temporal dynamics are what allowed the winning team of the Netflix Prize to make the difference (BTW, if you’re interested in the subject, you must see this video of Yehuda Koren’s lecture at KDD).

So don’t worry: you can later write your own DataModel class that parses any information you want; you’ll just have to implement the DataModel interface (you can also extend the FileDataModel class).
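
To illustrate the idea, here is a minimal sketch of such a customized model: it delegates the ratings to FileDataModel and keeps the timestamps from the original file in its own map (the class name and the getTimestamp method are placeholders of mine, not part of Mahout):

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;

public class TimestampAwareDataModel extends FileDataModel {

	// userId -> (movieId -> timestamp), parsed from the original ratings.dat
	private final Map<Long, Map<Long, Long>> timestamps = new HashMap<Long, Map<Long, Long>>();

	public TimestampAwareDataModel(File mahoutRatings, File originalRatings) throws IOException {
		super(mahoutRatings);
		BufferedReader reader = new BufferedReader(new FileReader(originalRatings));
		String line;
		while ((line = reader.readLine()) != null) {
			// original MovieLens format: userId::movieId::rating::timestamp
			String[] fields = line.split("::");
			long userId = Long.parseLong(fields[0]);
			Map<Long, Long> userTimestamps = timestamps.get(userId);
			if (userTimestamps == null) {
				userTimestamps = new HashMap<Long, Long>();
				timestamps.put(userId, userTimestamps);
			}
			userTimestamps.put(Long.parseLong(fields[1]), Long.parseLong(fields[3]));
		}
		reader.close();
	}

	// returns null if no timestamp is known for this (user, item) pair
	public Long getTimestamp(long userID, long itemID) {
		Map<Long, Long> userTimestamps = timestamps.get(userID);
		return userTimestamps == null ? null : userTimestamps.get(itemID);
	}
}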

To obtain your first recommendations in a few lines of code, you can use:

import java.io.File;
import java.io.FileNotFoundException;
import java.util.List;

import org.apache.mahout.cf.taste.common.TasteException;
import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
import org.apache.mahout.cf.taste.impl.recommender.CachingRecommender;
import org.apache.mahout.cf.taste.impl.recommender.slopeone.SlopeOneRecommender;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.recommender.RecommendedItem;

public class MahoutPlaying {
	public static void main(String[] args) throws FileNotFoundException, TasteException {
		DataModel model;
		model = new FileDataModel(new File("/home/padjiman/data/movieLens/mahout/ratingsForMahout.dat"));
		CachingRecommender cachingRecommender = new CachingRecommender(new SlopeOneRecommender(model));

		List<RecommendedItem> recommendations = cachingRecommender.recommend(1, 10);
		for (RecommendedItem recommendedItem : recommendations) {
			System.out.println(recommendedItem);
		}

	}
}

which creates a slope-one recommendation engine in a few lines of code and prints the first 10 recommendations for user 1. You’ll only see movieIds there, so you’ll have to check the movies.dat file to see the actual movie titles (you can also write a simple method or script that shows you the movie titles directly if you want to play with several users or to create your own user).
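
If you go that route, a small helper along these lines should be enough (a sketch of mine; movies.dat uses the movieId::title::genres format):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class MovieTitles {

	// loads movieId -> title from the MovieLens movies.dat file (movieId::title::genres)
	public static Map<Long, String> load(String moviesDatPath) throws IOException {
		Map<Long, String> titles = new HashMap<Long, String>();
		BufferedReader reader = new BufferedReader(new FileReader(moviesDatPath));
		String line;
		while ((line = reader.readLine()) != null) {
			String[] fields = line.split("::");
			titles.put(Long.parseLong(fields[0]), fields[1]);
		}
		reader.close();
		return titles;
	}
}

You can then look up each recommended item’s id in that map before printing it, instead of printing the raw RecommendedItem.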

You can replace the slope-one recommender with any other recommendation engine provided in the package. For instance, let’s say you want to use a classic user-based recommender algorithm with the Pearson correlation similarity and a nearest-3-users neighborhood: simply replace the line that builds the recommender in the above code with the code below:

UserSimilarity userSimilarity = new PearsonCorrelationSimilarity(model);
UserNeighborhood neighborhood = new NearestNUserNeighborhood(3, userSimilarity, model);
Recommender recommender = new GenericUserBasedRecommender(model, neighborhood, userSimilarity);
Recommender cachingRecommender = new CachingRecommender(recommender);
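
For completeness, the classes used in this fragment come from the following packages (double-check the import list against the jars of your release, but this is where they live as far as I recall):

import org.apache.mahout.cf.taste.impl.neighborhood.NearestNUserNeighborhood;
import org.apache.mahout.cf.taste.impl.recommender.GenericUserBasedRecommender;
import org.apache.mahout.cf.taste.impl.similarity.PearsonCorrelationSimilarity;
import org.apache.mahout.cf.taste.neighborhood.UserNeighborhood;
import org.apache.mahout.cf.taste.recommender.Recommender;
import org.apache.mahout.cf.taste.similarity.UserSimilarity;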

A few issues you might have during step 2:
– OutOfMemoryError: the slope-one recommender is pretty memory-hungry, and on the one-million-rating MovieLens dataset you may have to set the -Xmx VM option to 1024m (in Eclipse, just add -Xmx1024m to the VM arguments in the run configuration options).
– Some errors during the FileDataModel initialization: make sure that the directory containing the file to parse does not contain other files starting with the same name; for some reason this disturbs the DataModel initialization in some cases.

Step 3: Test the relevance of the algorithms

In my opinion, this is the most valuable part of the whole process. To know immediately whether your intuition in choosing a particular algorithm was a good one, or to see the positive or negative impact of your own customized algorithm, you need a way to evaluate and compare algorithms on the data.

You can easily do that with Mahout’s RecommenderEvaluator interface. Two different implementations of that interface are provided: AverageAbsoluteDifferenceRecommenderEvaluator and RMSRecommenderEvaluator. The first one computes the average absolute difference between predicted and actual ratings for users, and the second one computes the classic RMSE (a.k.a. RMSD).

Since I’m playing with a movie dataset and the Netflix evaluation process was based on RMSE, here is an example of use of the RMSRecommenderEvaluator:

import java.io.File;
import java.io.IOException;

import org.apache.commons.cli2.OptionException;
import org.apache.mahout.cf.taste.common.TasteException;
import org.apache.mahout.cf.taste.eval.RecommenderBuilder;
import org.apache.mahout.cf.taste.eval.RecommenderEvaluator;
import org.apache.mahout.cf.taste.impl.eval.RMSRecommenderEvaluator;
import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
import org.apache.mahout.cf.taste.impl.recommender.CachingRecommender;
import org.apache.mahout.cf.taste.impl.recommender.slopeone.SlopeOneRecommender;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.recommender.Recommender;

public final class EvaluationExample{
	public static void main(String... args) throws IOException, TasteException, OptionException {

		RecommenderBuilder builder = new RecommenderBuilder() {
			public Recommender buildRecommender(DataModel model) throws TasteException{
				//build here whatever existing or customized recommendation algorithm
				return new CachingRecommender(new SlopeOneRecommender(model));
			}
		};

		RecommenderEvaluator evaluator = new RMSRecommenderEvaluator();
		DataModel model = new FileDataModel(new File("/home/padjiman/data/movieLens/mahout/ratingsForMahout.dat"));
		double score = evaluator.evaluate(builder,
				null,
				model,
				0.9,
				1);

		System.out.println(score);
	}
}

Note that the evaluator needs a RecommenderBuilder, provided here as an anonymous implementation of the interface.
For a detailed description of the evaluator’s parameters, look at the Javadoc in the source code (as of today, the one you’ll find on the web is outdated since it covers Mahout release 0.1). But basically:
– 0.9 here represents the percentage of each user’s preferences used to produce recommendations (the training part); the rest are compared to the estimated preference values in order to evaluate them.
– 1 represents the percentage of users used in the evaluation (so here, all users).

Result?

RMSE = 0.8988.
To give you a point of comparison, the Netflix baseline predictor (called Cinematch) had an RMSE of 0.9514, and the Grand Prize was awarded for a 10% improvement over it (note that this tutorial is not based on Netflix data but on MovieLens data).

The number doesn’t really matter here: the important thing is that it provides you with an easy way to compare different algorithms, or the same algorithm with different settings (thresholds or other parameters).
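
To make that comparison concrete, here is the kind of quick-and-dirty loop I would use: it evaluates a couple of builders on the same data and prints their RMSE side by side (a sketch only; the chosen configurations and the file path are just examples):

import java.io.File;
import java.util.LinkedHashMap;
import java.util.Map;

import org.apache.mahout.cf.taste.common.TasteException;
import org.apache.mahout.cf.taste.eval.RecommenderBuilder;
import org.apache.mahout.cf.taste.eval.RecommenderEvaluator;
import org.apache.mahout.cf.taste.impl.eval.RMSRecommenderEvaluator;
import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
import org.apache.mahout.cf.taste.impl.neighborhood.NearestNUserNeighborhood;
import org.apache.mahout.cf.taste.impl.recommender.GenericUserBasedRecommender;
import org.apache.mahout.cf.taste.impl.recommender.slopeone.SlopeOneRecommender;
import org.apache.mahout.cf.taste.impl.similarity.PearsonCorrelationSimilarity;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.neighborhood.UserNeighborhood;
import org.apache.mahout.cf.taste.recommender.Recommender;
import org.apache.mahout.cf.taste.similarity.UserSimilarity;

public class CompareRecommenders {
	public static void main(String[] args) throws Exception {
		DataModel model = new FileDataModel(new File("/home/padjiman/data/movieLens/mahout/ratingsForMahout.dat"));

		Map<String, RecommenderBuilder> builders = new LinkedHashMap<String, RecommenderBuilder>();
		builders.put("slope-one", new RecommenderBuilder() {
			public Recommender buildRecommender(DataModel model) throws TasteException {
				return new SlopeOneRecommender(model);
			}
		});
		builders.put("user-based, Pearson, k=3", new RecommenderBuilder() {
			public Recommender buildRecommender(DataModel model) throws TasteException {
				UserSimilarity similarity = new PearsonCorrelationSimilarity(model);
				UserNeighborhood neighborhood = new NearestNUserNeighborhood(3, similarity, model);
				return new GenericUserBasedRecommender(model, neighborhood, similarity);
			}
		});

		RecommenderEvaluator evaluator = new RMSRecommenderEvaluator();
		for (Map.Entry<String, RecommenderBuilder> entry : builders.entrySet()) {
			// same split parameters as above: 90% of preferences for training, all users
			double score = evaluator.evaluate(entry.getValue(), null, model, 0.9, 1.0);
			System.out.println(entry.getKey() + " -> RMSE = " + score);
		}
	}
}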

Step 4: Now start the real work…

You’ve guessed it: you won’t win any prize using the recommenders provided by Mahout as-is :).
Depending on your data and on your needs, you may have to simply customize an existing algorithm, plug in a specific similarity measure, or create your very own recommender from scratch. All of those are pretty easy to do in Mahout.

Let’s say, for instance, that you want to exploit the categories of the movies to build a specific user similarity that includes this information.
The first thing you will have to do is capture the new information about categories.

To do so you can, for instance, extend the FileDataModel class with another class that also parses the movies.dat file and builds relevant data structures to store the data about categories. I found it more convenient to build my own Statistics object. Then you will have to build a new user similarity. It is as simple as this:

import java.util.Collection;

import org.apache.mahout.cf.taste.common.Refreshable;
import org.apache.mahout.cf.taste.common.TasteException;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.similarity.PreferenceInferrer;
import org.apache.mahout.cf.taste.similarity.UserSimilarity;

import com.padjiman.algo.Statistics;

public class ProfileSimilarity implements UserSimilarity {

	private final Statistics stats;
	private final DataModel dataModel;

	public ProfileSimilarity(Statistics stats, DataModel dataModel) {
		super();
		if (stats == null) {
			throw new IllegalArgumentException("stats is null");
		}
		if (dataModel == null) {
			throw new IllegalArgumentException("dataModel is null");
		}
		this.dataModel = dataModel;
		this.stats = stats;
	}

	@Override
	public double userSimilarity(long userID1, long userID2)
			throws TasteException {
		// build your similarity function here,
		// exploiting the stats and dataModel objects as you wish
		return 0.0; // placeholder
	}

	@Override
	public void refresh(Collection<Refreshable> alreadyRefreshed) {
		// TODO Auto-generated method stub
	}

	@Override
	public void setPreferenceInferrer(PreferenceInferrer inferrer) {
		// TODO Auto-generated method stub
	}
}

Complete the userSimilarity method with your own secret sauce. Et voilà: you can now plug your new user similarity measure into a GenericUserBasedRecommender, for instance instead of the Pearson correlation similarity measure (shown in step 2), and simply compare which one is best using your evaluator.
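
For example, assuming you have built your Statistics object (stats below) and your DataModel (model), the wiring looks exactly like the Pearson version from step 2:

// stats is the customized Statistics object described above; model is your DataModel
UserSimilarity profileSimilarity = new ProfileSimilarity(stats, model);
UserNeighborhood neighborhood = new NearestNUserNeighborhood(3, profileSimilarity, model);
Recommender recommender = new GenericUserBasedRecommender(model, neighborhood, profileSimilarity);
Recommender cachingRecommender = new CachingRecommender(recommender);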

You’re not satisfied with the GenericUserBasedRecommender or any other recommender provided by Mahout? No problem, try implementing your own. You’ll just have to start with a class declaration of this kind:

public class MostPopularItemUserBasedCombinedRecommender extends AbstractRecommender implements Recommender {
       //override the necessary methods
}

Here again, you can use as members of the class any customized objects containing any statistics that you judge relevant to build a better recommender. Then, again, plug your new recommender into the evaluator and compare, experiment, improve.
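
For example, plugging such a recommender into the evaluator from step 3 is just a matter of changing the builder (the constructor arguments of this hypothetical class are of course up to you):

RecommenderBuilder builder = new RecommenderBuilder() {
	public Recommender buildRecommender(DataModel model) throws TasteException {
		// stats must be a final variable in the enclosing scope;
		// it is whatever customized statistics object your recommender needs
		return new MostPopularItemUserBasedCombinedRecommender(model, stats);
	}
};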

Conclusion

Mahout Taste is a very flexible platform for experimenting with collaborative filtering algorithms. It certainly won’t provide an immediate solution to your recommendation problem, but you’ll easily be able to either tune the existing algorithms or plug your own creative ones into the Mahout Taste set of interfaces.

By doing so, you’ll immediately get the benefit of a platform that lets you compare, tune and improve the results of your different algorithm configurations iteratively. Last but not least, Mahout Taste provides an external server which exposes the recommendation logic to your application via web services and HTTP.

Other resources:

  • After reading this quick-start guide, I recommend you have a look at the official Mahout Taste documentation. As of today it has not been updated for release 0.2, so you might find some old method signatures there, but you’ll find useful and complementary information about the big picture of the Mahout Taste design.
  • A nice article on Mahout in general (not only the Taste part). I felt that Taste was not detailed enough there, in particular on the testing part, which is why I wrote this tutorial.
