Tuning WebSphere Commerce Solr Search

You have implemented Solr search with WebSphere Commerce, but how good are the results that users of the site actually get?  When they search on ‘green xmas sprount’ (yes, a typo) will your top-selling Christmas vegetable be returned, or will they see something else, or nothing at all?  Hopefully by the end of this article you will see why they might have got nothing.  The chances are that your search does need tuning, so in this article, following on from the earlier look at the WebSphere Commerce Solr integration, we work through the different options.

Test your on-site WebSphere Commerce Solr Search

The first step, before we can do anything, is to identify how search is performing on the site right now, and there are a few tools that can help with that.

Analytics – An important first step is to make sure that your eCommerce site is tagged in the right way, or that you are using the tools within WebSphere Commerce to show you how search is performing.  With the correct configuration in your WebSphere Commerce setup, you can use Management Centre to see the top search hits and the top search misses that are taking place.  If you are migrating to FEP7 from a previous feature pack with Solr and are already collecting these statistics, then you need to move the search configuration from the component.xml file in your Commerce project to the one in your search project.

The reports are useful: below you can see that the top search misses show us the terms, as well as the suggested keyword that came back and the number of hits that keyword got.  In our case we searched for ‘toy’; Solr returned no matches, but the spellchecker returned ‘top’ as the closest word match.  However, that also got no matches – these are the kinds of things we want to look at.

WebSphere Commerce – Top Search Misses

Based on the results of the analytics the following areas can then be examined.

  • How relevant are the products being returned for your top search terms, and how many sales do they lead to?
  • When you get no matches, what are the top suggestions being offered to the shopper?
  • If you get no matches and no suggestions, why is this?
  • Do you need to look at your synonyms, replacement terms and search rules to help the search, or perhaps at the data itself?

The best way to manage this process is through some hard work analysing the results; it could also become a scripted process so that it is easy to repeat and test.  Take the top 100 search terms and examine the first 10 search results for each, rating each product’s relevance to the search as ‘relevant’, ‘fairly relevant’, ‘irrelevant’ or ‘nothing returned’.  This shows what the end user is seeing on the site, and you should know better than anyone how good those products actually are.
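An audit like this lends itself to scripting.  The sketch below uses made-up terms and ratings – a real audit would pull the terms from your analytics and the ratings from a review of the storefront results – but shows the kind of tally that makes the before/after comparison easy to repeat:

```python
from collections import Counter

# Hypothetical audit data: each top search term mapped to the relevance
# ratings given to its first page of results.
audit = {
    "green xmas sprout": ["nothing returned"],
    "red table": ["relevant", "relevant", "fairly relevant", "irrelevant"],
    "toy": ["nothing returned"],
}

def summarise(audit):
    """Tally the ratings across all audited terms."""
    counts = Counter()
    for ratings in audit.values():
        counts.update(ratings)
    return counts

print(summarise(audit))
# e.g. Counter({'nothing returned': 2, 'relevant': 2, ...})
```

Re-running the same script after each synonym, replacement or configuration change gives a simple measure of whether the distribution is improving.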

This analysis will then become the basis of the changes we are going to be making, as we want to be able to see if we get improvements.

How can we further analyse the Results?

Now we have analysed what the shopper is seeing, we can try to understand why that is happening.  The most important aspect here is the relevancy score that Solr is calculating for us.  You can view this within the store preview environment, provided the code to display it is in the page, so that you can see the relevancy of each result.

Solr generates this score based on the search terms the user entered and how they match against the fields in the Solr index.  This is a specialised area to understand: you can boost certain fields so that a search term occurring in the short description is worth more than one in the long description, and how close the terms sit to each other can also affect the score.  We can set up synonyms and replacement terms, and we can also build search rules, all aimed at producing more relevant results.

The following image shows the Aurora starter store in preview mode at FEP7.  The search was on ‘pink’ and you can see the relevancy score each product has been given.  As part of your initial analysis it is worth capturing these scores, because later we can see how they change as adjustments are made.

Solr Relevancy Score

The other way you can see the scoring used by Solr, and get a lot more detail, is by looking at the query being issued and then running it directly against the Solr server with debugQuery=true.  Once you get the hang of this it is quite simple to do, and the Solr explain output, once you understand it, will help answer questions such as why one product ranks ahead of another.  How you find the query being used by Commerce depends on the environment and feature pack you are using.  The simplest case is the development environment, where typically you run everything in the workspace: no trace statements need to be enabled because the Solr core requests appear in SystemOut.  In a server environment you need to look on the Solr server itself to pick up the requests that are being made.

When you make a search request in the store, what you are looking for is the search query issued against the CatalogEntry Solr core.  If you have auto-suggest turned on you may see several other requests going into Solr, so look for the request containing the full search term rather than the auto-suggest lookups.  Depending on how many facets there are, the query in the log can be much bigger, but there are only certain parts that we are interested in.

Solr Systemout Content

Once we have the query we can begin to pull it apart.  A useful tool to help with this is a URL encoder/decoder, which will decode the Solr query output in the log.  The following two screenshots show the encoded query and the decoded query – everything between the { and } when we look at the SystemOut.  You can see how much easier it is to read when decoded, and we can easily identify all the parameters being passed in this FEP7 query.
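If you prefer to script the decoding rather than use an online tool, Python’s standard library does the same job.  The query string below is a shortened, hypothetical example rather than a full Commerce query:

```python
from urllib.parse import parse_qs, unquote

# A shortened, hypothetical encoded query as it might appear in
# SystemOut.log -- a real FEP7 query carries many more parameters.
encoded = "q=pink&fq=catalog_id%3A10001&fl=catentry_id%2Cscore&wt=json"

# Decode the percent-escapes so the query reads naturally.
print(unquote(encoded))
# q=pink&fq=catalog_id:10001&fl=catentry_id,score&wt=json

# Or split it into individual parameters for inspection.
params = parse_qs(encoded)
print(params["fq"])   # ['catalog_id:10001']
```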

Encoded Solr Query

Decoded Solr Query

A couple of things to look out for in that query if you want to run it directly against your Solr server.  Look out for the &wt parameter – in this case it sets the response format to JSON, but in earlier versions it specifies a Java response type, which will cause issues when you run it in a browser – and also look for the version parameter being specified.  Both can happily be removed before running the statement.
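Stripping those parameters can itself be scripted.  This sketch again uses the standard library, with illustrative parameter values rather than a real Commerce query:

```python
from urllib.parse import parse_qsl, urlencode

# Illustrative parameter values; a real query has many more.
query = "q=pink&version=2&wt=javabin&rows=12"

# Drop the parameters that stop the query replaying in a browser.
kept = [(k, v) for k, v in parse_qsl(query) if k not in ("wt", "version")]
print(urlencode(kept))  # q=pink&rows=12
```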

You can then put the parameters directly after your hostname and Solr core; for example, in dev you might have something like this for the hostname.

http://localhost/solr/MC_10351_CatalogEntry_en_US/select?<then the Solr query goes here>

or you might use a plugin like Postmaster with Chrome.  The screenshot that follows shows the JSON output at FEP7 when I have entered a request.

Postmaster for Chrome with Solr Query

Lower down in the output we can then see the Solr explain that is generated, showing how the returned products have been matched.  This will still look confusing, but there are a few important values worth examining, because they will help when you think about the documents returned.

Term Frequency (tf) – the more often the terms occur in the document, the better the relevancy.

Inverse Document Frequency (idf) – how rare the term is across all the documents.  If we search on ‘Black Piano’ then black may occur across a lot of documents; however, Piano would be much rarer and so score higher.

fieldNorm – the more words in a field, the lower its score.  This is why you sometimes need to look at how your product is described in something like the short description.

boost – the boosts that have been applied to the fields.
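These factors combine multiplicatively in Lucene’s classic similarity.  The sketch below is a deliberately simplified version of that formula (queryNorm and coord are omitted), just to show how the pieces interact:

```python
import math

def term_score(freq, num_docs, doc_freq, field_terms, boost=1.0):
    """Very simplified Lucene 'classic' scoring for one term in one
    field; queryNorm and coord are left out for clarity."""
    tf = math.sqrt(freq)                            # term frequency
    idf = 1 + math.log(num_docs / (doc_freq + 1))   # rarity across the index
    field_norm = 1 / math.sqrt(field_terms)         # shorter fields score higher
    return tf * idf * idf * field_norm * boost

# 'piano' appears in far fewer documents than 'black', so for the
# same field it contributes a much higher score.
black = term_score(freq=1, num_docs=1000, doc_freq=400, field_terms=4)
piano = term_score(freq=1, num_docs=1000, doc_freq=20, field_terms=4)
print(piano > black)  # True
```

The same function also shows the fieldNorm effect: increase field_terms and the score drops, which is exactly why a long short description can hurt a product’s ranking.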

Solr Explain output from a query

That response can look very complicated and difficult to understand, so one place that might help is the explain interface that has been created at explain.solr.pl.  Make sure the output being produced has debugQuery=true set and is in XML format; it also works with v4 of Solr.  Take the output from the query, paste it in and run it.  It will give you a breakdown of what has been found and how the score was worked out.  The example below is from a search for “blue” “jean” on the Aurora starter store, showing the rating of the two top products – at least we can see in a simpler fashion what the debug output means.

Visual Explain on the Solr Debug Output

What can we change?

Search Configuration

  • How is your search in WebSphere Commerce set up – for example, are you matching ALL the terms a user enters, or ANY of them?  There is a big difference in what is returned.  With ANY it will match each search term individually: if I searched for ‘Red Table’ I would get not only the matches containing ‘Red Table’ but also those with just ‘Red’ or just ‘Table’, so I would get a ‘Green Table’ if one existed – is that good for the shopper?  With ALL I will only get matches on documents that contain both ‘Red’ and ‘Table’, but is that going to limit what I see in terms of long-tail searching?
  • The search configuration type is set up in the searchsetup.jspf file (unless you have customised it), and in there you can see a description of the various configuration options.  You will also notice that you can bring back various subsets of the data, such as just SKUs.  This is done by altering the query issued to Solr so it can filter on the type being returned.
  • Phrase Slop (ps) – this indicates on the query how many additional terms can sit between the terms I am searching for.  So if I search on ‘black couch’, the description of the product is ‘long black leather and velvet couch’ and I have ps=3, I would get a match.  The closer the terms are together in a field, the better that field ranks, so ‘leather black couch’ would receive a higher ranking.
  • Minimum Match (mm) – this is potentially the biggest influencer in terms of improving your results.  The mm option allows you to specify how many of the shopper’s search terms have to match.  This lets us get away from the main IBM settings of ANY and ALL as far as term matching goes.  If you want to use mm then you cannot use ALL, because ALL means the query terms are generated with an AND against each one.  By using ANY you allow the Solr query parser to look at how many terms have matched before a result is returned – for example mm=4<-1 4<3, which means a query with 4 or fewer terms must match all of them, and a query with more than 4 terms must match at least 3.  It opens up long-tail search where a shopper knows what they want but makes a slight spelling mistake, which with ALL would have produced no matches.  There is a new minimum match article available in the search cookbook on using minimum match with FEP6, and we have been asking a few questions on there.  It is a really good piece of functionality they are bringing into the mix.
  • Boosting can be defined on both the Query Fields (qf) and the Phrase Fields (pf) to increase the relevancy score: if you get matches on those fields, the relevancy score calculation is increased.  For example you might boost the shortDescription field ahead of the longDescription field.
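How an mm expression like the one above is evaluated can be sketched in a few lines.  This is a simplified reading of the common ‘threshold&lt;value’ rules, not Solr’s actual parser:

```python
def required_matches(mm_spec, clause_count):
    """Work out how many optional clauses must match under a dismax
    mm expression such as '4<-1 4<3'. Simplified reading of the
    rules, not Solr's actual parser."""
    required = clause_count  # default: every clause must match
    for condition in mm_spec.split():
        threshold, value = condition.split("<")
        if clause_count > int(threshold):
            v = int(value)
            # negative values mean 'all but that many'
            required = clause_count + v if v < 0 else v
    return max(1, min(required, clause_count))

# With '4<-1 4<3': 4 terms or fewer -> all must match;
# more than 4 terms -> at least 3 must match.
print(required_matches("4<-1 4<3", 4))  # 4
print(required_matches("4<-1 4<3", 6))  # 3
```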

Management Centre

This was covered in the earlier Solr article, but a run through some of the aspects again is always useful.  It is very easy to get these things wrong and end up attempting to do everything as, say, synonyms.  What we are hoping to see from IBM soon is a way that you can manage your synonyms and replacement terms in Management Centre but use them in a way that does not impact your Solr query.  At the moment they are expanded before query time, so Solr just sees them as ordinary query terms – refer to the IBM cookbook post on minimum match mentioned above and have a read of the comments.


Synonyms

Use synonyms to set up the relationship between terms that have a similar meaning, especially if you are selling internationally.  A term in one country might have a different equivalent somewhere else: a ‘chook’ in Australia refers to what a shopper in the UK would call a ‘chicken’, and we want to be able to search on both.

Do not use synonyms for spelling mistakes, such as the following (taken from a customer):

Magazine, Brochure, Leaflet, Panflet, Panflit, Catalogue

Instead, if they are common misspellings you are seeing on the site, they should be set up as replacement terms, so Panflit ‘instead search for’ Pamphlet.

Then the synonym rule would be altered to:

Magazine, Brochure, Leaflet, Pamphlet, Catalogue

That way, when the customer searches on Panflit, the replacement runs first and then the search is expanded with the synonyms.  Even if Pamphlet does not exist we would get matches on the other terms.
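The order of operations described above – replacement first, then synonym expansion – can be sketched like this (the data structures are purely illustrative, not how Commerce stores the rules):

```python
# Illustrative rule data -- not how Commerce stores these internally.
REPLACEMENTS = {"panflit": "pamphlet", "panflet": "pamphlet"}
SYNONYMS = [{"magazine", "brochure", "leaflet", "pamphlet", "catalogue"}]

def expand_query(term):
    """Apply a replacement term first, then expand with synonyms."""
    term = REPLACEMENTS.get(term.lower(), term.lower())
    for group in SYNONYMS:
        if term in group:
            return sorted(group)
    return [term]

print(expand_query("Panflit"))
# ['brochure', 'catalogue', 'leaflet', 'magazine', 'pamphlet']
```

Because the replacement happens before the synonym lookup, the misspelling still reaches the full synonym group, which is exactly the behaviour described above.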

Replacement Terms

Replacement terms should correct spelling mistakes that are common among your users, or direct users to appropriate search terms when they use a term that you know has a replacement.

For example, you may have changed products and shoppers are used to looking for your bestseller under a specific product code.  You can use the ‘instead search for’ functionality to take the term or product code they are looking for and replace it with the new code or product name.

Search Rules

The search rules in WebSphere Commerce are very powerful: you can generate a lot of different options that impact how results are returned, boost or lower products within the search, or alter what the user is searching for.

Product Data

Getting the product data and categorisation right is important when you look at how products are being scored by Solr.  For example, if you use categoryName as a boosted field and users run a search that exactly matches a category name, the products in that category will get a good increase in their relevancy score.  So if the user searches on ‘3d tv’ and you have a category called ‘3d tv glasses’, all the products in that category would match and appear in the search results.  They would get a boost from the category, and you would also expect them to have ‘3d tv’ in the title and in the long description.  Suddenly TVs are not at the top of the list – the 3d tv glasses are.

The same can happen with the length of descriptions, through field normalisation.  Search on ‘Red Table’ where one product’s short description says ‘Beautiful looking red table that is smooth to the finish and has a gloss layer’, and it will have a lower relevancy than one that simply says ‘Large Red Table’, because it has more words in the short description.
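A quick sketch makes the length effect concrete.  This reuses the Lucene-style length norm (fewer terms in a field gives a higher norm, and so a higher score contribution):

```python
import math

def field_norm(text):
    """Lucene-style length norm sketch: fewer terms -> higher norm."""
    return 1 / math.sqrt(len(text.split()))

long_desc = ("Beautiful looking red table that is smooth to the finish "
             "and has a gloss layer")
short_desc = "Large Red Table"

# The concise description gets the higher norm, all else being equal.
print(field_norm(short_desc) > field_norm(long_desc))  # True
```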

Also, when you see random products appearing in your search results, it may be down to a field such as the long description containing a term you did not expect, so it is always worth having a look at what is in the data.

Tuning your Search environment is Complicated

If you have got to the bottom of this article then you will have realised that tuning your search, analysing the results and giving the best answers to your customers takes time.  It is an ongoing process of analysing those top terms and creating synonyms, replacements and search rules.  Solr is not the most obvious environment to work with, but the IBM integration is improving as they plug in aspects such as minimum match.  It does, however, take specific skills on the Solr side to get the most from it.

And finally, a few reasons why ‘green xmas sprount’ might have got no answers – all of which we can fix:

  • They searched on xmas and the product has Christmas in its title, but we forgot the synonym linking christmas and xmas.
  • They spelt sprout wrong, of course, searching for ‘sprount’, and we did not have a replacement term.  Because we have seen people making that spelling mistake, we add one: sprount ‘instead search for’ sprout.
  • We had a match of ALL in our Commerce setting instead of using minimum match, so our Christmas Sprout would not be returned because the query also requires green – even with the synonym and replacement term set.  Instead we will set ANY and use the Solr minimum match option, so that matching just two of the terms is enough.
