Category Archives: WebSphere

Create a Clean WebSphere Commerce Development Environment

This may not be something that bothers you, but it bothers me: why can IBM not create a new development environment that does not contain every starter store?  Do I really need Consumer Direct these days?  The default should be to install just the Aurora versions, and this also applies to FEP8.  It is mentioned at the bottom of the enabling-starter-stores documentation, but IBM don't even provide a link to their own content, so here are some steps to tidy that development environment.

The image that follows shows the file setup after installing a new FEP8 Enterprise Commerce workspace using the IBM Installation Manager.  What a load of junk is in there, just to confuse a new developer: for example, three versions of Dojo supporting the various stores.

Stores Directory in new FEP8 Commerce Developer Environment

What we need to do is tidy this up so that all we have are the relevant store models we want to work with: those that contain all the new functionality in FEP8.

Some of the steps are listed in the IBM Knowledge Center for Commerce V7.  The first step is quite strange, because when you read its output (below) you would think some of the following steps had already been completed (it says the database and workspace have been restored).  But you must run each step in turn to get the environment to a cleaner state.

RestoreDefault Script Run

When you get to the point of enabling the features from the feature pack, enable the store features first, because that will also enable the foundation and Management Centre options, removing the need to do each one individually; these three are the main components I would be enabling.  It is worth noting that in the development environment you will get some very long pauses where nothing seems to happen: the environment we were using spent 30 minutes apparently doing nothing.  We could see it had started the application server in headless mode, then it sat there with little processor usage while the JVM crept up in memory, until it finally started deploying assets into the RAD workspace.

Enable Store Features

Now when we restart the workspace in the RAD environment we see far fewer directories.  We will also be getting rid of some of the Dojo directories, as we only need dojo18.

File system after cleaning up the stores

When the workspace restarts you first need to rebuild the OpenLaszlo project for Management Centre: right-click it and select build project; it will take a little while to rebuild.

Next start the application server and then access the Administration Console so that the new store model can be published; the default credentials are wcsadmin/wcsadmin.  Go into store archives: because this is FEP8, under B2B Direct we can see the new Aurora option, which is what we want to work with.  If you are on FEP7, using the Professional environment, or don't want the B2B functionality enabled, then look under Consumer Direct for Aurora.  IBM could do with updating the titles in this area; it might make more sense than it does now.

Select Store Archive

You then need to set the options you want to publish with.  In this case we have changed the name to AuroraB2B and set it to publish as a standalone store rather than as an eSite.

Store Archive Publish Options

Once published you should see the following; while it publishes you will see the console output changing in the RAD environment.

Store Publish has worked

Now there are two things to do.  First, tidy up Dojo; we only need one version in the development environment.  To do this, run a file search within the dojo18 folder for references to ‘dojo131’.  Amazing as it seems there are several, and this issue has been there quite a long time; we had a PMR open in May 2013 and it is still not fixed.  Edit each of the files found that references transparent.gif in dojo131 so that it points at dojo18.  Then you can remove the old Dojo folders for 131 and 15.
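If you have a lot of files to check, the search itself can be scripted.  Below is a rough Java sketch that walks the dojo18 folder and lists every file still referencing dojo131; the workspace path is an assumption, so adjust it to your own environment before running.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class DojoReferenceScan {
    // Return the regular files under root whose contents mention the given
    // string (for example "dojo131"), so we know which files need editing.
    public static List<Path> filesContaining(Path root, String needle) throws IOException {
        try (Stream<Path> walk = Files.walk(root)) {
            return walk.filter(Files::isRegularFile)
                       .filter(p -> {
                           try {
                               return new String(Files.readAllBytes(p)).contains(needle);
                           } catch (IOException e) {
                               return false; // unreadable file: skip it
                           }
                       })
                       .collect(Collectors.toList());
        }
    }

    public static void main(String[] args) throws IOException {
        // The path below is an assumption -- point it at the dojo18 folder
        // in your own RAD workspace.
        Path dojo18 = Paths.get("Stores/WebContent/dojo18");
        if (Files.exists(dojo18)) {
            filesContaining(dojo18, "dojo131")
                    .forEach(p -> System.out.println("Fix reference in: " + p));
        }
    }
}
```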

Cleaning up Dojo removing 131 references

In order to see your store folder you might need to refresh the file system: right-click WebContent and click Refresh, or hit F5.

Refresh File System

You should now have a much cleaner environment: you no longer have all the sample stores and code deployed in the database, the file system has fewer directories, and you know what you are working with; you can see the AuroraB2B store we have deployed.

Clean File System

The next step is to restart the server, otherwise you will get some errors on the home page, and then access the site.  We can then browse to the AuroraB2B store that has been deployed at http://localhost/webapp/wcs/stores/servlet/en/aurorab2b.  Exciting: a clean environment and the new FEP8 starter store.

WebSphere Commerce FEP8 Aurora B2B

WebSphere Commerce CategoryDataBean and Solr

A problem we have recently come across is the way the CategoryDataBean works when Solr is involved in your configuration.  Rather than getting its data from the database, as you might expect, the databean in fact gets its data from Solr if you are using it.

We found this out because we had some code that performed an extract of all the products in a sales catalogue.  When we first set up the sales catalogue, those products belonged to categories that were all linked to the master catalogue, and the extract worked fine.

The sales catalogue was then altered: new categories were created that existed only inside that sales catalogue, so were not under the master catalogue, and the products were moved into them until no master-catalogue categories remained.  When we finished this process the extract had stopped working; it had probably been a gradual decline, but it was only then noticed that the extract contained no products and the files were empty.

We looked and could see nothing obviously wrong.  The extract process first built a list of all the categories in the sales catalogue, then took each category in turn and got a list of its products.  The statements we had for tracing showed the category code was being picked up, but as soon as the product lookup was called we got no products.  The following was the piece of code we were using; nothing complicated, we would set and initialise the category databean, and the storeId, catalogId and langId were being passed in on the scheduled job.

Initial CategoryDataBean Code

We then looked more closely at what was going on and noticed that while the command was running we were seeing requests made to Solr, and it was then we saw what appeared to be the problem.  The following is the Solr request, and we could see that catalog_id was being set to 10001, the master catalogue, when in fact the sales catalogue, 10251, should have been used.  We took the Solr query, ran it directly against the Solr server and tweaked the options to confirm it really was the problem; there is some information on doing this in the article on tuning Solr.

[12/11/14 09:18:45:820 GMT] 00000097 SolrDispatchF 1 org.apache.solr.servlet.SolrDispatchFilter doFilter Closing out SolrRequest: {{params(q=*:*&start=0&debugQuery=false&fl=catentry_id,storeent_id,childCatentry_id,score&facet=true&version=2&rows=5000&fq=storeent_id:("10151"+"10012")&fq=catalog_id:"10001"&fq=parentCatgroup_id_search:(+"10001_60844")&timeAllowed=15000&wt=javabin),defaults(echoParams=explicit)}}

So we then opened a PMR (thanks Mateja), because the Info Center was giving no clues as to what was going on, and started looking at the trace statements, where we noticed the following, which shows the wrong catalogue ID being used: it was being set to 10251 but then became 10001.

[12/11/14 09:18:43:289 GMT] 00000948 ServiceLogger 3   Command parameters: [jobId=333423] [langId=-1] [catalogId=10251][storeId=10053] [jobInstanceId=889982]

SolrSearchByCategoryExpressionProvider  gets the CatalogId from CatalogContext:

[12/11/14 09:18:43:321 GMT] 00000948 CatalogCompon > getCatalogContext() ENTRY
[12/11/14 09:18:43:321 GMT] 00000948 CatalogContex > getCatalogID ENTRY
[12/11/14 09:18:43:321 GMT] 00000948 CatalogContex < getCatalogID RETURN 10001

[12/11/14 09:18:43:321 GMT] 00000948 CatalogCompon < getCatalogContext() RETURN [bDirty = false][bRequestStarted = true][iOriginalSerializedString = null&null&false&false&false][iToken = 2646180:true:true:0]
[12/11/14 09:18:43:321 GMT] 00000948 CatalogContex > getCatalogID ENTRY
[12/11/14 09:18:43:321 GMT] 00000948 CatalogContex < getCatalogID RETURN 10001
[12/11/14 09:18:43:321 GMT] 00000948 SolrSearchByC 1 invoke(SelectionCriteria) Catalog Id: 10001

[12/11/14 09:18:43:321 GMT] 00000948 SolrSearchByC 1 invoke(SelectionCriteria) Search categories: 60846

Looking in more detail at the CategoryDataBean.getProducts code: if the environment is using SOLR search then, to check product entitlement, getCatalogContext gets the catalog context from the service context.  This context holds catalog-specific information, such as the catalog ID.

So even though we are setting the catalogId on the scheduled job, it has no impact; instead we had to modify our code to do the following.  We are now setting the catalogId in the context, and as soon as we did this the code worked.
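To make the behaviour concrete, here is a small self-contained model of the problem.  The class names below are illustrative stand-ins, not the IBM API (the real objects live in com.ibm.commerce packages with different signatures): the point is that when search is enabled the product lookup filters on the context's catalog ID, so setting the ID on the bean or job alone changes nothing.

```java
import java.util.*;

// Simplified stand-ins for the WebSphere Commerce objects involved.
class CatalogContext {
    private String catalogId = "10001";              // defaults to the master catalogue
    String getCatalogID() { return catalogId; }
    void setCatalogID(String id) { catalogId = id; }
}

class SearchBackedCategoryBean {
    private final CatalogContext context;
    private String beanCatalogId;                    // what the scheduled job sets
    private final Map<String, List<String>> productsByCatalog;

    SearchBackedCategoryBean(CatalogContext ctx, Map<String, List<String>> data) {
        this.context = ctx;
        this.productsByCatalog = data;
    }

    void setCatalogId(String id) { beanCatalogId = id; }

    // With Solr enabled the query is filtered by the *context's* catalog ID,
    // not by the ID set on the bean -- the behaviour that broke the extract.
    List<String> getProducts() {
        return productsByCatalog.getOrDefault(context.getCatalogID(),
                Collections.emptyList());
    }
}

public class CatalogContextDemo {
    public static void main(String[] args) {
        Map<String, List<String>> data = new HashMap<>();
        data.put("10251", Arrays.asList("SALE-SKU-1", "SALE-SKU-2")); // sales catalogue only

        CatalogContext ctx = new CatalogContext();
        SearchBackedCategoryBean bean = new SearchBackedCategoryBean(ctx, data);

        bean.setCatalogId("10251");             // original code: no effect on the query
        System.out.println(bean.getProducts()); // [] -- the empty extract we saw

        ctx.setCatalogID("10251");              // the fix: set it on the context
        System.out.println(bean.getProducts()); // [SALE-SKU-1, SALE-SKU-2]
    }
}
```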

New CategoryDataBean Code

None of this is documented at all, and it took us quite a long time to work out why what looked like good code was failing.  Hopefully we will see more updates in the Info Center that explain what is going on and what you need to look out for.


Tuning WebSphere Commerce Solr Search

You have implemented Solr search with WebSphere Commerce, but how good are the results that users of the site get?  When they search on ‘green xmas sprount’ (yes, a typo) is your top-selling Christmas vegetable going to be returned, or will they see something else, or nothing at all?  Hopefully by the end of this article you will see why they might have got nothing.  So the chances are the search does need tuning, and in this article, following on from understanding the WebSphere Commerce Solr integration, we look across the different options.

Test your on-site WebSphere Commerce Solr Search

The first step before we can do anything is to identify how search is running on the site right now, and there are a few tools that can help with that.

Analytics – An important first step is to make sure that your eCommerce site is tagged in the right way, or that you are using the tools within WebSphere Commerce to show you how search is performing.  If you have the correct configuration in your WebSphere Commerce setup, you can use Management Centre to see the top search hits and the top search misses that are taking place.  If you are migrating to FEP7 from a previous feature pack with Solr and are running these stats, you need to move the search configuration to the component.xml file in your search project, from the one in your Commerce project.

The reports are useful: you can see below that the top search misses show us the terms, as well as the suggested keyword that came back and the number of hits that keyword got.  In our case we searched for ‘toy’; Solr returned no matches, but the spellchecker returned ‘top’ as the closest word match.  However, that also got no matches.  These are the kinds of things we want to look at.

WebSphere Commerce – Top Search Misses

Based on the results of the analytics the following areas can then be examined.

  • How relevant are the products being returned for your top search terms, and how many sales do they lead to?
  • When you get no matches, what are the top suggestions being offered to the shopper?
  • If you get no matches and no suggestions, why is this?
  • Do you need to look at your synonyms, replacement terms and search rules to help with the search, or perhaps at the data itself?

The best way to manage this process is through some hard work analysing the results; it could also become a scripted process so it is easy to repeat and test.  Take the top 100 search terms, examine the first 10 results for each, and classify each product's relevance to the search as ‘relevant’, ‘fairly relevant’, ‘irrelevant’ or ‘nothing returned’.  This shows what the end user is seeing on the site, and you should know better than anyone how good those products actually are.

This analysis will then become the basis of the changes we are going to make, as we want to be able to see whether we get improvements.

How can we further analyse the Results?

Now we have analysed what the shopper is seeing, we can try to understand why that is happening.  The most important aspect here is the relevancy score that Solr calculates for us.  You can view this within the store preview environment, if the code is in the page, so you can see the relevancy of each result.

Solr generates this score based on the search terms the user entered and how they match against the fields in the Solr index.  This is a specialised area to understand: you can boost certain fields so that a search term occurring in the short description is worth more than one in the long description, and how close the terms are to each other can also affect the score.  We can set up synonyms and replacement terms, and we can also build search rules, all aimed at producing more relevant results.

The following image shows the Aurora starter store in preview mode at FEP7.  The search was on ‘pink’ and you can see the relevancy score each product has been given.  As part of your initial analysis it is worth capturing these scores, because we can see later how they adjust.

Solr Relevancy Score

The other way you can see the scoring used by Solr, and get a lot more detail, is by looking at the query being issued and then running it directly against the Solr server with debugQuery=true.  Once you get the hang of this it is quite simple to do, and the Solr explain output, once you understand it, will help answer questions such as why one product ranks ahead of another.  How to find the query being used by Commerce depends on the environment and feature pack you are using.  The simplest is the development environment, where typically you run everything in the workspace: I don't need to enable any trace statements because my Solr core requests appear in SystemOut.  If I were doing this in a server environment I would need to look on my Solr server to pick up the requests being made.

When you make a search request in the store, what you are looking for is the search query issued against the CatalogEntry Solr core.  If you have auto-suggest turned on you may see several other requests going into Solr; look for the full search term in the request rather than the auto-suggest lookups.  When you review the log, depending on how many facets there are you could have a much bigger query, but there are only certain parts we are interested in.

Solr Systemout Content

Once we have the query we can begin to pull it apart.  A useful tool to help with this is a URL encoder/decoder; it will help decode the Solr query output in the log.  In the following two screenshots we have the encoded query and the decoded query: everything between the { and } when we look at the SystemOut.  You can see how much easier it is to read when decoded, and we can easily identify all the parameters being passed in this FEP7 query.
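If you prefer to decode the logged parameters programmatically rather than via a web tool, the standard java.net.URLDecoder does the job.  The parameter string below is a shortened, illustrative version of a logged query, not a complete one:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;

public class SolrQueryDecode {
    // Decode a URL-encoded Solr parameter string into readable form.
    public static String decode(String encoded) throws UnsupportedEncodingException {
        return URLDecoder.decode(encoded, "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        // A shortened version of the parameter block logged between { and }.
        String logged = "q=*%3A*&fq=catalog_id%3A%2210001%22"
                + "&fq=parentCatgroup_id_search%3A(%2B%2210001_60844%22)";
        System.out.println(decode(logged));
        // q=*:*&fq=catalog_id:"10001"&fq=parentCatgroup_id_search:(+"10001_60844")
    }
}
```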

Encoded Solr Query

Decoded Solr Query

A couple of things to look out for in that query if you want to run it directly against your Solr server.  The &wt parameter, in this case setting the response type to JSON (in earlier versions it defines a Java type), will cause issues when you run the query in the browser; also look out for the version parameter being specified.  Both can happily be removed before running the statement.
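Stripping those two parameters can be done with a few lines of Java as well; this is just an illustrative helper, not part of Commerce:

```java
import java.util.Arrays;
import java.util.stream.Collectors;

public class SolrQueryCleaner {
    // Drop the parameters (wt, version) that stop the logged query
    // running cleanly in a browser.
    public static String stripForBrowser(String query) {
        return Arrays.stream(query.split("&"))
                .filter(p -> !p.startsWith("wt=") && !p.startsWith("version="))
                .collect(Collectors.joining("&"));
    }

    public static void main(String[] args) {
        String q = "q=*:*&version=2&rows=10&wt=json&debugQuery=true";
        System.out.println(stripForBrowser(q)); // q=*:*&rows=10&debugQuery=true
    }
}
```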

You can then put the parameters directly after your hostname and Solr core; for example in dev you might have something like this for the hostname.

http://localhost/solr/MC_10351_CatalogEntry_en_US/select?<then the Solr query goes here>

or you might use a plugin like Postmaster with Chrome.  The screenshot that follows shows the JSON output at FEP7 when I have entered a request.

Postmaster for Chrome with Solr Query

Lower down in the output we can see the Solr explain that has been generated, showing how the returned products were matched.  It will still look confusing, but there are a few important points to look at, because they will help when you think about the documents returned.

Term Frequency (tf) – the more often the term occurs in the document, the better the relevancy.

Inverse Document Frequency (idf) – how rare the term is across all the documents.  If we search on ‘Black Piano’ then black may occur across a lot of documents, but piano is much rarer and scores higher.

fieldNorm – the more words in a field, the lower its score; this is why you sometimes must look at how your product is described in a field like the short description.

boost – the boosts that have been applied to the fields.
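To see how these factors combine, here is a simplified sketch of the classic Lucene TF-IDF score as used by Solr (per-term score roughly tf × idf² × fieldNorm × boost; the full formula also applies coord and queryNorm factors, so treat the numbers as illustrative only).  It shows why a rare term like ‘piano’ contributes far more than a common one like ‘black’:

```java
public class LuceneScoreSketch {
    // Simplified classic Lucene/Solr per-term score: tf * idf^2 * fieldNorm * boost.
    // (The full formula also includes coord and queryNorm factors.)
    public static double termScore(int termFreq, int numDocs, int docFreq,
                                   int fieldLength, double boost) {
        double tf = Math.sqrt(termFreq);                       // term frequency
        double idf = 1.0 + Math.log((double) numDocs / (docFreq + 1)); // rarity
        double fieldNorm = 1.0 / Math.sqrt(fieldLength);       // shorter field, higher norm
        return tf * idf * idf * fieldNorm * boost;
    }

    public static void main(String[] args) {
        int numDocs = 10_000;
        // Suppose "black" appears in 2000 documents but "piano" in only 20:
        double black = termScore(1, numDocs, 2000, 8, 1.0);
        double piano = termScore(1, numDocs, 20, 8, 1.0);
        System.out.printf("black=%.3f piano=%.3f%n", black, piano);
        // piano scores far higher purely through its idf.
    }
}
```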

Solr Explain output from a query

That response can look very complicated and difficult to understand; one place that might help is an explain interface that has been created at . Make sure that the output being produced has debugQuery=true and that you are producing XML; it also works with V4 of Solr.  Take the output from the query, cut and paste it in, and run it.  It will give you a breakdown of what was found and how it was worked out.  The example below is from a search for “blue” “jean” on the Aurora starter store, showing the rating of the two top products; at least we can see in a simpler fashion what the debug output means.

Visual Explain on the Solr Debug Output

What can we change?

Search Configuration

  • How is your search in WebSphere Commerce set up: are you matching ALL the terms a user enters, or ANY?  There is a big difference in what is returned.  With ANY, each search term is matched individually: if I searched for ‘Red Table’ I would get not only matches containing ‘Red Table’ but also those with just ‘Red’ or ‘Table’, so I would get a ‘Green Table’ if one existed.  Is that good for the shopper?  With ALL I am only going to get matches that contain both ‘Red’ and ‘Table’ in the document, but is that going to limit what I see in terms of long-tail searching?
  • The search configuration type is set up in the searchsetup.jspf file (unless you have customised it); in there you can see a description of the various configuration options.  You will also notice that you can bring back various subsets of the data, such as just SKUs.  This is done by altering the query issued to Solr so it can filter on the type being returned.
  • Phrase Slop (ps) indicates how many additional terms can sit between the terms you are searching for.  So if I search on ‘black couch’ and the description of the product is ‘long black leather and velvet couch’ and I have ps=3, I get a match.  Also, the closer the terms are together in a field, the better that field ranks, so ‘leather black couch’ would receive a higher ranking.
  • Minimum Match (mm) is potentially the biggest influencer in terms of improving your results.  The mm option lets you specify how many of the shopper's terms have to match, which gets us away from the binary IBM settings of ANY and ALL.  If you want to use mm you cannot use ALL, because ALL generates the query with an AND against every term; by using ANY you allow the Solr query parser to decide how many terms must match before a result is returned.  For example mm=4<-1 4<3 means a query with 4 or fewer terms must match all of them, while a query with more than 4 terms must match at least 3.  It opens up long-tail search, where a shopper may know what they want but makes a slight spelling mistake that would have produced no matches under ALL.  There is a new minimum match article available in the search cookbook on using minimum match with FEP6; we have been asking a few questions on there, but it is a really good piece of functionality they are bringing into the mix.
  • Boosting can be defined on both the Query Fields (qf) and the Phrase Fields (pf), with the aim of increasing the relevancy score: when those fields match, their contribution to the score calculation is increased.  For example you might boost the shortDescription field ahead of the longDescription field.
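The mm rules can be surprisingly subtle, so here is a small, self-contained evaluator for the conditional ‘N&lt;value’ form of the spec (a simplification of the full edismax syntax, written from its documented behaviour rather than Solr's source) showing what 4&lt;-1 4&lt;3 requires at different query lengths:

```java
public class MinimumMatchEval {
    // Evaluate an edismax-style mm spec such as "4<-1 4<3" for a given number
    // of optional clauses. Simplified: supports only the conditional "N<value"
    // form, with conditions applied left to right.
    public static int requiredMatches(String mmSpec, int clauseCount) {
        int required = clauseCount;                  // default: all clauses must match
        for (String cond : mmSpec.trim().split("\\s+")) {
            String[] parts = cond.split("<");
            int threshold = Integer.parseInt(parts[0]);
            int value = Integer.parseInt(parts[1]);
            if (clauseCount > threshold) {
                // negative value means "all but that many"
                required = value < 0 ? clauseCount + value : value;
            }
        }
        return Math.max(0, Math.min(required, clauseCount));
    }

    public static void main(String[] args) {
        String mm = "4<-1 4<3";
        for (int terms = 2; terms <= 6; terms++) {
            System.out.println(terms + " terms -> must match "
                    + requiredMatches(mm, terms));
        }
        // 2 to 4 terms: all must match; 5 or 6 terms: at least 3 must match.
    }
}
```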

Management Centre

This was covered in the earlier Solr article, but a run through some of the aspects again is always useful.  It is very easy to get these things wrong and end up attempting to do everything as, say, synonyms.  What we are hoping to see from IBM soon is a way to manage your synonyms and replacement terms in Management Centre but use them in a way that does not impact your Solr query: they are currently expanded before query time, so Solr just sees them as ordinary query terms.  Refer to the IBM cookbook post above on minimum match and have a read of the comments.

Synonyms

Use synonyms to set up relationships between terms that have a similar meaning, especially if you are selling internationally, where a term in one country might have a different equivalent elsewhere.  A ‘chook’ in Australia refers to what a UK shopper would call a ‘chicken’, and we want to be able to search on both.

Do not use synonyms for spelling mistakes such as the following (taken from a customer):

Magazine, Brochure, Leaflet, Panflet, Panflit, Catalogue

Instead, if these are common misspellings you are seeing on the site, they should be set up as replacement terms, so Panflit ‘instead search for’ Pamphlet.

Then the synonym rule would be altered to:

Magazine, Brochure, Leaflet, Pamphlet, Catalogue

That way, when the customer searches on Panflit, the replacement runs first and then the search is expanded with the synonyms.  Even if Pamphlet does not exist we would get matches on the other terms.
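The replacement-then-synonym ordering can be sketched as a small simulation; this is purely illustrative of the ordering, not how WebSphere Commerce implements it internally:

```java
import java.util.*;

public class SearchTermExpansion {
    static final Map<String, String> REPLACEMENTS = new HashMap<>();
    static final Map<String, Set<String>> SYNONYMS = new HashMap<>();
    static {
        // "Instead search for" rules fix common misspellings first...
        REPLACEMENTS.put("panflit", "pamphlet");
        REPLACEMENTS.put("panflet", "pamphlet");
        // ...then the (bi-directional) synonym group expands the corrected term.
        List<String> group = Arrays.asList(
                "magazine", "brochure", "leaflet", "pamphlet", "catalogue");
        for (String term : group) SYNONYMS.put(term, new TreeSet<>(group));
    }

    // Apply replacements first, then synonym expansion.
    public static Set<String> expand(String searchTerm) {
        String term = searchTerm.toLowerCase();
        term = REPLACEMENTS.getOrDefault(term, term);
        return SYNONYMS.getOrDefault(term, new TreeSet<>(Collections.singleton(term)));
    }

    public static void main(String[] args) {
        System.out.println(expand("Panflit"));
        // [brochure, catalogue, leaflet, magazine, pamphlet]
    }
}
```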

Replacement Terms

The replacement terms should correct spelling mistakes that are common among your users, or direct users to appropriate search terms when they use a term that you know has a replacement.

For example, you may have changed products and shoppers are used to looking under a specific product code for your bestseller.  You can use the ‘instead search for’ functionality to take the term or product code they are looking for and replace it with the new code or product name.

Search Rules

The search rules in WebSphere Commerce are very powerful: you can generate a lot of different options that impact how results are returned, boost or lower products within the search, or alter what the user is searching for.

Product Data

Getting the product data and categorisation right is important when you look at how products are scored by Solr.  For example, if you use categoryName as a boosted field and users run a search that exactly matches a category name, the products in that category get a good increase in relevancy score.  So if the user searches on ‘3d tv’ and you have a category called ‘3d tv glasses’, then all the products in that category match and appear in the search results.  They get a boost from the category, and you would also expect them to have ‘3d tv’ in the title and the long description.  Suddenly TVs are not at the top of the list; the 3d tv glasses are.

The same can happen with description length via field normalisation: search on ‘Red Table’, and a product whose short description is ‘Beautiful looking red table that is smooth to the finish and has a gloss layer’ will have a lower relevancy than one described as ‘Large Red Table’, because it has more words in the short description.

Also, when you see random products appearing in your search results it may be down to a field such as the long description containing a term you did not expect, so it is always worth having a look at what is in the data.

Tuning your Search environment is Complicated

If you have got to the bottom, you will have realised that tuning your search, analysing the results and giving the best answers to your customers takes time.  It is an ongoing process: analysing the top terms, creating synonyms, replacements and search rules.  Solr is not the most obvious environment to work with, but the IBM integration is improving as they plug in aspects such as minimum match.  It does take specific skills on the Solr side to get the most from it.

And here are a few reasons why ‘green xmas sprount’ might have returned no results, all of which we can fix:

  • They searched on xmas and the product has Christmas in its title, but we forgot the synonym pairing christmas and xmas.
  • They spelt sprout wrong, of course, searching for ‘sprount’, and we did not have a replacement term.  Because we have picked up people making that spelling mistake, we add one: sprount ‘instead search for’ sprout.
  • We had a match setting of ALL in our Commerce configuration instead of using minimum match, so our Christmas Sprout would not be returned, because the query also demands green, even with a synonym and a replacement term set.  Instead we will set ANY and use the Solr minimum match option, so that matching just two of the terms is enough.

Understanding the WebSphere Commerce Solr Integration

If you use WebSphere Commerce V7 then you may already use the WebSphere Commerce Solr integration for search that is provided in the product, or you might be thinking about using it.  This integration brings together two very complex pieces of software: WebSphere Commerce, which we know is complex, and Solr, an enterprise search platform.

It will have been a trade-off for IBM when working on the integration: give your marketing team a single interface to manage both search and merchandising functionality, while at the same time supporting Commerce functionality like eSites.  It works well and can be customised, but are you really giving customers relevant results, or are they seeing too many no-results pages or irrelevant responses?  If you want to know more and get more from Solr, we will try to bring these different areas together and help you get the most from search: understanding the integration, and improving search relevancy for your customers when they are looking for the products you sell.

Some Useful Terms

First let’s take a look at just some of the components and terms that you will use when working with the WebSphere Commerce Solr integration.

  • Solr – an open source enterprise search platform from the Apache foundation, supporting many features including full-text search, faceted search and rich document support such as Word and PDF.  It powers some of the largest sites in the world, and many eCommerce vendors integrate with it to provide search functionality.
  • Preprocess – the di-preprocess command takes the WebSphere Commerce data and generates a series of tables that flatten the data so it can be indexed by Solr.  A number of preprocess configuration files are provided out of the box, and when you run the command a series of tables starting ti-… will be created in your database instance.  When you become more advanced with Solr you may want to include additional data at preprocess time.
  • di-buildindex – for Solr to run it must have a Solr index; this is built from the data generated by the preprocess component of WebSphere Commerce.  The index then needs to be kept up to date, either through a full build of all the data or a delta build that picks up just the changed data.
  • Structured data – anything from the database, so your product information is part of your structured data.
  • Unstructured data – your PDFs and documents, anything not from the database, that will be returned in your results.  We won't really focus on this type of information yet; there is enough to get right with the structured data.
  • Solr document – a document in the Solr index holds the details of a product, item or category; the document contents are returned as part of the Solr response.
  • Search term – the search terms are the words you are looking for within the Solr index.
  • Relevancy score – this is very important: it is how Solr has ranked each document for a search against the terms.  That score can be affected by a wide variety of options, both Solr-driven and down to how you have structured the data.  Understanding this score is understanding the results being produced.
  • Extended dismax – a query mode used when working with Solr.  Prior to Feature Pack 6 IBM went with the very simple Solr query parser; at FEP6 and up they started using dismax (though not fully).  The Solr parser is limited in what it can do; IBM did produce a cookbook example on how to work around this, but it is pointless, and we explain why in a forthcoming post.
  • Schema.xml – the schema.xml file defines the structure of the Solr configuration.  The file can be modified if you want to, say, add longDescription into your search index, which by default is not used.  You would also make changes here to adjust configuration components such as the spellchecker.
  • Solr core – allows a single Solr instance to hold multiple Solr configurations and indexes; you will see a ‘default’ core that is available but not used.
  • CatalogEntry core – the index that covers everything about the products and items within WebSphere Commerce.  When a query is created you send it against that index: for example http://<myhostname>:<port if not 80>/solr/MC_10351_CatalogEntry_en_US/select?q=*:* will return information from the entry-based index on the products and items in there.  You can see from the core name that it takes the master catalog ID as an identifier, as well as the language; this means we can have multiple language indexes in use.
Solr WebSphere Commerce CatalogEntry Query

  • CatalogGroup core – the index that covers information about the categories within the store.  An example query against the CatalogGroup core: http://<myhostname>:<port if not 80>/solr/MC_10351_CatalogGroup_en_US/select?q=*:*
Solr WebSphere Commerce CatalogGroup Query
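Because the core names follow a convention, the query URLs are easy to construct.  The sketch below simply assembles them; the hostname, port and master catalog ID are placeholders, so use your own values:

```java
public class SolrCoreUrl {
    // Build a match-all query URL for a WebSphere Commerce Solr core,
    // following the MC_<masterCatalogId>_<core>_<locale> naming convention.
    public static String coreUrl(String host, int port, String masterCatalogId,
                                 String core, String locale) {
        return String.format("http://%s:%d/solr/MC_%s_%s_%s/select?q=*:*",
                host, port, masterCatalogId, core, locale);
    }

    public static void main(String[] args) {
        // Placeholder values -- substitute your own host and catalog ID.
        System.out.println(coreUrl("localhost", 80, "10351", "CatalogEntry", "en_US"));
        System.out.println(coreUrl("localhost", 80, "10351", "CatalogGroup", "en_US"));
    }
}
```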

Working with Solr through Management Centre

The marketing team interact with Solr through Management Centre; it provides the ability to manage how results are produced based on what the customer is searching for.  The ‘Landing Page’, ‘Synonym’ and ‘Replacement Term’ tools that follow are all found under the ‘Catalogs’ section of Management Centre, while ‘Search Rules’ are found under ‘Marketing’.  It may not seem obvious to split the functionality up in this way, especially as certain aspects of, say, a replacement term are repeated in a search rule.  But what you will find is that the power of the search rule means more can be done than just altering the terms.  It will really be down to you to decide where you want to manage the functionality, because most users will have access to both areas; very few companies we have come across restrict that access in Management Centre.

Landing Page – although it sits with the other Solr components, the Landing Page does not actually do anything with Solr, which is really important to understand.  If you have a landing page defined for a search that a user makes, it is the first option evaluated.  If there is a match then the landing page is called and the search request never reaches Solr; instead the user gets a browser redirect to the page that has been defined, and the process finishes.

Synonym – a way of increasing the scope of the terms a user is searching on by adding in additional search terms.  For example you might have two terms with nearly the same meaning, such as ‘dog’ and ‘pooch’, or words that describe the same thing, such as ‘shelves’ and ‘shelf’.  A synonym is also bi-directional: if I enter ‘dog’ my search will be for both ‘dog’ and ‘pooch’, and if I enter ‘pooch’ it will also be for ‘dog’.
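The bi-directional expansion can be sketched as below.  The data structure here is purely illustrative – WebSphere Commerce actually keeps its synonyms in the SRCHTERM and SRCHTERMASSOC tables, as covered later.

```python
# Sketch of bi-directional synonym expansion.  Each set is a synonym group;
# entering any member of a group searches on every member of that group.
SYNONYMS = [{"dog", "pooch"}, {"shelves", "shelf"}]

def expand(term):
    """Return the full set of search terms a single entered term expands to."""
    terms = {term}
    for group in SYNONYMS:
        if term in group:
            terms |= group  # expansion works in both directions
    return terms

print(expand("pooch"))  # entering 'pooch' also searches on 'dog'
```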

One area that can cause unexpected results is multi-term synonyms; there is a really good article on why they are so awkward.

To keep your configuration tidy, synonyms should not be used for misspellings; that is where replacement terms come in.  You don’t really want to be producing a search that includes both the misspelt term and the correctly spelt term, it just uses up processing time.

Replacement Term – a way of changing the search terms a user has entered, either with an ‘also search for’ or an ‘instead search for’.  As an example, suppose we pick up in our analytics that a common customer search is for ‘fusa’; we could have an ‘instead search for’ that replaces the term with ‘fuchsia’, correcting the misspelling.  We could use the ‘also search for’ when a term has a non-natural listing: if the search term is ‘notebook bag’, an ‘also search for’ could extend ‘bag’ to ‘sleeve’, so we pick up our products for ‘notebook sleeve’ as well as ‘notebook bag’.

As with synonyms, you must be careful with multi-term replacements as you can get some strange results.  For example, if you have a replacement that says for ‘matteress topper’ instead search for ‘mattress topper’, to pick up the typo, you end up with a search term that looks like this:

+”matteress” +”topper” +”mattress topper”

This is how the query parameters are sent to Solr: we have the individual terms and we have the full term that has been replaced.  The + sign tells Solr it is an AND, so all our search terms must match, and we will get no matches.  The reason is that the misspelt ‘matteress’ is still there, so the AND fails.

The answer is to make single-term replacements, not multi-term; using ANY and ALL as your matching types will also help.
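A sketch of how the failing query above comes about.  The query string format mimics the trace output shown earlier; the replacement table and the function are hypothetical, not WebSphere Commerce code.

```python
# Sketch: a multi-term 'instead search for' replacement still leaves the
# misspelt words ANDed into the query, so nothing can match.
REPLACEMENTS = {"matteress topper": "mattress topper"}  # hypothetical data

def build_query(entered):
    """Mimic the query string WC sends to Solr after term replacement."""
    parts = [f'+"{w}"' for w in entered.split()]  # each entered word is ANDed
    if entered in REPLACEMENTS:
        parts.append(f'+"{REPLACEMENTS[entered]}"')  # replacement phrase appended
    return " ".join(parts)

print(build_query("matteress topper"))
# the misspelt word is still required, so the AND across all terms fails
```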

Understanding how Solr Synonyms and Replacement terms integrate with WebSphere Commerce

The way synonyms and replacement terms work is not the same as if you were using Solr on its own.  Instead WebSphere Commerce mimics some of the functionality that Solr provides: it handles expanding the search terms if there are synonym matches, and the same for replacement terms.  It does this using two tables, SRCHTERM and SRCHTERMASSOC, which allows changes to be seen straight away and helps it support eSites.  But because the expansion is done outside of Solr, it can cause issues when you look to use more of Solr’s own functionality, such as the minimum match parameter.  I will investigate this further, but keep it at the back of your mind: good for integration, not so good for working with Solr.

The final option, and the one with the most functionality, can be found in the Marketing area of Management Centre: the search rules.

Search Rule – this is the key part of ‘searchandising’; it is here that you can really manipulate the interaction between searches on the site and Solr.  The search rule is made up of two parts.

The target, where we can apply the rule to customers in a certain segment, at a certain time of day, when they search on a specific term.  There are a variety of targets, including social participation and external site referral.

WebSphere Commerce Search Rule Target


The action, which allows us to modify and control what the user sees.  We might first adjust the search term, and then bring back a list of products with some boosted in that search.  We can be clever here and create actions such as canned searches, where we control the products a user sees by generating a search term that gets no other product matches.

WebSphere Commerce Search Rule Action


Search rules can also handle branching and experimentation.  They are very powerful, but what is produced at the end is still a query that will be passed into Solr – and again, if you can understand that query you can also influence how the results are delivered.

Where to go from here?

That is a brief introduction to some of the areas we feel are useful when working with Solr; there is a lot more that can be covered.  It is important to understand, because good search results within the site can count for as much as good SEO off the site.

Right now we are creating articles on tuning search and getting the best relevancy, as well as on understanding and using Dismax pre Feature Pack 6.  The importance of Solr is increasing all the time, and with Feature Pack 8 to come there will be more new features to look at using.  It is a very powerful piece of software that needs time and attention to get the best results from.

We also have some existing articles looking at issues with delta builds, and more around potential core issues when changing Feature Packs or installing APARs.

Problems with customising SOLR Search in WebSphere Commerce Feature Pack 7

The following has been confirmed as a defect by WebSphere Commerce support, who are working on a fix.  As highlighted below, you should delete the deployed WAR and transform the Search-Rest project to allow you to make customisations.

After installing FEP7, we wanted to use the new REST search service to retrieve the product details for a new product page layout. In order to do that, we wanted to extend the “store/{storeId}/productview/byId/{productId}” URI in order to display the field1 and field2 fields.

Seemed to be simple enough: first we needed to provide a new search profile for that URI. So we defined it in /Search/xml/config/

<_config:profile name="MOR_findProductByIds_Details" extends="IBM_findProductByIds_Details" indexName="CatalogEntry">
<_config:field name="field1"/>
<_config:field name="field2"/>
<_config:field name="field3"/>
<_config:field name="field4"/>
<_config:field name="field5"/>
</_config:profile>

Then we allowed that search profile on the URI in /Search-Rest/WebContent/WEB-INF/config/

description=”Get product by unique ID” searchProfile=”MOR_findProductByIds_Details”/>

But it wasn’t working, and we had to spend a lot of time finding out why.  We enabled tracing and could see the following:

loadConfig File D:\Javadev\WC70\workspace\.metadata\plugins\org.eclipse.wst.server.core\tmp0\Search\Search-Rest.war\WEB-INF\config/ does not exist

After a lot of looking we could see that the customisations in Search-Rest in the workspace were not being applied, because the server was deploying the “Search-Rest.war” file from the Search project instead.

So we deleted that file, transformed the “Search-Rest” project into a web application project in the workspace, and published everything again.

Now the customisations are working fine.

So the Feature Pack 7 setup seems to be flawed, because the Search-Rest project in the workspace is never used by default.  That would mean the setup in the workspace is wrong, the documentation is wrong, or somewhere we have misunderstood this.  Either way, the above might help you if you are customising SOLR with FEP7.

WebSphere Commerce Feature Pack 7

First thoughts are looking good: plenty of new functionality in WebSphere Commerce Feature Pack 7 to get to grips with.  Things are moving on at a good pace, with plenty for the marketing teams who work with WebSphere Commerce to get excited about.

MC - Feature Pack 7 Intro Page

  • Commerce Composer (for managing layouts)
  • Responsive layouts
  • Widgets (organised, and can now be developed in a much better way than under FEP6)

Widget Options for Page Layouts

I just noticed one neat thing in the preview: I can select the screen size being used, so I can preview what I have set up on a smaller screen and adjust accordingly.

Resize the WebSphere Commerce store preview with feature Pack 7

WebSphere Commerce and Solr Search Statistics

In V7 of WebSphere Commerce you get some great integration with Solr, with configuration and management driven through the Management Centre business tooling.  One aspect you can then see is the top search hits and top search misses reports for how Solr is performing on the site, letting you make the appropriate adjustments to your configuration.  You could also use this alongside your analytics reports.

However, before you get any access to the reports you first need to enable the capture of the Solr search statistics data that will drive them.  The pages in the InfoCentre do not link together very well at the moment, even though all the information is there: what you must do first is update your configuration to capture the data from search.  Even in the dev environment this is not enabled by default, and without it the reports will not work and you will get no results.

Updated November 2014 – be aware that if you move from a previous release to FEP7 or FEP8 you will need to move the configuration changes you have in the component.xml file.  This is because the location of the file has moved into the search app rather than the commerce app.  If you don’t make these changes you will not be capturing any results.

WebSphere Commerce and Dialogue Marketing – useful things to know

In WebSphere Commerce V7, IBM introduced two types of marketing activities: Web and Dialogue.  Web activities are aimed at a user’s interaction with the website – product lists, advertising banners, proximity text, anything that comes from viewing a web page and the actions and targets that then run to bring back appropriate marketing content.  Dialogue activities are based on interactions while you are not on the website: for example, you complete a registration and the store sends you a welcome coupon, or you did not complete your basket and two days later an email tells you so and advertises some other special offers from an eSpot.  Dialogue activities are very powerful, but unlike web activities they are not ready to use out of the box.

So hopefully the following information can help, along with links to useful IBM pages in step 3.

1) Dialogue activities are not enabled out of the box with WebSphere Commerce – even in a starter store you must enable them (which came as a surprise).  To do this you need to turn on the marketing listeners in your instance XML; the main one to enable is the SensorEventListener.  On your live servers the configuration can be done in the configuration tool, which will ensure the changes go into the live server configuration; in your development environment enable them by hand in the instance file.  Without these you can quite happily set activities up, but you will never get any results.  It seems a fairly fundamental thing to miss out when diagnosing what can go wrong, and the links in step 3 should perhaps mention this needs doing.
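For illustration, a listener entry in the instance XML has roughly the shape below.  Treat this as an assumption about the format rather than something to copy verbatim – take the exact component name and class from your own instance file or the IBM documentation, and simply switch enable to true.

```
<component compClassName="com.ibm.commerce.marketing.dialog.util.SensorEventListener"
           enable="true"
           name="SensorEventListener"/>
```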

2) It is important to understand what is created in the database when you test and work with a dialogue activity in stage, because it has the potential to cause problems when you run a propagate to the live server and then keep testing the activities in stage.  For example, a running dialogue activity will create member groups in stage, and if your member group ID is the same as one created in live, the chances are your propagate will fail.

The image below shows a view of the STAGLOG table after running a dialogue activity which sent an email to a customer after registration and also added them to a customer segment.  The problem is that the entries are for a MBRGRP that will be taken over in the propagate process.  If I had already been running this activity on live, the chances are that MBRGRP ID would already have been created, depending on how busy the site is.  Normally a MBRGRP ID would be a customer segment and you would have control over those on stage, but with a dialogue activity they are created as the activity runs.

Because they are created on live as the activity runs and customers register, you may get conflicts.  One answer is to perform a key split between stage and live on the MBRGRP table, and the same for any other tables involved.  Then any IDs created on live would be in the lower range and you would not get any conflicts.

STAGLOG Dialogue Activity Entries

To do this you must also modify the lowerbound and upperbound ranges in the KEYS table in order to really resolve the issue.

The recommended solution is to implement key-splitting between the staging and production (live) environments in order to avoid any key conflict issues on tables which are staged (or even just a subset of staged tables). Here is the public link to the documentation on how to perform key-splitting:

Essentially, key-splitting involves splitting the key ranges between staging and production so that the ranges in the two environments are mutually exclusive.  To accomplish this, you modify the COUNTER, UPPERBOUND and LOWERBOUND columns in the KEYS table so that the staging environment has keys in the mid range and production has keys in the lower range.

To give you a better idea of what we are doing here, here is a reference using a total key range of 0 – 10.  In the KEYS table you’ll find three columns: COUNTER, LOWERBOUND, UPPERBOUND.

Originally, both staging and production environments have the same key ranges:
COUNTER 0, LOWERBOUND 0, UPPERBOUND 10

After key-splitting has been enabled, the ranges are altered as follows:
Staging (mid-range values): COUNTER 4, LOWERBOUND 4, UPPERBOUND 7
Production (low-range values): COUNTER 0, LOWERBOUND 0, UPPERBOUND 3

You can see the ranges are mutually exclusive:

              0 1 2 3 4 5 6 7 8 9 10
ORIGINAL:     |--------------------|
STAGING:              |-----|
PRODUCTION:   |-----|

If you refer to the link provided above, you’ll see the SQL statements that can be used to perform the key-splitting on the KEYS table:

Portion one (This must be run on the Production server. It sets the keys range to a lower range)
update keys set upperbound=(upperbound-lowerbound)/3 + lowerbound
where tablename in (select tabname from stgmertab) or tablename
in (select tabname from stgsitetab)

Portion two (This must be run on the Staging server. It sets the keys range to a mid range)
update keys set upperbound = (upperbound-lowerbound)/3*2 + lowerbound,
lowerbound = (upperbound-lowerbound)/3 +lowerbound+1, counter =
counter+(upperbound-lowerbound)/3 +1 where tablename in
(select tabname from stgmertab)
or tablename in (select tabname from stgsitetab)

Portion three (This sets the upper bound range which can be kept for a second potential staging server in the future)
update keys set lowerbound = (upperbound-lowerbound)/3*2 + lowerbound +1
, counter = counter +(upperbound-lowerbound)/3*2 +1 where tablename in
(select tabname from stgmertab) or tablename in (select tabname from stgsitetab)

Note: these SQL statements alter the key ranges for all of the staged tables, which are maintained by the STGMERTAB and STGSITETAB tables.  You can alter the statements to apply to only a subset of staged tables, which is likely what is needed in your scenario.
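The arithmetic in the three SQL portions can be sketched as below.  Note that because the SQL truncates its integer division, the staging upper bound comes out one lower than the worked 0 – 10 example in the text; the sketch follows the SQL formulas literally.

```python
# Sketch of the key-split arithmetic from the SQL above.  All right-hand
# sides use the original (counter, lower, upper) values, as in the SQL,
# and // mimics SQL integer division.
def split_keys(counter, lower, upper):
    """Return (counter, lowerbound, upperbound) for each environment."""
    span = upper - lower
    production = (counter, lower, span // 3 + lower)      # portion one: low range
    staging = (counter + span // 3 + 1,                   # portion two: mid range
               span // 3 + lower + 1,
               span // 3 * 2 + lower)
    future = (counter + span // 3 * 2 + 1,                # portion three: upper range,
              span // 3 * 2 + lower + 1,                  # kept for a second staging server
              upper)
    return production, staging, future

prod, stage, future = split_keys(0, 0, 10)
print(prod, stage, future)  # three mutually exclusive ranges
```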

3) Some good information on testing the triggers and actions created for an activity, and on testing your dialogue activity, can be found in the Info Centre.  There is also a very good presentation from September 2013 on troubleshooting marketing issues with WebSphere Commerce, with a good section on working with dialogue activities.


WebSphere Commerce Messages CWXFS3201W and CWXFS3202W with SOLR Processes

If you use SOLR with your WebSphere Commerce configurations the chances are you will come across the following messages at some point.

CWXFS3201W: Another CatalogEntry indexing process is still in progress for master catalog “10001”


CWXFS3202W: Another CatalogGroup indexing process is still in progress for master catalog “10001”

These messages are produced when the system believes, rightly or wrongly, that you are already reindexing the data in SOLR and the process is running.  There is not a great deal of information out there on how this works, so this should help.

If you look in the database for the server that is reporting the error you will see two tables: TI_DELTA_CATENTRY and TI_DELTA_CATGROUP.  Each message, I assume, relates to one of the tables.  These tables are used by the SOLR index to know which catentry and catgroup data has changed on the system.  What the build index command first looks for is any entry with a catentry or catgroup of -1.

This link shows the temporary table definitions, and you can see that if a table has a -1 entry with a P status the system will assume that the search index is pending.  We have also noticed that the tables can have a B status (see image below) that is not shown in the Info Centre; we have since found out that the B lock is added by the di-preprocess command.

What we then found is that those entries were not being removed, so the delta index process in the scheduler kept saying it could not run.  The only way to run the command was with the -force tag on the command line; without it, the command also reported the process was already in use.  Put the force tag on and you can make it run, but that does not solve the issue.  Given that a full build in the environment we were looking at took less than a minute, we knew the index build was not actually running.  The image below shows a screenshot of the CATENTRY table; you can see the two records at the top that were causing the job not to run.  We removed those two entries and our delta index build then ran.
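The check we ended up doing by hand can be sketched as below.  The rows here are hypothetical (key, status) tuples standing in for the table contents; on a real system you would run the equivalent SELECT and DELETE against TI_DELTA_CATENTRY and TI_DELTA_CATGROUP.

```python
# Sketch: find the rows that hold the -1 'index pending' lock in one of the
# TI_DELTA_* tables.  A key of -1 with status P (pending) or B (added by
# di-preprocess) blocks the scheduled delta build from running.
rows = [(-1, "P"), (-1, "B"), (10001, "U"), (10002, "U")]  # hypothetical data

def stuck_locks(rows):
    """Return the lock rows (key -1) that stop the delta build from starting."""
    return [r for r in rows if r[0] == -1]

print(stuck_locks(rows))  # these are the entries we had to remove by hand
```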

This causes errors CWXFS3201W and CWXFS3202W


Adding Google +1 to the WebSphere Commerce Aurora Store

This was an interesting one to sort out; you would think a 5 minute job, however it was not quite so simple.  When we added the code from Google into the site we could not see a button.  We started investigating why and found a few things.

First, with Firebug on in Firefox it always throws warnings about the following, which is a typo in the Google code – it should have an upper-case M after the Y (xMidYMid), so there is nothing you can do about this one.

“Unexpected value xMidYmid meet parsing preserveAspectRatio attribute.”

It will cause a script error with Firebug open, and the plus one image will not display in certain cases.

In Chrome you will get warnings about the iFrame and incompatible domains, as you load the iFrame into a different domain.  This happens with several plugins and again is fine.

However, we worked out that the problem was CSS related: the iframe tag is set not to display in the legacy.css file used by Aurora.  Because the Google tag displays in an iframe and the code comes from them, the iframe was not displaying.  Remove the display: none from the CSS and you get your +1 button showing.
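For reference, the offending rule in legacy.css has roughly the shape below; the exact selector in your version may differ, so check your own stylesheet rather than copying this.

```
/* a rule of this shape in legacy.css hides every iframe, including Google's +1 button */
iframe { display: none; }
```

Removing the rule, or overriding it with a more specific selector for the +1 container, lets the button render.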