Category Archives: FEP7

WebSphere Commerce CategoryDataBean and Solr

A problem we recently came across is the way the CategoryDataBean works when Solr is part of your configuration.  Rather than getting its data from the database, as you might expect, the databean in fact gets its data from Solr if you are using it.

We found this out because we had some code that performed an extract of all the products in a sales catalogue.  When we first set up the sales catalogue, those products belonged to categories that were all linked to the master catalogue, and the extract worked fine.

The sales catalogue was then altered: new categories were created that existed only inside that sales catalogue, so they were not under the master catalogue, and the products were moved into them until no master catalogue categories remained.  When we finished this process the extract had stopped working.  It had probably been a gradual decline, but by then it was noticed that the extract contained no products and the files were empty.

We looked and could see nothing obviously wrong.  The extract process first built a list of all the categories in the sales catalogue, then took each category in turn and got a list of its products.  Our tracing statements showed the category code was being picked up, but as soon as it was called we got no products.  The following was the piece of code we were using; nothing complicated, we would set and initialise the category databean.  The storeId, catalogId and langId were being passed in on the scheduled job.

Initial CategoryDataBean Code


We then looked more closely at what was going on and noticed that while the command was running, requests were being made to Solr, and it was then we saw what appeared to be the problem.  The following is the Solr request, and we could see that catalog_id was being set to 10001, the master catalogue, when in fact the sales catalogue, 10251, should have been used.  We took the Solr query, ran it directly against the Solr server and tweaked the options to confirm it really was the problem; there is some information on doing this in our article on tuning Solr.

[12/11/14 09:18:45:820 GMT] 00000097 SolrDispatchF 1 org.apache.solr.servlet.SolrDispatchFilter doFilter Closing out SolrRequest: {{params(q=*:*&start=0&debugQuery=false&fl=catentry_id,storeent_id,childCatentry_id,score&facet=true&version=2&rows=5000&fq=storeent_id:("10151"+"10012")&fq=catalog_id:"10001"&fq=parentCatgroup_id_search:(+"10001_60844")&timeAllowed=15000&wt=javabin),defaults(echoParams=explicit)}}
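To confirm the diagnosis we re-ran the query with the sales catalogue id in place of the master.  The substitution itself is just string handling on the fq parameter; a minimal sketch (the helper name is ours, this is not WebSphere Commerce code):

```java
// Sketch: swap the catalog_id filter value before re-running the query
// directly against the Solr server to confirm the filter is the problem.
public class SolrQueryTweak {
    // Replace the value of a catalog_id:"..." filter with a different catalogue id.
    public static String withCatalogId(String query, String newId) {
        return query.replaceAll("catalog_id:\"\\d+\"", "catalog_id:\"" + newId + "\"");
    }
}
```

Running the tweaked query (catalog_id:"10251") straight against the Solr core brought back the expected products, which told us the filter value, not the data, was at fault.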

So we then opened a PMR (thanks Mateja), because the Info Center gave no clues as to what was going on, and started looking at the trace statements.  We noticed the following, which shows the wrong catalogue ID being used: it was set to 10251 but then became 10001.

[12/11/14 09:18:43:289 GMT] 00000948 ServiceLogger 3   Command parameters: [jobId=333423] [langId=-1] [catalogId=10251][storeId=10053] [jobInstanceId=889982]

SolrSearchByCategoryExpressionProvider  gets the CatalogId from CatalogContext:

[12/11/14 09:18:43:321 GMT] 00000948 CatalogCompon > getCatalogContext() ENTRY
[12/11/14 09:18:43:321 GMT] 00000948 CatalogContex > getCatalogID ENTRY
[12/11/14 09:18:43:321 GMT] 00000948 CatalogContex < getCatalogID RETURN 10001

[12/11/14 09:18:43:321 GMT] 00000948 CatalogCompon < getCatalogContext() RETURN [bDirty = false][bRequestStarted = true][iOriginalSerializedString = null&null&false&false&false][iToken = 2646180:true:true:0]
[12/11/14 09:18:43:321 GMT] 00000948 CatalogContex > getCatalogID ENTRY
[12/11/14 09:18:43:321 GMT] 00000948 CatalogContex < getCatalogID RETURN 10001
[12/11/14 09:18:43:321 GMT] 00000948 SolrSearchByC 1 invoke(SelectionCriteria) Catalog Id: 10001

[12/11/14 09:18:43:321 GMT] 00000948 SolrSearchByC 1 invoke(SelectionCriteria) Search categories: 60846

Looking in more detail at the CategoryDataBean.getProducts code: if the environment is using Solr search, then to check product entitlement getCatalogContext gets the catalog context from the service context.  This context holds catalog-specific information such as the catalog ID.

So even though we were setting the catalogId on the scheduled job it had no impact.  Instead we had to modify our code to do the following, setting the catalogId in the context, and as soon as we did this the code worked.

New CategoryDataBean Code

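In case the code images do not display, the shape of the problem and the fix can be modelled with a simplified, self-contained sketch.  This is an illustrative model of the pattern only, not the real WebSphere Commerce API:

```java
// Simplified model (NOT the real WC classes) of why the scheduled-job catalogId
// is ignored: Solr-backed CategoryDataBean lookups build their filter from the
// catalog context, so the context must be updated before the bean is populated.
public class CatalogContextModel {
    private String catalogId = "10001"; // defaults to the master catalogue

    public void setCatalogID(String id) { this.catalogId = id; }
    public String getCatalogID() { return catalogId; }

    // Mimics the Solr-backed lookup: the filter query comes from the context,
    // whatever catalogId the scheduled job was started with.
    public String buildCatalogFilter() {
        return "catalog_id:\"" + catalogId + "\"";
    }
}
```

Until setCatalogID is called with the sales catalogue id, every product lookup is filtered against the master catalogue, which is exactly the empty-extract behaviour we saw.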

None of this is documented, and it took us quite a long time to work out why what looked like good code was failing.  Hopefully we will see more updates in the Info Center that explain what is going on and what you need to look out for.


Understanding the WebSphere Commerce Solr Integration

If you use WebSphere Commerce V7 then you may already use the WebSphere Commerce Solr integration for search that is provided in the product, or you might be thinking about using it. This integration brings together two very complex pieces of software: WebSphere Commerce, which we know is complex, and Solr, an enterprise search platform.

It will have been a trade-off for IBM when working on the integration: giving your marketing team a single interface to manage both search and merchandising functionality, while at the same time supporting Commerce functionality like eSites.  It works well and can be customised, but are you really giving customers relevant results, or are they seeing too many no-results pages or irrelevant responses?  If you want to know more and get more from Solr, we will try to bring together these different areas and help you get the most from search: understanding the integration, and improving search relevancy for your customers when they are looking for the products that you sell.

Some Useful Terms

First let’s take a look at just some of the components and terms that you will use when working with the WebSphere Commerce Solr integration.

  • Solr – an open source enterprise search platform from the Apache foundation, supporting many features including full-text searching, faceted search and rich document support such as Word and PDF. It powers some of the largest sites in the world, and many eCommerce vendors integrate with it to provide search functionality.
  • Preprocess – the di-preprocess command takes the WebSphere Commerce data and generates a series of tables that flatten the data so it can be indexed by Solr.  A number of preprocess configuration files are provided out of the box, and when you run the command you will see a series of tables starting ti-… created in your database instance.  When you become more advanced with Solr you may want to include additional data at preprocess time.
  • di-buildindex – for Solr to run it must have a Solr index, which is built from the data generated by the preprocess step.  The index then needs to be kept up to date, either through a full build of all the data or a delta build that just picks up changed data.
  • Structured data – the structured data for Commerce is anything from the database so your product information would be part of your structured data.
  • Unstructured data – this would be your PDF documents, anything not from the database that will be returned in your results.  We won’t really focus on this type of information yet; there is enough to get right with the structured data.
  • Solr document – a document in the Solr index refers to the details of a product, item or category; the document contents are then returned as part of the Solr response.
  • Search term – the search terms are the words you are looking for within the Solr index.
  • Relevancy Score – this is very important: it is how Solr has ranked the document when it performs a search against the terms.  That score can be affected by a wide variety of options, both Solr-driven and down to how you have structured the data.  Understanding this score is understanding the results being produced.
  • Extended dismax – a query mode used when working with Solr. Prior to Feature Pack 6, IBM went with the very simple Solr query parser; at FEP6 and up they started using dismax (though not fully).  The Solr parser is limited in what it can do; IBM did produce a cookbook example on how to fix this, but it is pointless, and we explain why in a forthcoming post.
  • Schema.xml – the schema.xml file defines the structure of the Solr configuration.  The file can be modified if you want to, say, add longDescription into your search index, which by default is not used. You would also make changes here if you adjust configuration components such as the spellchecker.
  • Solr Core – this allows a single Solr instance to have multiple Solr configurations and indexes; you will see a ‘default’ core that is available but not used.
  • CatalogEntry Core – the index created that covers everything about the products and items within WebSphere Commerce.  When a query is created you send it against that index; for example http://<myhostname>:<port if not 80>/solr/MC_10351_CatalogEntry_en_US/select?q=*:* will return information from the entry-based index on products and items.  You can see from the core name that it takes the master catalog ID as an identifier, as well as the language, which means we can have multiple language indexes in use.
Solr WebSphere Commerce CatalogEntry Query


  • CatalogGroup Core – the index created that covers information about the categories within the store. An example query against the CatalogGroup core: http://<myhostname>:<port if not 80>/solr/MC_10351_CatalogGroup_en_US/select?q=*:*
Solr WebSphere Commerce CatalogGroup Query

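Both cores follow the same select URL pattern, which is easy to build up if you are scripting queries against them.  A small sketch (host, port and core name here are placeholders, not values from your installation):

```java
// Sketch: build a match-all select URL for a WebSphere Commerce Solr core.
public class SolrCoreUrl {
    public static String selectAll(String host, int port, String core) {
        return "http://" + host + ":" + port + "/solr/" + core + "/select?q=*:*";
    }
}
```

The same helper works for the CatalogEntry and CatalogGroup cores; only the core name changes.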

Working with Solr through Management Centre

The marketing team interact with Solr through Management Centre, which provides the ability to manage how results are produced based on what the customer is searching for.  The ‘Landing Page’, ‘Synonym’ and ‘Replacement Term’ options that follow are all found under the ‘Catalogs’ section of Management Centre, while ‘Search Rules’ are found in ‘Marketing’.  It may not seem obvious to split the functionality up in this way, especially as, at first look, certain aspects of say a replacement term are repeated in a search rule.  But what you will find is that the power of the search rule means more can be done than just altering the terms.  It will really be down to you to decide where you want to manage the functionality, because most users will have access to both areas; very few companies we have come across restrict access in Management Centre.

Landing Page – although it comes with the other Solr components, the Landing Page does not actually do anything with Solr.  That is really important to understand.  If you have a landing page defined for a search a user makes, it will be the first option evaluated.  If there is a match then the landing page is called and the search request never goes near Solr.  Instead the user gets a browser redirect to the page that has been defined, and the process finishes.

Synonym – a way of increasing the scope of the terms a user is searching on by adding in additional search terms. For example you might have two terms with nearly the same meaning, such as ‘dog’ and ‘pooch’, or words that describe the same thing, such as ‘shelves’ and ‘shelf’. A synonym is also bi-directional: if I enter ‘dog’ my search will be for both ‘dog’ and ‘pooch’, and if I enter ‘pooch’ it will also be for ‘dog’.
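The bi-directional expansion can be pictured with a small sketch.  This is an illustration of the behaviour only; in the real product the pairs live in database tables, not an in-memory map:

```java
import java.util.*;

// Sketch of bi-directional synonym expansion: registering one pair makes
// each term expand to the other, so 'dog' finds 'pooch' and vice versa.
public class SynonymExpander {
    private final Map<String, Set<String>> synonyms = new HashMap<>();

    // Register a pair in both directions, as Commerce synonyms are bi-directional.
    public void addPair(String a, String b) {
        synonyms.computeIfAbsent(a, k -> new TreeSet<>()).add(b);
        synonyms.computeIfAbsent(b, k -> new TreeSet<>()).add(a);
    }

    // Expand a single search term into the term plus all its synonyms.
    public Set<String> expand(String term) {
        Set<String> result = new TreeSet<>();
        result.add(term);
        result.addAll(synonyms.getOrDefault(term, Collections.emptySet()));
        return result;
    }
}
```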

One area that can cause unexpected results with synonyms is when setting up multi-term synonyms there is a really good article on why they are so awkward.

To keep your configuration tidy, synonyms should not be used for misspellings; that is where replacement terms are used. You don’t really want to be producing a search that has both the misspelt term and the correctly spelt term; it just uses up processing time.

Replacement Term – a way of changing the search terms a user has entered, either with an ‘also search for’ or an ‘instead search for’. As an example, suppose we pick up in our analytics that a common customer search is for fusa; we could have an ‘instead search for’ that replaces the term with fuchsia, correcting the misspelling. We could then use the ‘also search for’ if they put in a term that may have a non-natural listing: if the search term is ‘notebook bag’, we could have an ‘also search for’ that extends ‘bag’ to ‘sleeve’. That way we pick up our products for ‘notebook sleeve’ as well as ‘notebook bag’.

As with synonyms you must be careful with multiple-term replacements; you can get some strange results. For example, if you have a replacement that says ‘matteress topper’ instead search for ‘mattress topper’ to pick up the typo, you end up with a search term that looks like this.

+”matteress” +”topper” +”mattress topper”

This is how the query parameters are sent to Solr: we have the individual terms and we have the full replacement term.  The + sign tells Solr it is an AND, so all our search terms must match, and we then get no matches.  The reason is that ‘matteress’ is still there, and it is spelt wrong, so the AND fails.

The answer is to make single-term replacements, not multi-term; using ANY and ALL as your matching types will also help.
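The failure mode is easy to demonstrate: with AND semantics a document must contain every required term, so the misspelt term can never match anything.  A tiny sketch:

```java
import java.util.*;

// Sketch: why a multi-term replacement gives no results. With '+' (AND)
// every required term must appear in the document's terms, so the misspelt
// 'matteress' still has to match and never will.
public class AndMatch {
    public static boolean matchesAll(List<String> required, Set<String> docTerms) {
        return docTerms.containsAll(required);
    }
}
```

A document indexed with ‘mattress’ and ‘topper’ fails the query that still carries ‘matteress’, but matches once the replacement is single-term.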

Understanding how Solr Synonyms and Replacement terms integrate with WebSphere Commerce

The way synonyms and replacement terms work is not the same as if you were using Solr on its own.  Instead WebSphere Commerce mimics some of the functionality Solr provides, so it handles expanding the search terms when there are synonym matches, and the same for replacement terms.  It does this using two tables, SRCHTERM and SRCHTERMASSOC, which allows changes to be seen straight away and helps it support eSites.  But because the expansion is done outside of Solr it can have an impact when you look to use some of the more interesting Solr functionality, such as the minimum match parameter.  I will investigate this further, but keep it in the back of your mind: good for integration, not so good for working with Solr.

The final option, and the one with the most functionality, can be found in the Marketing area of Management Centre: the search rules.

Search Rule – this is the key part of ‘searchandising’; it is here that you can really manipulate the interaction between searches on the site and Solr.  The search rule is made up of two parts.

The target, which we can apply to customers in a certain segment, at a certain time of day, when they search on a specific term.  There are a variety of targets, including social participation or external site referral.

WebSphere Commerce Search Rule Target


The action allows us to modify and control what the user sees: we might first adjust the search term, and then bring back a list of products with some boosted in that search.  We can be clever here and create actions such as canned searches, where we control the products a user sees by generating a search term that gets no other product matches.

WebSphere Commerce Search Rule Action


Search rules can also handle branching and experimentation.  They are very powerful, but what is produced at the end is the query that will be passed into Solr, and again, if you can understand that query you can also influence how the results are delivered.

Where to go from here?

That is a brief introduction to some of the areas we feel are useful when working with Solr, and there is a lot more that can be covered.  It is important to understand, because good search results within the site can count as much as good SEO off the site.

Right now we are creating articles on tuning search and getting the best relevancy, as well as on understanding and using Dismax pre Feature Pack 6.  The importance of Solr is increasing all the time; with Feature Pack 8 to come there will be more new features to look at.  It is a very powerful piece of software that needs time and attention to get the best results from.

We also have some existing articles looking at issues with Delta Builds, and some more around potential Core issues when changing Feature Packs or installing APARs.

Improving your WebSphere Commerce SEO Redirects in V7

If you are on V7 of WebSphere Commerce and not making use of its SEO functionality then you are missing a trick.  It can map the usual parameters in a Commerce request to something that looks much better and works for SEO and, for most people, Google.  This is managed from several different tables when it comes to products and categories.  More data is involved when you create your own mappings, but this article looks at how you can use SEO redirects for categories and products.

SEOURL – Holds the mapping between the category and product tokens and the values for them from the catentry and catgroup tables.

SEOURLKEYWORD – Links to the SEOURL table on the SEOURL_ID and provides the keyword entry and mobile keyword for the language, storeent and if the entry is active or not.

SEOREDIRECT – This holds the relationship between an old keyword and a new keyword when you change over your entries, and it’s here that you can do some clever things.

The functionality within WebSphere Commerce manages adding entries into these tables when you run SEOURLKEYWORDGEN, including entries when you change the SEO tag on a category or a product.  It will then insert an entry into the SEOREDIRECT table containing the old keyword and the new keyword.

When you then access the website using the old keyword, WebSphere Commerce will perform the redirect for you, instead of you having to put redirect entries into your webserver.  Even better, you can keep track of how many times the redirect is used via the SEOREDIRECTTRAFFIC table, and have the system clean up entries that are no longer needed when they get no hits over a period of time.

Now this works great if you change an existing category or product, but what if you create a brand new category and move your products around, or want to go from an old product to a new product?  Nothing is automated in this case, because the SEOURLKEYWORDGEN utility is only looking at existing categories or products for changes.

Instead what you need is a little manual intervention.

1) Identify the old entry in the SEOURLKEYWORD table and make a note of its SEOURLKEYWORD_ID, then set its status to 0 so it is inactive.  The inactive status is important, otherwise the SEO functionality will keep using the entry and not redirecting.

2) Identify the new entry in SEOURLKEYWORD (after you have run the keyword generation command) and make a note of its SEOURLKEYWORD_ID.

3) Add an entry to the SEOREDIRECT table: give it a unique ID and add in the old SEOURLKEYWORD_ID and the new one.

4) Go into the Administration Console (port 8002) for the site and refresh the registry so the SEO data is updated.

Now access the site with your old SEO URL and you should find it is redirected to the new one.  If you get any problems then put a trace on this component (*=all).

The trace will only show entries if the redirect is actually attempted, so if you are seeing nothing in the trace then you have missed something in the setup or the registry update.
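The lookup that the four steps above set up behaves roughly like this sketch (the table contents here are illustrative; the real data lives in SEOURLKEYWORD and SEOREDIRECT):

```java
import java.util.*;

// Sketch of the SEO redirect resolution: an inactive old keyword with a
// SEOREDIRECT-style mapping resolves to the new keyword; active keywords
// are served directly and never redirected, which is why step 1 (setting
// the old entry inactive) matters.
public class SeoRedirectLookup {
    private final Set<String> activeKeywords = new HashSet<>();
    private final Map<String, String> redirects = new HashMap<>(); // old -> new

    public void addActive(String keyword) { activeKeywords.add(keyword); }
    public void addRedirect(String oldKw, String newKw) { redirects.put(oldKw, newKw); }

    public String resolve(String keyword) {
        if (activeKeywords.contains(keyword)) return keyword; // served as-is
        return redirects.getOrDefault(keyword, keyword);      // redirect if mapped
    }
}
```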

JR49995 – REST calls with GZIP enabled on webserver

This is another finding from our work with the new REST environment introduced in Feature Pack 7 for WebSphere Commerce.  This one was actually the first issue we hit once we moved from development to the server environment with the updated store model.  One particular page that had been working just fine in development failed to compile when we went across to the server.

This page was not only calling the new REST environment, it was also making a REST request from within the JSP to Commerce itself.  That is a big difference to be aware of if you really want to move your pages over to the new Commerce REST framework: you are now creating a headless app, so it could be running outside of Commerce.  So even when it is a JSP running in Commerce, instead of using databeans it now makes REST requests to Commerce.  It sounds a bit strange, but the request we were using was this 

This meant the request was made from the Commerce JSP out to the IHS webserver (because it has the hostname), back into the application server environment (which could be the same WebSphere instance), serviced, and a response then sent back around to the JSP.  It is an interesting path, and is why fixes like JR49956 are important for performance.

What was happening is the page would fail and tell us there was a bad character at the start of the input; the following comes from the trace we enabled, and the character was always a <.

[18/05/14 14:47:10:851 BST] 00000034 JSONEntityPro < writeTo RETURN
[18/05/14 14:47:10:854 BST] 00000032 SystemErr R org.apache.commons.json.JSONException: Error occurred during input read.

After a lot of tracing of all kinds of aspects, foundation, REST components, and double and triple checking file differences between dev and WAS, we just could not see the issue.  What we kept seeing from the trace was that the outbound request was fine: Commerce received the request for the inventory lookup, serviced it, and built and sent back a JSON response.  The response looked good, and then it would fail when the response was received.  We could make the call to the URL in a browser 

and get back a good response.  And then we realised the one difference: because the request had the server hostname it was always going through the IHS server, unlike the requests that went straight at the SOLR/REST environment.  On the main IHS servers we had GZIP enabled, and we certainly had no GZIP in the dev environment where the page worked fine.

And guess what, there is a fix for a GZIP-enabled webserver: JR49995 – ‘This fix is to add support to gzip function of IHS webserver integration with WebSphere Commerce.’  Before applying it we turned GZIP off, and suddenly the request worked fine and the page compiled with no error.  We turned GZIP back on, it failed; we applied the fix and it worked.

It should be noted this was the first fix we had applied for the brave new FEP7 world, and the update installer is now different: it updates not only your Commerce instance but also the SOLR instance with the fix.  No more manual deployment of SOLR updates; it is all done at the same time.

The moral of this is be prepared to think in a different way when it comes to FEP7, the old thinking is out of the window at times when it comes to debugging, it really is a whole new world.

Feature Pack 7 SOLR/REST and fix JR49956 removing whitespace

We have been doing a lot of work with Feature Pack 7 and its all-new SOLR/REST environment, looking at improving performance and doing the most to move as much data as possible into the search and nav area.  Using those JSON calls makes it much easier to see what is going on in your requests to get data, and by default the layout worked well in Chrome and Firefox, looking like this.

Formatted content in browser before applying JR49956

Nice formatting in the browser before the whitespace is stripped by JR49956

But then we installed a series of fixes for REST, one of which was JR49956 – ‘This fix is to REST tag performance improvement when reading data from search server’. Straight away the nice formatting went when we used the browser to view the contents of a response. Instead it was just one long string of data.

Whitespace removed by JR49956

Formatting has gone inside browser with JR49956 removing whitespace

Upon further investigation, with a PMR open, it turned out that JR49956 was, as we suspected, stripping the whitespace out of the response. This is good because it buys around an 8% performance improvement, but at the same time, as we found, the response could no longer be read easily.
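Conceptually the fix minifies the JSON payload, removing whitespace that only matters for human readers.  A rough sketch of that effect (the real implementation is IBM's, inside the REST tags, not this code, and this naive version ignores escaped quotes):

```java
// Sketch: strip insignificant whitespace from a JSON payload while keeping
// whitespace inside quoted string values, roughly the effect of JR49956.
public class JsonMinify {
    public static String strip(String json) {
        StringBuilder out = new StringBuilder();
        boolean inString = false;
        for (int i = 0; i < json.length(); i++) {
            char c = json.charAt(i);
            if (c == '"' && (i == 0 || json.charAt(i - 1) != '\\')) {
                inString = !inString; // track whether we are inside a string value
            }
            if (inString || !Character.isWhitespace(c)) {
                out.append(c);
            }
        }
        return out.toString();
    }
}
```

The payload is smaller on the wire, but in a browser it renders as one long line, which is why a tool like Postman becomes useful.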

We were then recommended to install Postman into Chrome, and as you can see things look much better. It is always nice to know what a fix changes in WebSphere Commerce and to have a way around any issues it might create, because you really do want to be able to issue those JSON requests and look at the response.

Using Chrome Postman when JR49956 is deployed

The view of a WebSphere Commerce JSON response with JR49956 deployed which removes white space to improve performance.

Problems with customising SOLR Search in WebSphere Commerce Feature Pack 7

The following has been confirmed as a defect by WebSphere Commerce support, and they are working on a fix.  As highlighted below, you should delete and transform the Search-Rest project to allow you to make customisations.

After installing FEP7, we wanted to use the new REST search service to retrieve the product details for a new product page layout. In order to do that, we wanted to extend the “store/{storeId}/productview/byId/{productId}” URI in order to display the field1 and field2 fields.

Seemed to be simple enough: first we needed to provide a new search profile for that URI. So we defined it in /Search/xml/config/

<_config:profile name="MOR_findProductByIds_Details" extends="IBM_findProductByIds_Details" indexName="CatalogEntry">
<_config:field name="field1"/>
<_config:field name="field2"/>
<_config:field name="field3"/>
<_config:field name="field4"/>
<_config:field name="field5"/>
</_config:profile>

Then we allowed that search profile on the URI in /Search-Rest/WebContent/WEB-INF/config/

description="Get product by unique ID" searchProfile="MOR_findProductByIds_Details"/>

But it wasn’t working, and we had to spend a lot of time finding out why. We enabled trace and could see the following:

loadConfig File D:\Javadev\WC70\workspace\.metadata\plugins\org.eclipse.wst.server.core\tmp0\Search\Search-Rest.war\WEB-INF\config/ does not exist

After a lot of looking we could see that the customisations to Search-Rest in the workspace were not being applied, because the “Search-Rest.war” file from the Search project was being deployed instead.

So we deleted that file, transformed the “Search-Rest” project into a web application project in the workspace and published everything again.

Now the customisations are working fine.

So the Feature Pack 7 setup seems to be flawed, because the Search-Rest project in the workspace is never used by default.  This means the setup in the workspace is wrong, the documentation is wrong, or somewhere we have misunderstood.  But the above might help you if you are customising SOLR with FEP7.

How to make CKEditor in Feature Pack 7 work with a migrated store

Updated 31/3/2014 – better code within the ckeditor.jsp file

We are in the process of migrating an environment from Feature Pack 6 to Feature Pack 7, which is presenting a number of interesting issues.  One we noticed was that CKEditor, which is now included in Management Centre for working with your content, failed to initialise.  Remember that to use CKEditor you have to set your preferences in Management Centre to say you want to use it; preferences are at the bottom of the drop-down where you select the MC tool you want to use.

CKEditor Preferences in Management Centre

Set that you want to use CKEditor

What we found was that when we went into a piece of content and selected that we wanted to use CKEditor, we got a screen that opened in the following way.  Here you can see that we have opened Firebug to show the error CKEditor.instances.inputTextField is undefined.

CKEditor.instances.inputTextField is undefined.

If we then refresh with CTRL-F5 we get ReferenceError: editorLocale is not defined, and CKEditor will not start.

CKEditor ReferenceError:editorLocale is not defined


It seems the key issue is that because we have a migrated store we are missing certain entries in EMSPOT and DMEMSPOTDEF, and a change is also needed to CKEditor.jsp.

CKEditor needs URL references for the CSS in use in the storefront.  There are two values, "URL" and "Locales", for the e-spot name="vfile.stylesheetbase", USAGETYPE="STOREFILEREF".

First open <Toolkit>\workspace\LOBTools\WebContent\jsp\commerce\foundation\restricted\CKEditor.jsp and look around line 42. Then add the following code, which is also shown in the next image.

var css = "";
var cssLocales = "";

if (window.dialogArguments.storeCSSURL != null && window.dialogArguments.storeCSSLocales != null) {
    css = window.dialogArguments.storeUriPrefix + window.dialogArguments.storeCSSURL;
    cssLocales = window.dialogArguments.storeCSSLocales;
    if (cssLocales.indexOf(contentLocale) >= 0) {
        css = css.replace("$locale$", contentLocale);
    } else {
        css = css.replace("$locale$", "");
    }
}
You can either overwrite or comment out the existing lines, as has been done below.

Adding in the extra css logic for the ckeditor.jsp file

Ensure you insert these lines into the ckeditor.jsp file

Now entries need to be added to the database for the store you are working with.  You need to make sure that all migrated stores contain these settings.

The following will give you the next keys you can use in EMSPOT and DMEMSPOTDEF.

select max(EMSPOT_ID)+1 from EMSPOT;  (this gives you the next value you can use in EMSPOT)

Then add the entry to EMSPOT to set the stylesheetbase; set <storeId> to the store you are using and <emspotId> to the key value.

 values (<emspotId>,<storeId>,'vfile.stylesheetbase','Base header and footer stylesheet file reference','STOREFILEREF');

Add the first entry into DMEMSPOTDEF:

 values (<dmemspotdefId>,<emspotId>,<storeId>,'URL','css/base$locale$.css');

Add the second entry, adding one to your <dmemspotdefId> value so you do not get a duplicate.

 values (<dmemspotdefId+1>,<emspotId>,<storeId>,'Locales','ar_EG,iw_IL');

With all that created, restart the server to get the data loaded.  I also cleared the temp files under the LOBTools project in my WASPROFILE just to make sure the JSP is recreated.

On my first access to open CKEditor it did still appear to have a problem, so I pressed CTRL-F5 to reload it and it worked; I got the following.

CKEditor working in WebSphere Commerce


You might be able to spot in that image that I have also added the GoogleMaps plugin that Bob Balfe shows here.  It works well: I can embed maps into the content I am creating for WebSphere Commerce, and go and get other plugins too.

Along with Bob, thanks to Matej from WebSphere Commerce support as well for help in getting this going.

WebSphere Commerce Feature Pack 7

First thoughts are looking good: plenty of new functionality in WebSphere Commerce Feature Pack 7 to get to grips with.  Things are moving on at a good pace, with plenty for the marketing teams who work with WebSphere Commerce to get excited about.

MC - Feature Pack 7 Intro Page

  • Commerce Composer (for managing layouts)
  • Responsive layouts
  • Widgets (organised, and can now be developed in a much better way than under FEP6)

Widget Options for Page Layouts

I just noticed one neat thing: in the preview I can select the screen size being used, so I can preview what I have set up on a smaller screen and adjust accordingly.

Resize the WebSphere Commerce store preview with feature Pack 7