Best Practice SEO Guide For Building Websites In Episerver

One of the most fundamental aspects of any web project is making sure your website is easy for users to find on Google. Part of this involves making sure search engine robots can read your website and spider it correctly. Architecting your Episerver project to empower content editors to manage SEO and improve your search engine results page (SERP) ranking will dramatically increase the number of visitors to your website. In today's guide, I'm going to cover some best practice techniques you should be implementing in your Episerver website.

SEO Tab

One of the most powerful features of Episerver is the ability to completely customize it. Unlike some other CMS solutions, it's up to you to decide how to structure your page type definitions so that content editors can add in SEO details. I've worked on a lot of different Episerver projects with many of the UK's top digital agencies and companies, and after a bit of tweaking, the approach below is the one I favour. To start thinking about SEO, let's look at the Alloy sample SEO tab:

episerver_seo_guide_1

On any Episerver project, you will want to implement something similar. Keeping your SEO data consistent across every single content page should be your aim, so defining all your SEO properties once in a base page and then having all your content pages inherit from it is the way to go.

One idea I've been toying around with is building a Dojo SERP preview that would give a little preview of how the page looks in Google, but I haven't had enough time to get around to looking into that yet.

META Properties

On any successful CMS build, content editors will need the ability to edit a page's META tag properties. In recent years, a few of these properties have become less important than they used to be, but it's still good practice to enable them on each page. The main META properties you should include support for are: keywords (not really used much anymore), title (for the SERP), disable indexing (for robots), no follow (for robots) and a description (for the SERP). The META HTML that your page renders should look roughly like this:

<title>[CONTENT INSERTED HERE]</title>
<META NAME="Keywords" CONTENT="[CONTENT INSERTED HERE]">
<META NAME="Description" CONTENT="[CONTENT INSERTED HERE]">
<META NAME="ROBOTS" CONTENT="NOINDEX, FOLLOW">
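
For reference, here's a rough sketch of how a Razor layout or partial might emit those tags. It assumes the view's model is a page inheriting the SeoProperties base class defined in the next code block, so treat the property names as a forward reference rather than code from the Alloy sample:

@* Sketch only - assumes the model is a page inheriting SeoProperties *@
@model SeoProperties

<title>@Model.MetaTitle</title>
<meta name="keywords" content="@string.Join(",", Model.MetaKeywords ?? new string[0])" />
<meta name="description" content="@Model.MetaDescription" />
<meta name="robots" content="@(Model.DisableIndexing ? "NOINDEX" : "INDEX"), @(Model.EnableNoFollow ? "NOFOLLOW" : "FOLLOW")" />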

The code to define these properties would look like this:

using System.ComponentModel.DataAnnotations;
using EPiServer.Core;
using EPiServer.DataAnnotations;
using EPiServer.SpecializedProperties;
using EPiServer.Web;

// Base page that every content page type inherits from, so the SEO
// properties only need to be defined once. Inheriting from PageData
// gives us PageName and the GetPropertyValue/SetPropertyValue helpers.
public abstract class SeoProperties : PageData, ISeoProperties
{
    [Display(
        GroupName = Global.GroupNames.MetaData,
        Order = 100)]
    [CultureSpecific]
    public virtual string MetaTitle
    {
        get
        {
            var metaTitle = this.GetPropertyValue(p => p.MetaTitle);

            // Fall back to the page name when no title has been entered.
            return !string.IsNullOrWhiteSpace(metaTitle)
                ? metaTitle
                : PageName;
        }
        set { this.SetPropertyValue(p => p.MetaTitle, value); }
    }

    [Display(
        GroupName = Global.GroupNames.MetaData,
        Order = 200)]
    [CultureSpecific]
    [BackingType(typeof(PropertyStringList))]
    public virtual string[] MetaKeywords { get; set; }

    [Display(
        GroupName = Global.GroupNames.MetaData,
        Order = 300)]
    [CultureSpecific]
    [UIHint(UIHint.LongString)]
    public virtual string MetaDescription { get; set; }

    [Display(
        GroupName = Global.GroupNames.MetaData,
        Order = 400)]
    [CultureSpecific]
    public virtual bool DisableIndexing { get; set; }

    [Display(
        GroupName = Global.GroupNames.MetaData,
        Order = 500)]
    [CultureSpecific]
    public virtual bool EnableNoFollow { get; set; }
}
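
With the base class in place, each content page type simply inherits from it and automatically picks up the SEO tab. A quick sketch of what that might look like (the page type name, GUID and MainBody property below are placeholders, not taken from the Alloy sample):

// Hypothetical content page type - it gets all the SEO properties
// for free by inheriting from the SeoProperties base page.
[ContentType(
    DisplayName = "Standard Page",
    GUID = "38d57768-e09e-4da9-90df-54c73c61b270")]
public class StandardPage : SeoProperties
{
    [Display(GroupName = SystemTabNames.Content, Order = 100)]
    [CultureSpecific]
    public virtual XhtmlString MainBody { get; set; }
}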

I always find it useful to implement the SEO properties against an interface. Using an interface helps with unit testing and only takes an extra few seconds to implement. My interface would look like this:

public interface ISeoProperties
{
    string MetaTitle { get; set; }
    string[] MetaKeywords { get; set; }
    string MetaDescription { get; set; }
    bool DisableIndexing { get; set; }
    bool EnableNoFollow { get; set; }
}
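
As a concrete example of the unit-testing benefit, any logic that works off these values can be written against the interface and exercised with a plain stub, with no Episerver context needed. The helper and stub below are my own illustration, not part of the original article:

// Hypothetical helper that turns the editor's choices into the
// content of the robots META tag.
public static class RobotsMetaHelper
{
    public static string BuildRobotsContent(ISeoProperties seo)
    {
        var index = seo.DisableIndexing ? "NOINDEX" : "INDEX";
        var follow = seo.EnableNoFollow ? "NOFOLLOW" : "FOLLOW";
        return index + ", " + follow;
    }
}

// In a unit test, a simple stub is all that's required:
public class SeoPropertiesStub : ISeoProperties
{
    public string MetaTitle { get; set; }
    public string[] MetaKeywords { get; set; }
    public string MetaDescription { get; set; }
    public bool DisableIndexing { get; set; }
    public bool EnableNoFollow { get; set; }
}

Calling BuildRobotsContent(new SeoPropertiesStub { DisableIndexing = true }) returns "NOINDEX, FOLLOW", matching the example META tag earlier.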

Robots.txt

I'm assuming most people know what a robots.txt file is, but just in case you don't, it's a file that sits on your server and tells search engine robots how to process your website for indexing. On a plain .NET website, this is usually a developer task: they have to update the file, commit it to source control and wait for it to be published. When we work with a CMS platform, the aim is to make everything editable, including robots.txt.
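
For context, a robots.txt file is nothing more than a few plain-text directives; a minimal example might look like this (the blocked paths and sitemap URL are placeholders only):

User-agent: *
Disallow: /episerver/
Disallow: /util/
Sitemap: http://www.website.com/sitemap.xml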

Luckily, David Knipe wrote a handy little admin plug-in back in the day that allows content editors to update their robots.txt from within the Episerver admin. The plug-in has been kept up to date ever since and works with the latest version of Episerver. It's called 'POSSIBLE.RobotsTxtHandler' and you can download the source code from here. You can also install the package via Episerver's NuGet feed:

episerver_seo_guide_3
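
If you prefer the Package Manager Console, the install is a one-liner (this assumes the Episerver NuGet feed is already set up as a package source in Visual Studio):

Install-Package POSSIBLE.RobotsTxtHandler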

After installing the plug-in, load up your Episerver website and visit the admin section; you will see a new section.

episerver_seo_guide_2

Sitemap

Having a sitemap on your website is good practice and helps search robots index your site. Luckily, with Episerver there are a few third-party plug-ins you can use off the shelf that will solve the problem for you. Over the years, I've tended to use the version by Geta, available here. The plug-in is split into two parts: first, you configure what content you want included in your sitemap; second, you enable a scheduled task to ensure it gets updated.

episerver_seo_guide_4
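
The plug-in itself is also pulled in from the Episerver NuGet feed. The package ID below is the one Geta publish it under, but double-check it against the feed before installing:

Install-Package Geta.SEO.Sitemaps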

To configure the plug-in, load up your Episerver website and again head to the admin section. From here you will see a new section, 'Search engine sitemap settings'.

episerver_seo_guide_7

From the settings page, you will need to create a new sitemap. Configuring the plug-in is fairly straightforward and all the instructions are on the page. I usually create a sitemap that indexes everything, so I tend to leave everything blank. After saving the settings, you will need to enable the scheduled task that updates it.

episerver_seo_guide_6

In the admin 'Scheduled Jobs' section, click on the 'Generate search engine sitemaps' job. Click the 'Start Manually' button and your sitemap will be generated.

episerver_seo_guide_8

After triggering the scheduled task, if you append /sitemap.xml to your website's URL in a browser, you should see an XML file that contains all your website's content.
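
The generated file follows the standard sitemaps.org protocol, so expect output along these lines (the URL, date and values are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.website.com/about-us/</loc>
    <lastmod>2016-01-01</lastmod>
    <changefreq>weekly</changefreq>
    <priority>0.5</priority>
  </url>
  <!-- one <url> element per published page included by your settings -->
</urlset>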

Redirects and Rewrites

There are several considerations you need to decide on before you launch a new site. You need to make sure each web page has a single URL to access it, and if you have re-branded or re-platformed your website, you may need to create 301 redirects to prevent losing your search rankings.

WWW or Non-WWW

One of the most overlooked SEO mistakes a lot of companies make is forgetting to decide on a www or non-www content strategy. Every page on your website should only be accessible from a single URL. If you allow both http://www.website.com and http://website.com to resolve to your webpage, Google will see this as duplicate content and can penalize your search rankings. I've written a more in-depth article, Setting Up Episerver To Always Use WWW Links, that explains how to sort this one out. As the www rule will never change, I recommend putting these types of rules in config to prevent anyone accidentally breaking them. In this article, this is probably the only redirect I would recommend doing in config.
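
For completeness, this is roughly what that config rule looks like with the IIS URL Rewrite module in web.config; swap website.com for your own domain and see the linked article for the full walkthrough:

<system.webServer>
  <rewrite>
    <rules>
      <!-- 301 any non-www request to the www version of the same URL -->
      <rule name="Redirect to www" stopProcessing="true">
        <match url="(.*)" />
        <conditions>
          <add input="{HTTP_HOST}" pattern="^website\.com$" />
        </conditions>
        <action type="Redirect" url="http://www.website.com/{R:1}" redirectType="Permanent" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>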

301 Redirects

There are several ways you can deal with 404 errors and 301 redirects. The bog-standard way is via the web.config and the URL Rewrite module. As I discussed earlier, though, when we work with a CMS platform we want to architect a solution where the business can get on with day-to-day life without needing developer time to do simple things.
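
To show why the config route doesn't scale for content redirects, here's what a single legacy URL looks like as a URL Rewrite rule (placeholder URLs); every new mapping means another rule, a commit and a deployment:

<!-- One hard-coded 301 - fine for a handful, painful for hundreds -->
<rule name="Old about page" stopProcessing="true">
  <match url="^about-us-old$" />
  <action type="Redirect" url="/about-us/" redirectType="Permanent" />
</rule>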

Like robots.txt and the sitemap, we can use a free third-party plug-in, the BVN redirect module, to make our lives easier. Installing the BVN handler has become a lot easier over the last few years, as again it's been bundled up in a NuGet package:

episerver_seo_guide_10

After the plug-in is installed, it provides two functions. The first is 301 redirects. Setting up a 301 redirect is done via the dashboard gadget.

episerver_seo_guide_9

Creating a redirect is simple: in the 'Custom Redirects' tab, add the old URL and the URL you want the 301 to point to, click 'Add' and off you go. From previous project experience, I would say it's good practice to manage this list; if it gets too large, it can affect performance a little. In most cases, after 6 months you can kill off any old redirects.

404 Pages

404 pages can also be managed in MVC with BVN. I wrote a comprehensive guide to installing and using the handler a few years ago in Installing BVN.404Handler for an MVC Project. As time has passed and the plug-in has improved, some of that information is a little outdated: BVN now provides an MVC attribute out of the box, so the third-party NuGet package mentioned there is obsolete.

Conclusion

SEO is an important part of any project, but as a lot of the work happens behind the scenes, it's very easy to forget some of this stuff. The aim of this guide/checklist is to help teams enable good SEO practices on their projects. I know it looks like a lot of work, but after you've done it a few times it's pretty simple and, if leveraged correctly, can boost the number of visitors to your digital home.

