21 December 2018

How to boost EpiServer site performance and reduce visitor drop out rate

Before showing you HOW you can improve your Episerver site performance, I'd first like to tell you WHY you should optimize it.

In today’s world, people don’t want to wait for things. They want them instantly.

Research shows that you have no more than 3 seconds to load your page; after that, you start losing customers. If you want to be taken seriously these days, you have to live up to these expectations.

That’s why performance is one of the keys to making your site successful. Nowadays most traffic comes from 3G mobile networks, which are not super fast. Google knows that, and if your site is slow, it may hurt your position on the search engine results page (SERP).

Fortunately, there is a very handy tool which will help you keep your site in good shape. This tool comes from Google and it is called Lighthouse.

What is Lighthouse?

Lighthouse is a ready-to-use, automated tool which helps you maintain the quality of your webpage. It checks performance, best practices and even SEO. After running it against your site, you will get helpful metrics together with some nice tips on how to improve.

Lighthouse is really easy to use. The easiest way is to open the Chrome browser and go to the "Audits" tab in DevTools. After running an audit, you will see overall numbers about how well your site is performing, with a list of passed audits. You will also see what you should fix and how to fix it. For more advanced usage you can read the official Lighthouse documentation, or, since it is an open-source project, you can even write your own metrics.

One of the most important metrics is called First Contentful Paint: the time the browser needs to render the first bit of content. I don’t have to tell you how important it is to keep it low, so users see that your page is actually loading. Lighthouse gives you a few tips here, one of which is to improve your server response time.

How can you speed up EpiServer site performance?

Before you start optimizing your website, be sure to run an audit and save your results. It will be really inspiring to see how well the optimization went, and you will have a reference to measure the actual improvement. This can be your bargaining chip for a raise. Apart from profiling your EpiServer application, there are some universal tips.

Avoid dynamic properties

The DynamicProperty class is a really nice API for playing with EpiServer objects in a flexible way without writing tons of code, so there is a strong temptation for developers to use it. Unfortunately, the EpiServer docs warn that:

“EPiServer.DataAbstraction.DynamicProperty is a non-cached API for administrative purposes and should not be used on normal templates that are not for administrative purposes. Using this API on your templates will significantly impact performance.”

Switch to Entity Framework

The second big step is to migrate from the default EpiServer Object-Relational Mapper, called Dynamic Data Store, to Entity Framework. There is a really nice blog post about how to do the migration in EpiServer. After reading it you will also know about some additional benefits of this change.

Configure your server properly

I would like to focus on the Lighthouse performance audit. It is divided into four parts: metrics, opportunities, diagnostics and passed audits. The first one gives you the gist of the general condition of your page. Then come opportunities and diagnostics, which can help you upgrade your page. After you apply all the fixes, hopefully you will extend the list of passed audits.

 

Google Lighthouse audit

 

Enable Text Compression

There is a strong correlation between page size and site speed. Fewer bytes to download means faster page loads. The average page size in 2018 was around 1.88 MB. To keep the size of your site small you can compress its source.

Lighthouse automatically detects if you are using any compression on the server. Enabling GZIP is often just one flag on your server, which can improve your metrics significantly.
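Since EpiServer sites typically run on IIS, that flag lives in web.config. Here is a minimal sketch (the exact sections available depend on your IIS version and which compression modules are installed):

```xml
<!-- web.config: enable compression of static and dynamic responses on IIS -->
<system.webServer>
  <urlCompression doStaticCompression="true" doDynamicCompression="true" />
</system.webServer>
```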

Thanks to GZIP compression on the front page of alt.dk, our main HTML, CSS and JS files shrank from the original 380 KB to 86 KB, only 22.63% of the original size.

enable text compression

In the size column, you can see the download size compared to the original size

 

Deliver your page in an efficient way

Besides the size of your page, it is important to deliver it in an efficient way. This is why you should use a content delivery network (CDN) for your static resources.

The general idea is to keep the resources closer to the end user. Your scripts, styles and images are distributed to multiple servers, and users download them from the closest one. Thankfully, it is quite easy to configure a CDN for EpiServer.

Upgrade Protocol

The next thing you can improve is the protocol. With HTTP/1.x, each connection handles one request at a time: for every resource, the server needs to build a connection, send the actual data and then close the connection. Meanwhile, according to Google research, the average number of resources for one site is around 115.

It would be nice to reuse the same connection for multiple resources. Switching to HTTP/2 does exactly that: it multiplexes many requests over one connection, so your resources reach the client faster. If your site is already running over HTTPS, the switch to HTTP/2 should be easy and painless.

Finally, Lighthouse will help you to set a proper browser cache time for your assets, which is helpful for returning users. There is no need to fetch resources again and again if they have not changed.
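On IIS, the cache time is again a web.config setting. A sketch (the 30-day max-age is just an example; pick a value that matches how often your assets change):

```xml
<!-- web.config: send Cache-Control: max-age headers for static assets -->
<system.webServer>
  <staticContent>
    <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="30.00:00:00" />
  </staticContent>
</system.webServer>
```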

Our client Alt.dk is a leading Danish lifestyle magazine using EpiServer CMS. After 2 years of cooperation, the site now ranks in the top 10 of the Danish website index. See Case Study.  

 


 

How can you speed up your frontend?

When you open Chrome DevTools, you can easily check which part of your website is the heaviest. In most cases, it’s the images. Thanks to Lighthouse, here are a few hints on how you can decrease the size of the graphics on your site.

Optimize images

Your site will be displayed on many different devices with different screen sizes and resolutions. Small images won’t be readable on big screens, or may be stretched in a weird way. On the other hand, big images can break the page layout and exhaust mobile network capacity. This is why you should use responsive images on your website.

Different images for different devices

All modern browsers have a special mechanism for serving different versions of an image for different screen sizes. Thanks to this, you can display a wide picture on desktops and the same picture cropped to its most important part on phones.

Here is an HTML markup for it:

<img srcset="image-320w.jpg 320w,
 image-480w.jpg 480w,
 image-800w.jpg 800w"
 sizes="(max-width: 320px) 280px,
 (max-width: 480px) 440px,
 800px"
 src="image-800w.jpg">

You can define different image sources for different widths inside the srcset attribute. There is a nice article about responsive images which will help you better understand the whole idea.

For example, the main image in an article on alt.dk is 1960px wide and weighs about 2 MB; on mobile devices with small screens it is 420px wide and only about 154 KB.

Compress images

The next obligatory step is to compress your images. There are various algorithms which will help you save bandwidth, and fortunately there are a few ready-to-use solutions. Lighthouse not only points out your uncompressed images, it also provides already-compressed versions for download. All you need to do is copy them to your site.

Load only images that are needed

When a user enters your page, especially on a mobile device, they will only see the very top of it: probably your site’s logo, the header, the title of an article or the first few lines. To see the rest, they need to scroll down. But the decision whether to stay or not may be based on the page load time.

To save that time, and also your server resources, you can postpone loading images from the further parts of the site. This technique is called deferring offscreen images, and Lighthouse explicitly checks for it.

When you enter the alt.dk homepage on an iPhone 5, you will get only 13 images at the beginning. After scrolling down another 22 images will be loaded. That is an improvement which saves 2.1 MB for the first page load (more than 60% of all images are not loaded if they are not needed).

How to defer offscreen images

The next opportunity to speed up the loading time of your site is to defer offscreen images. The general idea, as pointed out by Lighthouse, is that users only need to see the images currently in the viewport. All other images, which are mostly below it, can be loaded later, or even never if the user decides to leave your site. To make it happen, you need to change the HTML for every image:

<!-- An image that eventually gets lazy loaded by JavaScript -->
<img class="lazy" src="placeholder-image.jpg" data-src="image-to-lazy-load.jpg" alt="I'm an image!">
<!-- An image that is shown if JavaScript is turned off -->
<noscript>
  <img src="image-to-lazy-load.jpg" alt="I'm an image!">
</noscript>

Then add some JavaScript that switches data-src to src when the image is near the viewport. To detect that moment you can use IntersectionObserver. I strongly recommend reading an awesome article about it.
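Here is a minimal sketch of that JavaScript (the lazy class and data-src attribute match the markup above; the 200px margin is an arbitrary choice):

```javascript
// Swap the placeholder for the real image URL.
function swapDataSrc(img) {
  if (img.dataset && img.dataset.src) {
    img.src = img.dataset.src;
  }
  return img;
}

// Observe lazy images and swap them just before they enter the viewport
// (only in a browser - IntersectionObserver is a browser API).
if (typeof IntersectionObserver !== 'undefined' && typeof document !== 'undefined') {
  var observer = new IntersectionObserver(function (entries, obs) {
    entries.forEach(function (entry) {
      if (entry.isIntersecting) {
        swapDataSrc(entry.target);
        obs.unobserve(entry.target); // each image only needs swapping once
      }
    });
  }, { rootMargin: '200px' }); // start loading a bit before the image scrolls in

  document.querySelectorAll('img.lazy').forEach(function (img) {
    observer.observe(img);
  });
}
```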

Optimize parts of the layout

In every modern design, you can find small graphics displayed all over the site: the logo, a fancy button, arrows or lines which are really hard to render using only HTML and CSS.

The first idea is to use PNG files. Unfortunately, this approach has a few disadvantages. Let's say you need to display 4 different ornaments in 3 different colors on your site. Then you need to include 12 additional files, which means 12 additional requests to your CDN server. That is time- and resource-consuming. You can combine all these images into one sprite, but it is still hard to add new colors or elements, and hard to scale.

A better approach is to use inline SVG. You can change the color from code, and since it is a vector-based format, the images scale easily. This approach is recommended by Lighthouse, but it can lead to another issue: when your SVG elements are complicated, your DOM tree can grow too large. Lighthouse recommends that you:

  • keep fewer than 1,500 nodes in total,
  • with a maximum depth of 32 nodes,
  • and no parent node with more than 60 child nodes.

So there is an even better solution: you can convert all your SVG elements into a font with a really simple tool called IcoMoon. Now your ornaments are scalable, you can change their colors using CSS, and you download them in a single request for the custom font. For example, to display your clock icon, all you need is this HTML:

<i class="icon-">clock</i>

As you can see now your ornaments will be optimized and easy to maintain.
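For the curious, the CSS such a tool generates looks roughly like this (a sketch; the actual font-family name and file paths come from your IcoMoon download):

```css
/* Load the generated icon font (file names are examples). */
@font-face {
  font-family: 'icomoon';
  src: url('fonts/icomoon.woff') format('woff');
}

/* Apply the font to every icon- element and turn on ligatures,
   so the text content ("clock") is rendered as the matching glyph. */
[class^="icon-"], [class*=" icon-"] {
  font-family: 'icomoon';
  font-style: normal;
  font-feature-settings: "liga";
  font-variant-ligatures: discretionary-ligatures;
}
```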

 

Summary

Website speed and performance are critical to any business's success. A slow-loading page can put off a huge chunk of visitors from exploring your website and learning more about your business. Luckily, with the help of Google Lighthouse, you can analyze how your site performs: it points you to the issues on your site and recommends practical solutions.

And if your site is running on EpiServer, then the steps I laid out above can boost your website's performance drastically. So try them out, and if you have any further questions, feel free to leave them in the comments below or just email me and I'll be happy to answer them.


24 April 2018

How to store custom data in Episerver – Part IV: Migrate to Entity Framework

In the previous parts of this series, I demonstrated how you can store custom data in Episerver using the Dynamic Data Store and how to make the most out of it. However, DDS has many serious drawbacks. In this blog post, I want to present to you an alternative solution which works much better: Entity Framework.

Problems with Dynamic Data Store

There are two main problems with Episerver’s DDS:

  • it has poor performance
  • it offers only a limited way of querying data

DDS vs Entity Framework performance

To compare Dynamic Data Store and Entity Framework performance, let’s use an example from Part III of this blog series. We used the PageViewsData class and put its store in a separate table to boost the performance of DDS.

For test purposes I created another class named EntityPageViewsData which reflects DDS’s PageViewsData:

using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;

namespace Setapp.DataStore
{
    public class EntityPageViewsData
    {
        [Key]
        public int Id { get; set; }

        [Index("IDX_PageIdKey")]
        public int PageId { get; set; }

        public int ViewsAmount { get; set; }
    }
}

Let’s start with filling the table. The biggest difference here is that in Entity Framework you can add multiple entries to a table at once (in DDS you need to add entries one by one; there's no bulk action for it), and that speeds things up a lot! I am adding 50,000 rows to each table:

var store = typeof(PageViewsData).GetOrCreateStore();

for (int i = 0; i < 50000; i++)
{
    store.Save(new PageViewsData
    {
        PageId = i,
        ViewsAmount = i
    });
}

DbSet<EntityPageViewsData> entityPageViewsDatas = applicationDbContext.PageViewsData;

var entries = new List<EntityPageViewsData>();

for (int i = 0; i < 50000; i++)
{
    entries.Add(new EntityPageViewsData
    {
        PageId = i,
        ViewsAmount = i
    });
}

entityPageViewsDatas.AddRange(entries);
applicationDbContext.SaveChanges();

Here’s a comparison of the average time of adding the entries to the tables using both frameworks:

 

migrate to Entity Framework

Now that makes a difference, doesn’t it?

Ok, but what about a read operation? Let’s compare a few cases here and start with searching for a single object by an indexed column:

store.Items<PageViewsData>().FirstOrDefault(pageViewsData => pageViewsData.PageId == 25000);

vs

entityPageViewsDatas.FirstOrDefault(pageViewsData => pageViewsData.PageId == 25000);

 

migrate to Entity Framework episerver

The difference here isn’t that big but it gets worse when we search with an unindexed column:

store.Items<PageViewsData>().FirstOrDefault(pageViewsData => pageViewsData.ViewsAmount == 25000);

vs

entityPageViewsDatas.FirstOrDefault(pageViewsData => pageViewsData.ViewsAmount == 25000);

 

migrate to Entity Framework episerver

Let’s go further and see what happens if we want to get and materialize the whole collection filtered by an unindexed value:

store.Items<PageViewsData>().Where(pageViewsData => pageViewsData.ViewsAmount > 25000).ToList();

vs

entityPageViewsDatas.Where(pageViewsData => pageViewsData.ViewsAmount > 25000).ToList();

 

migrate to Entity Framework episerver

The biggest problem here is the materialization of the objects, which DDS handles much, much worse. The gap narrows if you only count the objects, though you can still clearly see the advantage of Entity Framework.

migrate to Entity Framework episerver

Other Entity Framework advantages

Well, let’s be honest: Entity Framework can do at least the same things as DDS, and it can do them much faster. Besides that, what are the most important features missing from Dynamic Data Store?

In DDS you can only query a single table. There is no way to use LINQ to build more complicated queries which would join tables or perform group-by operations and sort the results. So even though LINQ syntax allows you to write a piece of code like this:

store.Items<PageViewsData>()
    .GroupBy(pageViewsData => pageViewsData.PageId)
    .Select(group => new
    {
        PageId = group.Key,
        Count = group.Count()
    })
    .OrderBy(obj => obj.PageId)

and even though it compiles with no problems, you will get a runtime exception because DDS cannot form a valid SQL query from this LINQ code. Entity Framework has no problems with that.

Another big advantage of Entity Framework over DDS is schema changes: with DDS it can be very problematic to change a class stored in the database.

For example, if you want to change the type of a property or add a new one, updating the store definition gets very complicated if you use a (much faster) custom table. In Entity Framework, you can use the migrations mechanism, which can automatically change the structure of your database when needed.
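As a sketch, with Entity Framework 6 that boils down to a few Package Manager Console commands (the migration name here is just an example):

```
PM> Enable-Migrations
PM> Add-Migration AddTimestampToPageViews
PM> Update-Database
```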

Final Words

You can find the whole solution on GitHub and test it yourself. As you can see, DDS is very limited in its functionality and its performance is poor, even when optimized with a custom table. To use Entity Framework in Episerver, you don't even have to include any additional libraries: it's one of Episerver's dependencies anyway!

There is no reason not to start using Entity Framework instead of DDS, it will surely help you create better performing software.

If you missed my previous three blog posts from the 'How to store custom data in Episerver' series, you can check them out below:

P.S. At Setapp, we've got some great tech positions open. If you want to work in a challenging environment and deliver meaningful projects, then do check them out now. 

 

15 February 2018

How to store custom data in Episerver – part III: Separate custom big tables

In the first two parts of this series, I wrote about how you can store your data using Dynamic Data Store (Part I) and how Episerver implemented DDS (Part II). Now we’ll learn about a more effective way of storing your data in Dynamic Data Store.

By default, Episerver keeps the data of all stores in one big table (tblBigTable), which, as you may guess, isn't great for performance. If you search for data in one store, the code actually needs to filter out all the entries from other stores in the big table. To make this more efficient, you should have a separate table in your database for each Dynamic Data Store.

Creating a custom table

To demonstrate the solution, let’s use an example from Part I - let’s store views of pages with CustomTablePageViewsData class:

using EPiServer.Data;
using EPiServer.Data.Dynamic;
using System;

namespace Setapp.DataStore
{
    [EPiServerDataStore(AutomaticallyCreateStore = true, AutomaticallyRemapStore = true)]
    public class CustomTablePageViewsData
    {
        public Identity Id { get; set; }

        [EPiServerDataIndex]
        [EPiServerDataColumn(ColumnName = "PageId")]
        public int PageId { get; set; }

        [EPiServerDataColumn(ColumnName = "ViewsAmount")]
        public int ViewsAmount { get; set; }
    }
}

Obviously, first you need to create your own table in the database. The best approach is to create it automatically from your code on application startup: create a class which implements the IInitializableModule interface and declare its dependency on the DataInitialization class. You will also need a database connection, which you can get by calling:

var databaseHandler = ServiceLocator.Current.GetInstance<IDatabaseHandler>();
using (var connection = new SqlConnection(databaseHandler.ConnectionSettings.ConnectionString))
{
    connection.Open();
}

So here’s the class that you should have so far:

using EPiServer.Data;
using EPiServer.Framework;
using EPiServer.Framework.Initialization;
using EPiServer.ServiceLocation;
using System.Configuration;
using System.Data.SqlClient;

namespace Setapp.DataStore
{
    [InitializableModule]
    [ModuleDependency(typeof(DataInitialization))]
    public class CustomBigTableInitializer : IInitializableModule
    {
        private const string ConnectionStringName = "EPiServerDB";

        public void Initialize(InitializationEngine initializationEngine)
        {
            var databaseHandler = ServiceLocator.Current.GetInstance<IDatabaseHandler>();
            using (var connection = new SqlConnection(databaseHandler.ConnectionSettings.ConnectionString))
            {
                connection.Open();
            }

        }

        public void Uninitialize(InitializationEngine initializationEngine)
        {
        }
    }
}

Here’s a general string pattern that can be used to generate an SQL query to generate any table for DDS:

private const string CreateTableSql = @"if OBJECT_ID('dbo.{0}', 'U') is null 
    CREATE TABLE [dbo].[{0}] 
    ([pkId] bigint NOT NULL, 
    [Row] int NOT NULL default(1) CONSTRAINT CH_{0} CHECK ([Row]>=1), 
    [StoreName] nvarchar(128) NOT NULL, 
    [ItemType] nvarchar(512) NOT NULL, 
    {1}
    CONSTRAINT [PK_{0}] PRIMARY KEY clustered([pkId],[Row]), 
    CONSTRAINT [FK_{0}_tblBigTableIdentity] FOREIGN KEY ([pkId])
    REFERENCES [tblBigTableIdentity]([pkId])); ";

{0} is where the name of the table goes, for example tblPageViewsData.
{1} is the place for a list of columns which correspond to the public properties of the object that is supposed to be stored. In our example we would need the following list:

[PageId] int null,  
[ViewsAmount] int null,

Column names need to correspond to the names used in the EPiServerDataColumn attribute in the CustomTablePageViewsData class. If you skip this attribute, you need to follow the default Episerver naming convention; in our case you'd use the names Integer01 and Integer02, simply because we have two integer properties in PageViewsData (PageId and ViewsAmount). If you had another property of type string, you'd have to create a column called String01, and so on. The naming convention is described more precisely in the DDS class mapping section of Part II.

What about the rest of the query? At the beginning you check whether the table already exists. A DDS table always needs a few extra columns: pkId, Row, StoreName and ItemType (their purpose is also described in Part II). Note that if you map all properties of the class to appropriate columns, the Row column will always contain the value 1. That's exactly what you want, because this way one object won't be split into multiple rows in the database.

Your custom table also requires a primary key on [pkId] and [Row] columns as well as a foreign key which references a primary key column pkId in tblBigTableIdentity table.

Adding indexes to your table

You can also create indexes on chosen columns, and their names do not need the Indexed_ prefix. Here's an example pattern string for creating an index:

@" IF NOT EXISTS(SELECT * FROM sys.indexes WHERE Name = 'IDX_{0}_{1}')
    CREATE NONCLUSTERED INDEX [IDX_{0}_{1}] 
    ON [dbo].[{0}]([{1}]) 
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON); "

{0} is the name of a table.
{1} is the name of a column you want to put an index on.

Let’s create a DynamicDataStoreSqlProvider class which will generate an SQL query for creating the table. Here’s how the whole class looks:

using System.Collections.Generic;
using System.Text;

namespace Setapp.DataStore
{
    public class DynamicDataStoreSqlProvider
    {
        private const string CreateTableSql = @"if OBJECT_ID('dbo.{0}', 'U') is null 
        create table [dbo].[{0}] 
        ([pkId] bigint not null, 
        [Row] int not null default(1) constraint CH_{0} check ([Row]>=1), 
        [StoreName] nvarchar(128) not null, 
        [ItemType] nvarchar(512) not null, 
        {1}
        constraint [PK_{0}] primary key clustered([pkId],[Row]), 
        constraint [FK_{0}_tblBigTableIdentity] foreign key ([pkId])
        references [tblBigTableIdentity]([pkId])); ";

        public string GetCreateTableSql(string tableName, string sqlTableColumns, string storageName, IEnumerable<IEnumerable<string>> sqlCreateIndex)
        {
            return string.Format(CreateTableSql, tableName, sqlTableColumns) + GetCreateIndexSql(tableName, sqlCreateIndex);
        }

        private string GetCreateIndexSql(string tableName, IEnumerable<IEnumerable<string>> sqlCreateIndex)
        {
            var stringBuilder = new StringBuilder();
            foreach (IEnumerable<string> indexColumns in sqlCreateIndex)
            {
                foreach (string indexColumn in indexColumns)
                {
                    stringBuilder.Append(GetIndexCreationQuery(tableName, indexColumn));
                }
            }

            return stringBuilder.ToString();
        }

        private string GetIndexCreationQuery(string tableStorageName, string columnName)
        {
            return GetIndexCreationQueryWithReadyColumnsNames(tableStorageName, columnName);
        }

        private string GetIndexCreationQueryWithReadyColumnsNames(string tableStorageName, string columnNamesForIndexName)
        {
            return string.Format(
                @" IF NOT EXISTS(SELECT * FROM sys.indexes WHERE Name = 'IDX_{0}_{1}')
                    CREATE NONCLUSTERED INDEX [IDX_{0}_{1}] 
                    ON [dbo].[{0}]([{1}]) 
                    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON); ",
                tableStorageName,
                columnNamesForIndexName);
        }
    }
}

Connecting a custom table to Dynamic Data Store

Once you have created the table, you have to tell Episerver to connect it with the store of your object. First you assign the name of the store to the type of the object:

if (GlobalTypeToStoreMap.Instance.ContainsKey(ObjectType))
{
    GlobalTypeToStoreMap.Instance.Remove(ObjectType);
}

GlobalTypeToStoreMap.Instance.Add(ObjectType, StoreName);

Then you need to assign the table name to a definition of the store:

var parameters = new StoreDefinitionParameters
{
    TableName = TableName,
};

GlobalStoreDefinitionParametersMap.Instance.Add(StoreName, parameters);

And that’s all you need to do! Episerver will handle the rest and create the store on the first call of:

typeof(CustomTablePageViewsData).GetOrCreateStore();

or

typeof(CustomTablePageViewsData).CreateStore();

Then you can use your store just as you would if it were kept in the default big table.

Optimization results

I created two classes for page views data with identical properties as in the example above: DefaultTablePageViewsData, stored in the default big table, and CustomTablePageViewsData, stored in a separate custom table. Additionally, I created another class, FakeData, which is also stored in the default big table, to simulate having more stores in the default table.

I added 50,000 objects to each store (including FakeData), filling the ViewsAmount properties from 1 to 50,000. Then I measured the time needed to retrieve an object with ViewsAmount equal to 25,000:

typeof(DefaultTablePageViewsData).GetOrCreateStore()
      .Items<DefaultTablePageViewsData>()
      .FirstOrDefault(item => item.ViewsAmount == 25000);

typeof(CustomTablePageViewsData).GetOrCreateStore()
      .Items<CustomTablePageViewsData>()
      .FirstOrDefault(item => item.ViewsAmount == 25000);

Here’s the average time (in milliseconds) to get the values for both cases:

episerver with fakedata store

As you can see, retrieving data from a store in the default big table takes 3 times longer in this case! No wonder, as there is twice as much data in the table. And that's quite an optimistic scenario: normally you'd probably think it's fine to create each store in the same big table, which would then grow and grow and grow…

Even if we get rid of the FakeData store (so both the default big table and the custom table contain the same number of rows!), the test shows that getting data from the default big table takes twice as long!

episerver without fake data store

I also compared the times of adding the 50,000 rows to the DefaultTablePageViewsData and CustomTablePageViewsData stores, omitting the FakeData store this time. You might think there shouldn't be a big difference, since both your custom table and the default big table start out empty. Unfortunately, that's not the case.

filling store data episerver

As you can see, filling the store in the default big table takes about 25% longer!

Conclusion

As you can see, it's worth the effort to separate your stores in the database; it will definitely boost the performance of your website. Here's the whole example project to download, including the performance tests. You can run them and see the results yourself by going to yourdomain/performance-test. It can take about 20 minutes before it finishes and the page loads, so be patient.

So are we there yet? Is this the best way to store custom data in an Episerver application? Surely not: in Part IV you'll learn about an easy alternative to DDS which is faster and more flexible. Coming soon!

P.S. At Setapp, we are a team of talented and experienced .NET developers working on Episerver and other CMS solutions. If you’d like to be part of our growing team, get in touch with us!

7 December 2017

How to choose the perfect CMS for your media site [with use cases]

If you’re an aspiring media or publishing company then you’re probably on the lookout for the best content management system (CMS) that can help you manage your content efficiently. So, what is the perfect CMS for your media site?

This, of course, depends on your needs, your budget, how urgently you need it and the size of your site!

 

most popular cms in the US

WordPress is the most popular CMS in the US according to BuiltWith

 

Before you choose the perfect CMS for your site, you need to decide whether to go for a ‘Ready-to-use’ or an On-Premise CMS solution. Let’s look at the pros and cons for both and what CMS is the right choice for you.

1. Ready-to-use CMS

A lot of resources on the internet refer to them as ‘cloud-based’ CMSs. I like to call them ‘ready-to-use’, simply because on-premises (or on-prem) CMSs (we’ll come to those in a moment) can also be cloud-based.

So, in layman’s terms, with ‘ready-to-use’ CMSs all you have to do is create an account, log in and get started. Sometimes you’ll also want to configure your own domain, but that’s up to you.

'Ready-to-use' CMS use cases

  1. You’re starting a gadget review website. You’re the only one managing the content and don’t have any co-editors.
  2. You’d like to get quick feedback on whether your audience appreciates your content. You can use a ready-to-use CMS as an MVP (minimum viable product) to validate your idea.
  3. If you want to develop a digital version of your existing print magazine.
  4. You have your own business, and you care more about the content and not how it looks and feels.

Examples of 'ready-to-use' CMS

Pros and cons of a ‘ready-to-use’ CMS:

Pros:

  1. Easy to set up: You don’t need any technical know-how. You can set up your account instantly and start writing your first article within minutes.
  2. No maintenance: You’re not responsible for maintaining the servers, nor do you need any knowledge about them.
  3. Faster updates: You don’t have to wait for an update; if there’s a new version, you’ll get it out of the box!

Cons:

  1. Restrictions: If you want to build something unique and feature-rich, this might not work for you. With ‘ready-to-use’ solutions you have to work with what you’re given.
  2. Lack of customization options: “Oh! I’d like to change how posts are positioned” or “I’d like to change the button colors to make them consistent with my brand”. Neither of those may be possible.
  3. Usage caps: Let’s suppose the CMS you choose has a daily limit of 20k users on its top plan, and one day you start averaging 30k+ daily users. What will you do? Your only option is to migrate to a new CMS capable of handling your traffic.

2. On-premises (or On-prem) CMS

Here your own servers store the software. You can either:

  1. Rent infrastructure from 3rd party cloud providers such as Google Cloud, Azure or AWS and install your CMS there.
  2. Install it locally on your premises. Yes, this means having physical servers that store your data inside the walls of your organization.

On-prem CMS use cases

  1. You are an established business with existing content but an old system, and you would like to migrate to a new or different one.

Setapp helped vorseborn.dk to migrate its content to alt.dk, which uses a different CMS.

  2. You’re growing fast and already have some content. In this case, there is a chance you will hit the limits of your current solution, so you are looking for an alternative (custom) solution.
  3. You want to store sensitive user data (such as names and addresses). For example, storing data of EEA (European Economic Area) nationals in non-EEA countries is only possible when a sufficient level of protection is assured.

Examples of on-prem CMS:

WordPress.org, Episerver, Drupal, and Joomla - for all of these you have to install the software on your own servers.


Episerver is popular in the Nordics, especially Sweden and Norway. (source: BuiltWith)

Pros and cons of having an On-prem CMS in your organization:

Pros:

  1. Full control - You get complete control over how things are set up. The customization possibilities are greater and you are free to install 3rd party plugins.
  2. Easier to integrate - It’s easier to integrate with services that aren’t available in ‘ready-to-use’ offerings - for example, when you’d like to add advanced search capability to your website using Elasticsearch.
  3. Flexibility - It gives you the option to choose your server location. This can be crucial if the law requires you to store sensitive data in the country where you’re providing the service.

Cons:

  1. Technical knowledge - It’s not something you can install in a jiffy! You’ll probably require expert technical assistance to install it. For a CMS hosted inside your organization, you may even have to hire additional staff, or train existing staff, for the upkeep of your servers.
  2. Maintenance required - You’re responsible for the health and maintenance of your software on the servers. If you decide to host the servers inside your organization, you need to deal with hardware maintenance as well, which requires additional money and effort.
  3. More time to set up - Unlike ‘ready-to-use’ solutions, which you can start using instantly, on-premises CMS solutions require additional configuration to set things up. This can take anywhere from a couple of hours to months, depending on the size of your site.

Final thoughts!

So, as you can see, choosing between a ‘ready-to-use’ and an on-prem CMS really comes down to the individual needs of the company. I’d say that if you only care about producing great content, then start with a ‘ready-to-use’ CMS.

On the other hand, if you care about how your website looks and feels from the beginning, then go for an on-prem solution. It will give you full customization options and you can design your site exactly the way you like!

30 November 2017Comments are off for this post.

How to store custom data in Episerver – Part II: Dynamic Data Store implementation

In Part I, I wrote about the basic usage of Episerver’s Dynamic Data Store (DDS). Now you can find out how Episerver implemented its solution of storing data and how exactly yours is stored.

Behind the Episerver code

You already know how to retrieve and update data using DDS, but what exactly happens in your database when you create a dynamic data store and save your data? To get a better view of how DDS works, let’s add one more property to PageViewsData class introduced in Part I. Let’s say we want to keep some additional notes:

public IEnumerable<string> Notes { get; set; }

When you execute this line of code for the first time:

DynamicDataStore store = typeof(PageViewsData).GetOrCreateStore();

Episerver will create a store definition which is held in 5 tables in your database:

  1. tblBigTableStoreConfig - here you can find the ID of your store (column pkId), the name of your store (column StoreName) and the name of the table in which your data is stored (column TableName). In our example, the values are Setapp.DataStore.PageViewsData and tblBigTable respectively. The store name is the full name of the class (including its namespace), but it can be changed with the StoreName parameter of the EPiServerDataStore attribute. tblBigTable is the default table that already exists in the Episerver database
  2. tblBigTable - data from simple fields is stored here
  3. tblBigTableReference - data from fields like lists and dictionaries is stored here
  4. tblBigTableStoreInfo - this is where Episerver stores information about how a class is mapped into all the columns in tblBigTable
  5. tblBigTableIdentity - here you can find a unique GUID of a store

DDS class mapping

Simple types

All simple values like int, string, float etc are stored in tblBigTable. The table contains 72 columns. A single row represents a part or the whole data from an object of a given store class.

There is a pkId column that contains Identity property data which is saved as a simple integer. Another important column is StoreName, containing the name of a store also used in tblBigTableStoreConfig. Column ItemType defines the type of an item stored in the row, in our case it is equal to:

Setapp.DataStore.PageViewsData, DDSProject, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null

Almost every other column is a column dedicated to store data for a certain property type. For example, there are 5 columns that can store boolean values and are named simply Boolean01, Boolean02, Boolean03, Boolean04, Boolean05. The same thing is implemented for types like Integer (10 columns), Long (5), DateTime (5), Guid (3), Float (7), Decimal (2), String (10) and Binary (5).

In the case of the PageViewsData class and its ViewsAmount property of type int, DDS will find the first available Integer column - here Integer01 - and will put the value there. For another property of that type, it will save the value in Integer02, and so on. But what if there are, let’s say, 12 integer properties in a class? Well, there is another column that can help us, called Row. In that case, every object will be stored in two rows of tblBigTable.

The first one will have the value 1 in the Row column and the first 10 integer property values (columns Integer01 to Integer10). The second row will contain 2 in the Row column and the remaining two integer values which, as in the first row, will be assigned starting from Integer01. The same procedure is repeated for the other simple (non-collection) types.
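To make the row-splitting rule concrete, here is a small, self-contained sketch (plain C#, not Episerver code) that computes where the i-th unindexed integer property of a class would land, assuming the 10 Integer columns per row described above:

```csharp
using System;

public static class ColumnMapping
{
    // Illustrative only: with 10 IntegerXX columns per row, the i-th
    // (0-based) unindexed integer property lands in row (i / 10) + 1,
    // column Integer{(i % 10) + 1}. Indexed properties are ignored here,
    // since they go to the separate Indexed_IntegerXX columns.
    public static string MapIntegerProperty(int propertyIndex)
    {
        int row = propertyIndex / 10 + 1;
        int column = propertyIndex % 10 + 1;
        return $"Row {row}, Integer{column:D2}";
    }

    public static void Main()
    {
        Console.WriteLine(MapIntegerProperty(0));  // the 1st integer property
        Console.WriteLine(MapIntegerProperty(11)); // the 12th integer property
    }
}
```

So the 12th integer property spills into the second row, landing in Integer02 there, exactly as described above.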

But wait, there’s more! Remember we set up an index on PageId property? There are also special columns in tblBigTable on which an index is created. So PageId, instead of being put into Integer02 column, will be stored in Indexed_Integer01! Just like in unindexed columns, there are many more columns for each simple type, using the same naming convention (increasing the last number in a name) so you’ll get Indexed_String01, Indexed_String02 and so on.

Collection types

All collections like IEnumerable<int> are stored in tblBigTableReference. The pkId column links each row back to the stored item. PropertyName in our case would be Notes. The CollectionType column defines the type of the collection; in our example it’s:

System.Collections.Generic.List`1[[System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]], mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089

and ElementType indicates the type of a single element so here you’d get:

System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089

The Index column stores the position of an element within the collection. The collection can be a simple list or an array, but it can also be a bit more complicated - for example, you can store a Dictionary. To support this, the table also contains the IsKey column, a boolean indicating whether a given row stores the key or the value of a given KeyValuePair.

Similarly to tblBigTable, tblBigTableReference contains a column for each simple type that can be stored, so you get BooleanValue, IntegerValue, StringValue etc.

If you store an object like this:

var example = new PageViewsData
{
    PageId = 1,
    ViewsAmount = 10,
    Notes = new List<string> { "Note 1", "Note 2" }
};

store.Save(example);

tblBigTableReference will contain three rows. The first one represents the definition of the list itself, filling only pkId, PropertyName and CollectionType with the same data as the other two rows, but with a NULL value in ElementType and -1 in the Index column. The other two rows will additionally have their StringValue column set to Note 1 and Note 2 respectively.

Storing DDS definition

Episerver will not calculate a property mapping on every call. This information is saved in tblBigTableStoreInfo on a store creation. The table assigns a certain property (PropertyName column) to a column in tblBigTable (ColumnName column) and its rows number (ColumnRowIndex). It also contains additional information:

  • the type of a property (PropertyType),
  • the type of a mapping (for example 1 is for simple types, 3 is for collections),
  • if the mapping is currently in use (Active),
  • a version of a mapping (Version) - it’s incremented every time you change a definition of a class and a store definition needs to be rebuilt.

Reading data from the store in SQL

After the first use of the store, Episerver creates an SQL view which can be used to read data easily. So if you need to check some data in your database, you don’t need to go through all the tables mentioned before and join them. Instead, you can simply query a view. Its name pattern is VW_{name_of_a_store}, so an example would be VW_PageViewsData. Columns of the view correspond to a store’s class properties so in our case we have columns like Id, StoreId, ExternalId, ItemType, PageId, ViewsAmount.
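If you want to pull this data from code outside of DDS, the view can be queried like any other SQL object. A minimal ADO.NET sketch (the connection string is hypothetical, and the view name assumes the VW_{name_of_a_store} pattern above):

```csharp
using System;
using System.Data.SqlClient;

// Hypothetical connection string - point it at your Episerver database.
const string connectionString =
    "Server=.;Database=EpiserverDb;Integrated Security=true";

using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand(
    "SELECT PageId, ViewsAmount FROM VW_PageViewsData WHERE PageId = @pageId",
    connection))
{
    command.Parameters.AddWithValue("@pageId", 1);
    connection.Open();

    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            Console.WriteLine(
                $"Page {reader["PageId"]}: {reader["ViewsAmount"]} views");
        }
    }
}
```

Because the view already joins the underlying tables for you, the query stays as simple as reading from a regular table.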

Conclusions

As you have probably noticed, by default ALL data from ALL the stores you create ends up in the same big table. That’s not very efficient: even for simple data retrievals, DDS needs to look up how to map the object and where to find its values, which results in lots of table joins and unnecessary searches through one big, big table.

But fortunately, there are ways to store custom data in Episerver in a much more efficient way! In Part III of this article, you will learn how to get your custom data store in a separate table. No more searching through objects from a different store! Coming soon!

P.S. At Setapp, we are a team of talented and experienced .NET developers working on Episerver and other CMS solutions. If you’d like to be part of our growing team, get in touch with us!

12 October 2017Comments are off for this post.

How to store custom data in Episerver – Part I: Dynamic Data Store basics

When you work with Episerver you might need to store extra data in your database. A simple example would be when you want to save statistics of your page views. You could think: “Hey! I can have another hidden property on my PageData object and store page views count in it."

Let me stop you there. Episerver offers a better and at the same time a very easy way to store some custom data which is not supposed to be seen or edited by CMS users. The solution is Dynamic Data Store (DDS).

Dynamic Data Store is an Object-Relational Mapper (ORM), which means it converts data between a database and C# code. Without further explanation, let’s see how you can actually use it in your code, going back to the example in which you’d like to store page views data.

Dynamic data store definition

using EPiServer.Data;
using EPiServer.Data.Dynamic;
using System;

namespace Setapp.DataStore
{
    [EPiServerDataStore(AutomaticallyCreateStore = true, AutomaticallyRemapStore = true)]
    public class PageViewsData
    {
        public Identity Id { get; set; }

        [EPiServerDataIndex]
        public int PageId { get; set; }

        public int ViewsAmount { get; set; }
    }
}

The class represents a model of data you want to store. The class must have:

    • the EPiServerDataStore attribute,
    • a property of type Identity from the EPiServer.Data namespace, which also needs to be named Id,
    • other custom properties which will be saved in the database (in this example you will need a property to store the ID of a page and a property to store the number of views that the page has).

Episerver documentation doesn’t list all the types you can actually use. However, here’s what it does include:

    • System.Byte
    • System.Int16
    • System.Int32
    • System.Int64
    • System.Enum
    • System.Single
    • System.Double
    • System.DateTime
    • System.String
    • System.Char
    • System.Boolean
    • System.Guid
    • EPiServer.Data.Identity
    • System.IEnumerable
    • System.IDictionary

But you can also use properties of other types, for example:

    • EPiServer.Core.ContentReference,
    • EPiServer.Core.IContent.

So why are we using int instead of ContentReference to store the ID of a page? ContentReference is mapped to a column of type string, so searching by it is slower than searching by an integer value.

Although you can also use IContent to store a whole object, I wouldn’t recommend it, since every property value of an IContent object is actually stored separately in the table - not very efficient.

On application start, Episerver will find the class by the attribute and create a Dynamic Data Store, which means it will map properties to table columns in a database and also save information about the mapping and the store itself in some other Episerver tables.

Thanks to setting AutomaticallyCreateStore and AutomaticallyRemapStore properties to true in EPiServerDataStore attribute, Episerver will automatically create or update the store definition if you change your class definition.

Notice that you can also set up indexes on chosen columns. In our example, you will probably search for views data by the page ID, so it’s a good idea to set a database index on that column to improve performance. You can do it by simply adding the EPiServerDataIndex attribute to the PageId property.

How to save and read Dynamic data store data

Ok, so we already have the store definition but how to use that? Let’s start with how to save some data first. In our case if a user visits a page for the first time, you want to add the first record about the page view. First, you need to get an instance of the data store. You can either get it by using an extension method:

DynamicDataStore store = typeof(PageViewsData).GetOrCreateStore();

or by using DynamicDataStoreFactory object:

dynamicDataStoreFactory.GetOrCreateStore(typeof(PageViewsData));

In the latter solution you can get dynamicDataStoreFactory in two ways: either by referencing the static DynamicDataStoreFactory.Instance or by using dependency injection in your class constructor.

What’s the difference here? Using the extension method seems to be the easiest way to go. However, using the dependency injection allows you to write code which is more testable because you can then create mocks for your stores. But that’s a different story, so, for now, let’s focus on the easier solution.
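That said, if you do go the dependency injection route later, it might look like this sketch (the PageViewsService class is hypothetical; the point is that unit tests can hand the service a mocked factory):

```csharp
using EPiServer.Data.Dynamic;

public class PageViewsService
{
    private readonly DynamicDataStoreFactory _storeFactory;

    // The factory is injected, so tests can substitute a mock
    // instead of relying on the static Instance.
    public PageViewsService(DynamicDataStoreFactory storeFactory)
    {
        _storeFactory = storeFactory;
    }

    public DynamicDataStore GetPageViewsStore()
    {
        return _storeFactory.GetOrCreateStore(typeof(PageViewsData));
    }
}
```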

Once you have your store instance and a PageData object called page, you need to create a PageViewsData object:

var viewData = new PageViewsData
{
    PageId = page.ContentLink.ID,
    ViewsAmount = 1
};

Then you can finally save the data by simply calling:

store.Save(viewData);

And that’s it! Episerver automatically fills in the ID field, so you don’t need to care about that.

But what if the page is being visited for the second time and you just want to increase the view counter for a page?

 “Read and write” approach

Given the same page object as in the previous example, you can first read the number of views. To do so, you call the generic Items method and then use LINQ queries to find objects matching your criteria.

PageViewsData pageViewData = store.Items<PageViewsData>()
    .Where(viewData => viewData.PageId == page.ContentLink.ID)
    .FirstOrDefault();

if (pageViewData == null)
{
    pageViewData = new PageViewsData
    {
        PageId = page.ContentLink.ID,
        ViewsAmount = 0
    };
}

pageViewData.ViewsAmount++;
store.Save(pageViewData);

You try to find an already existing record about the page. If there is none, you create a new object like in the previous example. Then, you increase the views amount and also just pass the object to store to save it. If the object is retrieved by the data store, it will have an Id field filled in, so when you update its properties and save it, DDS will know which record to update.

But this approach has some disadvantages. First of all, there is a risk that another request will perform the same read and write on the record in between our read and write operations. In that case the counter wouldn’t be reliable, as it would be increased only by 1 even though there were 2 requests to the same page. What you can do here is use DDS transactions. You can wrap the whole code from above in a transaction:

DynamicDataStore store = typeof(PageViewsData).GetOrCreateStore();
var provider = DataStoreProvider.CreateInstance();

provider.ExecuteTransaction(() =>
{
    store.DataStoreProvider = provider;
    //here goes the code for updating the page view
});

You can even share the DataStoreProvider between many different stores in the same transaction if you want to keep your data consistent in many data stores!
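As a sketch of that multi-store scenario (PageNotesData is a hypothetical second store class defined like PageViewsData), sharing one provider ties both stores to the same transaction:

```csharp
DynamicDataStore viewsStore = typeof(PageViewsData).GetOrCreateStore();
DynamicDataStore notesStore = typeof(PageNotesData).GetOrCreateStore(); // hypothetical second store

var provider = DataStoreProvider.CreateInstance();

provider.ExecuteTransaction(() =>
{
    // Both stores use the same provider, so their writes
    // commit or roll back together.
    viewsStore.DataStoreProvider = provider;
    notesStore.DataStoreProvider = provider;

    // ...save or update items in both stores here...
});
```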

“Just update” approach

In the previous example, you base the new value on the old one. But if you already know what the new value should be, there is a better way to update store items. The previous approach works fine as long as you don’t need to update many records at the same time. If you do, then in the first step all the objects need to be loaded into RAM.

If you work with big tables, that can get problematic and it can also slow down the whole operation a lot. So instead you can simply tell the store which records to update and what values to change!

So let’s say we just want to set the views’ amount to 10:

store.Update<PageViewsData>()
    .Where(viewData => viewData.PageId == page.ContentLink.ID)
    .Set(viewData => viewData.ViewsAmount, 10)
    .Execute();

In this case all rows matching the Where query will be updated with the new property values set in the Set function. And it’s all translated into one SQL update query, so it’s much faster and also transactional! Much nicer, huh?

Conclusions

The implemented Dynamic Data Store solution is certainly very flexible, since a store can be automatically rebuilt on most class changes, like adding a new property. It’s good for storing a small number of objects and it’s very fast to implement.

Unfortunately, the Episerver solution has many serious drawbacks. You can learn more about that and get information about implementation details in Part II of this article. Coming very soon!

 


 

OUR OFFICE

Wojskowa 6, 60-792 Poznań, Poland
+48 506 798 998
office@setapp.pl

OUR OFFICES

POL: Wojskowa 6, 60-792 Poznań, Poland
+48 506 798 998
office@setapp.pl

ISR: 220 Hertzel Street, 7630003 Israel

COMPANY DATA

Setapp Sp. z o.o.
VAT ID: PL7781465185
REGON: 301183743
KRS: 0000334616

PRIVACY POLICY


©2020 Setapp. All rights reserved.