Kentico Connections: Storing and Displaying Data from SharePoint Online

By Cathy Dew on January 25, 2018

Since the rise of SharePoint Online, we are seeing more and more clients who need to collect information residing in a SharePoint Online Site Collection or Subsite and bring it into Kentico. The data collection is complicated by SharePoint's ability to create ad-hoc metadata. While internally well structured, document library or list metadata does not lend itself to simple collection and discovery outside of SharePoint. You need a plan in place and a well-defined metadata model to build a robust connection between Kentico and SharePoint Online.


After you collect your data, how you store it and display it depends on several factors

In the first part of this series, we discussed how to use a RESTful API to collect metadata and URL information for a set of files stored in a SharePoint Online Library. Understanding how to collect data from SharePoint using RESTful API communication is only half the battle; today we'll discuss what to do with that data now that you have it.


We’ve caught the car, what’s next?

Now that we've built an HTTP connection to the target SharePoint Online library and successfully pulled the metadata for the files, what we do with the data depends on several key questions:

  • How much will the full data set change during the scheduled refresh?
  • What type of metadata are you collecting for display?
  • What are the plans for displaying the data to our user base?
  • How often do you need to refresh the data?

These are just the top-level questions you need to answer in order to build a support structure inside Kentico for your new data set. While the answers to these questions invariably lead to more, we'll cover just the basics here to give the project a proper framing.

Custom Page Types versus Custom Tables

The foundational object in Kentico for content management is the Page Type. We've discussed different page types previously, and have even delved into Content-Only Page Types. When it comes to collecting 3rd party information, we are partial to the Content-Only page type, because we don't expect each item to have its own display page. We are really interested in displaying the full dataset, with some level of filtering, not the individual record by itself. Also, in this example, the link we construct will point back to the SharePoint Online library, so the user can grab the target file. We are not pulling the physical file into Kentico, only the metadata.

While we could build custom tables to support the new data set, using a custom content-only page type gives us built-in integration with Kentico's CMS and search capabilities. The immediate downside of this design pathway is that the content tree within the Admin interface can get bogged down with the number of items in the tree branch. Since you will most likely not be editing these page types directly, that shouldn't be a huge issue except for those one-off times you need to update or remove data manually.

Custom tables are more appropriate for holding the distinct values of the metadata, so you can more easily build drop-downs for ad-hoc display filters.
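To illustrate the idea, here is a minimal sketch (in Python, with hypothetical field names) of extracting the distinct values of one metadata column from a collected record set, ready to load into a lookup custom table that feeds a drop-down filter:

```python
def distinct_values(records, field):
    """Collect the distinct, non-empty values of one metadata field,
    sorted for stable drop-down ordering."""
    return sorted({r[field] for r in records if r.get(field)})

# Hypothetical records pulled from a SharePoint library
records = [
    {"Title": "Q1 Report", "Department": "Finance"},
    {"Title": "Onboarding", "Department": "HR"},
    {"Title": "Q2 Report", "Department": "Finance"},
]

distinct_values(records, "Department")  # → ["Finance", "HR"]
```

Note that this only surfaces values currently assigned in the data set; as discussed below, a dedicated lookup sync against the source is usually the better long-term approach.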


Updating your lookup data

One of the challenges with 3rd party connections, and with SharePoint in particular, is tracking and managing lookup data and relationships. One approach is to try to mimic the relationships with associated relational tables. Another is to store the target label/value directly alongside the target data set. One benefit of the latter is the ability to target the columns more easily for search, if you need to widen the search net.

But you still want to provide filters so users can quickly find the data they are interested in, so we generally give the lookup metadata its own sync loop that updates a set of custom tables. We then hook these custom tables up to our filter options. One advantage of this is that we can manually update the lookup tables using Kentico's built-in table editing features, if needed. Additionally, this design pattern ensures we have the complete list of available choices, rather than only showing what's currently assigned within the full data set. It could confuse users to not find an expected metadata value in the filters/lookups.

The trick with setting up your own lookup tables is creating a function to pull all the target lookup data from SharePoint or your 3rd party source. That's why we stressed earlier that you'll need a solid metadata model, so you know where the source data will reside.
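For a SharePoint choice column, the allowed values live on the field definition itself, reachable through the same REST API used in part one. A sketch in Python of parsing that response (the list name `Documents` and field name `Department` are illustrative; the live HTTP call is shown only as a comment):

```python
import json

def parse_choice_options(field_json):
    """Extract the allowed choice values from a SharePoint REST field
    response (verbose OData shape). Returns [] for non-choice fields."""
    choices = field_json.get("d", {}).get("Choices", {})
    return choices.get("results", [])

# The live call (not executed here) would look like:
#   GET {site}/_api/web/lists/getbytitle('Documents')/fields/getbytitle('Department')
#   Accept: application/json;odata=verbose

sample = json.loads("""{"d": {"Title": "Department",
    "Choices": {"results": ["Finance", "HR", "Legal"]}}}""")
parse_choice_options(sample)  # → ["Finance", "HR", "Legal"]
```

Running this on each lookup field during the sync gives you the complete choice list to push into your custom tables, independent of what is currently assigned to any file.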


Updating your local data

After collecting the source data set, there are two basic design patterns for updating your local copy: rip and replace, or update and insert. If you are using only custom tables, purging the target table and then bulk inserting is probably the fastest method available. However, if you are using custom page types to store the data, this process is not recommended. We've found the following design pattern more favorable and scalable.

Once we have collected the source data set, we generate a process GUID to mark this update session with a unique ID. And since we are dealing with libraries, folders, and file names, the full target URL of the file becomes our unique record identifier. (You can also use the file GUID assigned by SharePoint within the JSON object.) While iterating through our source data set, we use the URL to look up the target record. If found, we update all the metadata collected from the source record and refresh the session GUID. If not found, we insert the new file and its corresponding metadata along with the session GUID.

After updating or inserting all the source data, we then search our target page type branch for all records that do not carry the current session GUID. These records are the ones we did not touch during the current session, and they can therefore be flagged for removal.
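The upsert-plus-orphan-scan described above can be sketched as follows (Python, with an in-memory dict standing in for the page type branch; the URLs and field names are illustrative):

```python
import uuid

def sync(local, source_records):
    """Upsert source records into `local` (keyed by file URL) and return
    the URLs of records not touched this session (orphan candidates)."""
    session = str(uuid.uuid4())  # unique ID marking this update session
    for rec in source_records:
        url = rec["url"]                  # full target URL is the unique key
        row = local.setdefault(url, {})   # insert when not found
        row.update(rec)                   # refresh metadata when found
        row["session"] = session          # stamp with the session GUID
    # Anything still carrying an older session GUID was absent from the source
    return [u for u, r in local.items() if r["session"] != session]

local = {"https://example.sharepoint.com/a.docx":
             {"url": "https://example.sharepoint.com/a.docx", "session": "old"}}
incoming = [{"url": "https://example.sharepoint.com/b.docx", "Title": "New file"}]
sync(local, incoming)  # → ["https://example.sharepoint.com/a.docx"]
```

In the real connector, `local` would be the Kentico page type branch queried by URL, but the control flow is the same.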

We use a two-step process for removal flags: warning and orphan. At warning status, the file no longer appears in the public list. At orphan status, the file is removed. This gives us the ability to run reports after updates, identifying the files that might be removed on the next update.
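As a sketch, the two-step flag progression looks like this (Python; the status values and field name `removal` are illustrative, continuing the in-memory stand-in for the page type branch):

```python
# Removal status progression applied after each sync: records missed once
# get "warning" (hidden from the public list); missed twice, "orphan"
# (deleted). A record with no flag is live.
NEXT_STATUS = {None: "warning", "warning": "orphan"}

def flag_stale(local, stale_urls):
    """Advance the removal flag for each stale record; delete orphans."""
    for url in stale_urls:
        status = NEXT_STATUS[local[url].get("removal")]
        if status == "orphan":
            del local[url]                 # second strike: remove the record
        else:
            local[url]["removal"] = status # first strike: hide it

def is_public(record):
    return record.get("removal") is None   # flagged records drop off the list

local = {"a": {}, "b": {"removal": "warning"}}
flag_stale(local, ["a", "b"])
local  # → {"a": {"removal": "warning"}}  ("b" was removed as an orphan)
```

A record that reappears in the source on a later sync would simply have its flag cleared during the upsert, which is what makes the warning stage a safe buffer for reporting.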


A side-note on scheduled services

It's important to understand where you run your scheduled tasks when updating your 3rd party data. If you are running within the Kentico scheduler and your web site doesn't have a sufficient level of traffic, IIS will shut down the application after a period of inactivity and your scheduled tasks will not execute. Therefore, it's always best to run your critical tasks outside of Kentico / IIS. You can do that via the Windows Task Scheduler or via Kentico's stand-alone Scheduler service.


Displaying your data with transformations

Now that you have your 3rd party data imported into Kentico, and are using a custom page type for storage, you can use the basic Kentico Repeaters and Transformations to display your data for your users. Here your display options are driven by your data and user experience needs. You can use accordions sorted by key metadata, drop-downs to provide in-page or AJAX-style filters, or even tabbed interfaces for primary groupings.

Once you have these pieces in place, updates to your target SharePoint Online libraries will cascade down to your Kentico site and be available for your users. 2Plus2 is a Kentico Gold Partner and would love to help you get the most out of your SharePoint Online and Kentico setups. Go online to schedule a free consultation with our team or call 510-652-7700 today.

Cathy Dew
Cathy Dew – CEO + Information Architect
Cathy focuses the company on our mission – Real results. Every time. Information architect and strategist, Cathy is passionate about making software work well – the function, the feel, the result.