Content Security Policies need to stop being a money dump

Let’s start with one of the many examples. Somebody from Analytics writes an email to IT:

This email is great because it contains pretty much all the issues I have experienced with Content Security Policies in the last year.

I am not alone…

I estimate that I alone spent at least a full week (42 hours) last year on Content Security Policy issues. Add to that:

  • the time my clients needed to read my messages/listen to me and then communicate this forward to Development / IT / agency teams
  • the time these teams needed to read, understand it and then fix it
  • the time needed to explain to all kinds of folks why they have missing data from x to y and why that data won’t come back
  • the money lost because the data is not there, which leads to bad decisions
  • all the time that we did not have for the really beneficial things we actually wanted to do
  • etc.

And you end up with a looot of money being essentially dumped! This “escalation, fixing and patching” money is 100% wasted, because it is 100% unnecessary, and the client had zero benefit from the hours I billed to them because of it.

Don’t get me wrong:

Content Security Policies do have their benefits, but they need to be implemented in a professional manner.

What are Content Security Policies?

Content Security Policies can be useful for IT security. They prevent requests from leaving a user’s browser on your website unless the request’s destination passes a whitelist. So e.g. if your Analytics tracking script needs to be loaded from www.google-analytics.com, and www.google-analytics.com is not on the whitelist, there will be no tracking.
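For illustration, a simplified, hypothetical policy that allows the Google Analytics example above could look like this (a real policy is sent as one single-line HTTP response header; it is wrapped here for readability):

```http
Content-Security-Policy:
    default-src 'self';
    script-src 'self' https://www.google-analytics.com;
    connect-src 'self' https://www.google-analytics.com;
    img-src 'self' https://www.google-analytics.com
```

Any request whose destination is not covered by the matching directive is then blocked by the browser.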

Having to file a JIRA ticket for a CSP issue points to the root cause of most CSP issues: those who implement the CSPs are not aware of their impact.

If you work with Online Marketing tools, especially Display Ads, the list of domains you need to whitelist for “full functionality” can be quite long, and it is hard to keep an overview of which ones are actually needed (similar to Cookies), because the ad networks tend to change their domains from time to time. In any case, the good thing about CSPs is that they can prevent e.g. hijacked third-party scripts from injecting other scripts from malicious domains that could install harmful software, send data from your website to places you don’t want, etc. So all in all, CSPs are a reasonable thing. But…

Go-live, blindfolded

Some of my clients have no issues with CSPs. They don’t need them or have a proper process. Others keep having issues again and again. Some learn, and the issues have been reduced lately. Others seemingly don’t learn. Every month, another CSP issue. Most of the issues are minor ones luckily. One LinkedIn tracker being blocked is not a big issue usually if it is quickly fixed. Some of the issues however are devastating:

Going live with Analytics without really going live, because the CSP is still blocking all your Analytics (even after warnings that this needs to change)

Just the other week, a client went live with a fresh new website and Adobe Analytics on top of it. During testing on the staging server, we discovered that the Content Security Policy did not allow the Analytics tracking script (AppMeasurement.min.js) to be loaded, so we were unable to test the implementation. We raised the alarm. Too late: they went live with the untested Analytics implementation. But in reality they did not go live with Analytics at all, because they did not change their CSP either. So all of Analytics was blocked anyway, and the impact of the costly launch campaigns for the new site could not be evaluated.

Other examples:

  • An IT security consultancy did an audit and concluded that the client’s website needed a CSP. The client’s development agency implemented one without talking to anybody, since (so practical!) this does not even need a release: it is just a central setting somewhere on the server. After the CSP was switched on, weeks of escalations and data losses ensued. Weekends broken, holidays interrupted.
  • A client wanted to test a new NPS tracking tool. We had the implementation done, but could not go live with it for months because it took so long to get to the IT security person who was able to do the CSP changes. I also could not even test it on a staging website or in a Tag Management System’s Preview mode in my local browser without applying some really tedious Fiddler tricks. Sometimes even such tricks are not an option, e.g. with the SEO team’s PoC mentioned in the initial screenshot.
  • A new website was launched with all kinds of marketing money thrown around. All kinds of tracking pixels failed. It took weeks to change them, and some still did not fire after months.
  • An international player tested its Content Security Policy only from Switzerland. Some ad companies (e.g. Google Remarketing) use different domains (e.g. google.fr) for their trackers depending on the country you are coming from, so any remarketing was blocked for visitors outside of Switzerland.

Why not block your entire Google Tag Manager and thus basically everything?

What strikes me most:

  • How is it possible that this happens so frequently when the most glaring issues are so easy to spot, and these policies are usually implemented by tech-savvy folks? Just open your browser console, deactivate your Ad Blocker, and look at all the red lines your browser spits out. It even explicitly tells you that it is red due to the Content Security Policy directive (see examples above).
  • Why does it take so long to change a failing CSP, especially when that does NOT require a release? The lack of urgency in some developers and IT security folks (in data-immature organizations) is breathtaking. Just the other week, we reminded a developer that this one Twitter Pixel, which would tell Marketing whether the tens of thousands of ad dollars spent on Twitter ads are good for anything, was still blocked by the CSP (for three months now). His answer: “Ah yes, I was going to do that tomorrow.” In data-mature places, a failing CSP is a picket-service emergency case. In data-immature orgs, you need to file a Jira ticket and then escalate it three times until anything happens. And then weeks later, people ask you why there is no data for that week.
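The browser sanity check above can even be partly automated: a CSP is just a semicolon-separated header, so you can split it into directives and see what is whitelisted where. A minimal Python sketch (the example header below is made up for illustration; real policies can also contain nonces, hashes and wildcards):

```python
def parse_csp(header: str) -> dict[str, list[str]]:
    """Split a Content-Security-Policy header into {directive: [sources]}."""
    directives = {}
    for part in header.split(";"):
        tokens = part.strip().split()
        if tokens:
            # First token is the directive name, the rest are allowed sources.
            directives[tokens[0]] = tokens[1:]
    return directives

# Hypothetical policy: which sources may scripts be loaded from?
csp = ("default-src 'self'; "
       "script-src 'self' https://www.google-analytics.com; "
       "img-src *")
print(parse_csp(csp)["script-src"])
# → ["'self'", 'https://www.google-analytics.com']
```

Running this against the header your production server actually sends (e.g. fetched with `curl -I`) gives you the central whitelist overview that step 1 below asks for.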

What should be done differently

  1. Before implementing a CSP, do the browser sanity check (see “What strikes me most”) to collect the domains to be whitelisted in a central list accessible to all relevant stakeholders at your company.
  2. Show that list to Marketing and Analytics and give them some time to check (sometimes agency help is required) which of those domains are really needed (and for what), and whether others need to be added. Not all scripts are loaded on all pages! Some only load on the order confirmation page, others only on product pages, others only after certain interactions (add to cart), etc. So opening the console on the homepage will not be enough; you need to talk to people!
  3. Implement a fast process to allow for CSP changes within a maximum of a day or a week (depending on what is common at your place) so Marketing, Product Management or Analytics are not blocked for weeks or months just because of 10 characters that need to be added to a CSP.
  4. CSP changes that break existing things need to be part of the emergency picket service. Lost data won’t come back and creates all kinds of issues even months after the loss: e.g. you cannot compare certain weeks/months to each other, and you have to explain in graphs why the data for this month is incomplete, etc.
  5. Allow people from your organization to switch off/circumvent CSPs, e.g. on staging systems, or with a certain setting/password/cookie/parameter/login even on production (production is better because, especially in tracking, not everything works the same way if your domain is different). Otherwise people like me cannot test new tools or changes that require requests to not-yet-whitelisted domains in their local browsers.
  6. Define a Reporting Endpoint in your CSP headers so you become aware of issues quickly and automatically. As so often, Dr. Urs Boller had a valuable addition: within the CSP header, you can set a reporting API endpoint. This could be your Server-Side Tag Management System’s URL endpoint (e.g. the Tealium Collect Server URL), which would then forward the data to Analytics for reporting, or anything that IT teams like to work with (a Google Cloud Function, a Slack channel, etc.). That means every issue with the CSP is reported to a defined API endpoint, where you can set up rules to watch for, e.g., Analytics tags. This was really useful for Urs, because they detected issues in CSP changes really early (before they were put into production). Read more here.
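To make the reporting-endpoint idea in step 6 concrete: browsers POST a JSON violation report to the URL you configure via the `report-uri` (or newer `report-to`) directive. The following is a minimal, hypothetical Python receiver; the port, the field handling and the “forward to a webhook” idea are my own illustrative assumptions, not from any specific vendor setup:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def summarize_report(body: bytes) -> str:
    """Turn a browser's CSP violation report (JSON bytes) into a one-line summary."""
    report = json.loads(body).get("csp-report", {})
    return (f"{report.get('violated-directive', '?')} blocked "
            f"{report.get('blocked-uri', '?')}")

class CspReportHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        # In practice you would forward this to Analytics, a Cloud Function
        # or a Slack webhook instead of just printing it.
        print(summarize_report(self.rfile.read(length)))
        self.send_response(204)  # report received, no content to return
        self.end_headers()

# To run the receiver locally (blocks forever):
# HTTPServer(("", 8080), CspReportHandler).serve_forever()
```

With e.g. `Content-Security-Policy: default-src 'self'; report-uri https://your-endpoint.example/csp-reports`, every blocked tag shows up at your endpoint instead of silently dying in users’ browser consoles.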

That’s all, folks. CSPs seemingly became popular in Europe about 1–2 years ago. Now they need to grow up.


Digital Analytics Expert. Owner of dim28.ch. Creator of the Adobe Analytics Component Manager for Google Sheets: https://bit.ly/component-manager

Lukas Oldenburg
