Life with Adobe Launch 1/3: The rocky history of Adobe Tag Managers
Shining at the fancy things while failing at scaling
As impressive as Adobe is at Analytics, its history with Tag Management Systems is equally inglorious. Launch is the 3rd TMS in 6 years*. After the failed “Adobe Tag Manager”, Adobe “DTM” did the fancy, unimportant things well, but choked at what matters most, especially for large enterprises: scale. Launch still has many of the same problems, but it now offers solutions.
In August 2019, I took on a new client. It was not a completely new stint, because I had already worked for them in the past (2013–15) as a consultant with Unic. Back then, that client was one of the many adopting a Tag Management System (TMS). Since innovating at large companies feels like lifting a rhino, there was not much energy left to also consider a sensible TMS after having accomplished the revolution of moving from Webtrends to Adobe Analytics. Just having any TMS would already mean so much more flexibility! So Adobe’s “Dynamic Tag Management” (DTM) was the choice: it was free, it was by Adobe, it required no complicated purchase approval process, and people saw Tag Management Systems merely as a way to publish more and more trackers without having to wait half a year for IT releases. They were not the only client to take this decision. I warned everybody about DTM back then because (unlike most Adobe Consultants, who lived in their Adobe-only tunnel) I knew what other tools felt like (mostly Google Tag Manager and Tealium). My relationship with an Adobe Sales Consultant went sour because I refused to recommend DTM.
Adobe DTM did the fancy but unimportant stuff well, yet was a nightmare to scale
DTM shone in sales presentations because it had a simple built-in approval workflow, and folks loved its fancy rules logic. Everything seemed possible with just a few clicks and no developer. Adobe showed rules like these in their demos: “If the user is on an iPad, has seen at least 4 pageviews before, and has a cookie named ‘salad’ with a value of ‘cucumber’, then trigger xyz”. Nice. But these things are secondary.
When you seriously started to work with DTM, you were amazed at how clunky and redundant it felt and how terribly it scaled. It was okay if you had to maintain only a single website, but Adobe clients are usually large corporate monsters with many platforms and thus need scalability more than anything else. That means:
Common, enforceable standards and data collection logic across all your sites and apps with near-zero redundancy.
DTM was miserable at this:
- There was no support for a Data Layer, with Adobe claiming that this was not necessary as DTM could work with any Data Layer — which merely demonstrated that Adobe did not understand what a Data Layer really was. They still did not get it until very recently, seeing a Data Layer as nothing but this one static JS object you insert on every page, and not as a transportation layer for any type of data collection and distribution.
- You really want the same thing on two or more sites? Get ready to copy-paste! Another eVar should be tracked with all Event Hits? There is an improved method to clean URLs from PII? We want tool X to run on all our sites? Go change that manually in all your 15 DTM properties (and in the case of the eVars, in all your Event rules). And if you make a typo, start over again. And then spend the rest of the week testing. And then realize that you forgot to update that one snippet in property 6.
- DTM had all kinds of shortcomings that made scaling impossible. E.g. changing the name of a Data Element (a bit like a “variable” in GTM) meant you needed to manually change all references to this element everywhere in your code, your tags and rules. And of course, finding those pieces of code was a nightmare in itself because you had to open each rule, scroll all the way down, then open the “custom code” section, then remember if your code was in the “HTML” / “Synchronous JS” or “Asynchronous JS” section and then open that. And now do this for 50 rules or more in 15 properties.
- No wonder data quality suffers in such a setup. Entire sites are left out of tracking upgrades because it is just too tedious to include them all. Innovation is quick in the beginning, because the fancy, unimportant stuff that DTM (and Launch) do well makes that possible. In the long run, however, your innovation chokes, and your tech debt reaches uncharted heights.
- So the only halfway scalable approach was to mostly ignore the interface-based features (e.g. Data Elements) and write your own scalable logic in one large script that you included on all pages. DTM was then reduced to a container for that main script and some Event triggers. Or — even worse, but common — ALL websites were put into one super-large property, and the website-specific stuff was then handled with complicated “not on domain xy.com” conditions. A recipe for bugs and resignation letters from Digital Analysts.
- Page Speed: Back then, the official recommendation by Adobe was to use a synchronous render-blocking script in the head to load DTM (which means the web page stays white until DTM’s script has been downloaded and executed), even when you did not need any render-blocking technologies (e.g. no AB Testing). With all the unscalable monstrosity now in your DTM library and lots of that needing to load on “top of page”, this rightfully caused escalations by numerous IT departments.
- In the same area, Event Listeners for clicks needed to download the unminified code (including comments!) from an Adobe CDN before they parsed it (via eval) and then executed it. It is hard to think of a less efficient click-handler logic, and unfortunately, Launch has not changed much here. So if you want to track clicks on links that open a page in the same window, you have to wait until that download has happened. That in turn means that usually 50% or more of your clicks are lost, unless you resort to creative but technically awful workarounds like loading the code at the top of the page already (and thus further bloating the blocking code) or delaying clicks by several seconds, which often destroys website functionality. And no, using s.Beacon does not help, because the beacon needs to get sent first, and that can only happen after the JS file has been downloaded from the CDN and executed.
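To illustrate why the download-then-eval approach loses clicks: a listener that is already part of the initial page code can hand its payload to a fire-and-forget transport before navigation happens. The sketch below is purely illustrative (the function names and the `/collect` endpoint are made up); the sender is injected, so in a browser it could be `navigator.sendBeacon`:

```javascript
// Hypothetical self-hosted click tracker. Because the listener ships
// with the page, there is no CDN round-trip between the click and the
// beacon. `send(url, body)` is injected so the transport can be
// swapped or stubbed; in a browser it could be navigator.sendBeacon.
function createClickTracker(send) {
  return function onClick(event) {
    // Find the clicked link, if any (closest() exists on DOM elements).
    var link = event.target && event.target.closest
      ? event.target.closest('a')
      : null;
    if (!link) return;
    // Fire-and-forget: the payload is handed off before navigation,
    // so the hit survives the page unload.
    send('/collect', JSON.stringify({
      event: 'link_click',
      href: link.href,
      text: (link.textContent || '').trim()
    }));
  };
}

// In a real page you would wire it up roughly like this (browser only):
// document.addEventListener('click',
//   createClickTracker(navigator.sendBeacon.bind(navigator)), true);
```

The point is not the transport but the timing: nothing needs to be fetched between the click and the send, so the hit gets out before the page unloads. That is exactly the step a lazy CDN download delays.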
Let’s remember, DTM (aka “Satellite, acquired by Adobe” ;)) was already the second Tag Management System by Adobe within just a few years. Before that, Adobe had tried to build their own “Adobe Tag Manager”, which quickly had to be upgraded to “version 2”. And not long after (in 2013), there was DTM. So if you wanted to be ahead of the game and use Adobe’s newest products (i.e. become an Adobe guinea pig), you had to switch to a different Tag Management System three times in less than five years! An insane load on your Tech and Analytics teams, and millions of dollars wasted which you could not invoice to Adobe for delivering unfinished products.
Then, in 2018, yet another TMS came out with Adobe Launch (I shun the official name because it sounds ridiculous). Tag Management System number 3* since 2013 (see also Jim Gordon’s history of Adobe Tag Management Systems). Adobe sold the story that migrating from DTM to Launch was as easy as clicking a couple of links. It was not. That client tried it and pushed their Launch migration through within just a couple of months. Half a year of seemingly endless and severe tracking issues and data losses ensued.
When I joined the client, they were just about to complete this migration, and I got my first glimpse of Adobe Launch. It left me puzzled. It was an upgrade over DTM. It was much leaner, it had support for APIs and Extensions built by non-Adobe folks, but it still had many of those scale-preventing DTM diseases and architectural flaws inside:
- The official recommendation was still to include it in the head with a blocking script (and the client implemented it that way)
- It still has no built-in support for a Data Layer — with an Extension, that is easier now though
- Changing names of Data Elements still means changing code everywhere
- Without Custom Built Extensions (see part 2), it still scales poorly because you still have to copy paste things in every property or jam everything into one.
- The ill-fated logic to load scripts for rules (including click listeners) from a CDN first before parsing and executing them (=lost clicks, see above) has not changed.
- Launch wanted to be modern. Modern means asynchronous, but Launch overdid it, so we had numerous issues where Actions ran out of the order that was set in the interface. This led to unexpected effects and broken data, especially if two rules fired right after each other in response to different events (e.g. an Event that fires right after a Pageview). This is still only halfway solved — you can now set an additional checkbox in the property settings to execute Actions in order (why is that an extra option?), but then they still time out after 2 seconds (the limit can be increased). 2 seconds is not enough for 3G or otherwise slow networks, as we saw when testing a new website recently. A partner agency of mine had to build an entire extension just to enforce in-order execution. Otherwise, relying on something else having finished is quite a challenge in Launch (Direct Call rules help, but calling them requires yet another Custom Code snippet that first needs to be downloaded from the CDN…).
- The interface still leaves much to be desired. It takes too many clicks to get to the actual code, and my most frequent click is confirming that I want to discard changes even though I did not change anything. Finding out WHO changed WHAT and WHEN is critical if you work in a TMS with a team, but near-impossible in Launch. Inline documentation of releases is not possible (at least a text field for comments summarizing changes would be helpful), and many text fields are way too short, especially rule names, so I often lose orientation when rules are named similarly.
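For readers wondering what “execute Actions in order, with a timeout” amounts to, here is a minimal sketch. It assumes nothing about Launch’s actual internals, and all names are made up: each action is awaited before the next one starts, and a per-action timeout rejects the chain, much like the 2-second cap described above:

```javascript
// Illustrative only (not Launch's implementation): run asynchronous
// rule actions strictly in sequence. Each action is a function that
// returns a Promise. If any single action takes longer than timeoutMs,
// the whole chain rejects, comparable to Launch's per-action timeout.
function runActionsInOrder(actions, timeoutMs) {
  return actions.reduce(function (chain, action) {
    return chain.then(function () {
      // The timeout clock starts only once the previous action is done.
      return Promise.race([
        Promise.resolve().then(action),
        new Promise(function (ignore, reject) {
          setTimeout(function () {
            reject(new Error('action timed out after ' + timeoutMs + 'ms'));
          }, timeoutMs);
        })
      ]);
    });
  }, Promise.resolve());
}
```

The trade-off this makes visible: strict ordering means one slow action (a tag vendor’s script on a 3G connection, say) blocks everything behind it, which is why a timeout exists at all, and why a too-short timeout silently drops the remaining actions.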
Don’t get me wrong: There are lots of really good improvements in Launch (e.g. any number of DEV environments, the API-based approach, and of course the fancy, but unimportant rule and data element logic etc.), and I will write about Custom Launch Extensions as the way to scale in part 2. But I still feel like Adobe and I disagree on how to handle Tag Management.
So what do other systems do better?
Let’s look at Tealium iQ, the Tag Management System of Tealium. Tealium stinks at fancy things like the advanced rule logic I mentioned earlier. You also mostly have to program Event Listeners (e.g. for click tracking) yourself or use outdated “jQuery Listeners” (seriously!), but with some minor JS know-how, writing an Event Listener is a piece of cake nowadays. Tealium in general also has a steeper initial learning curve, because it abstracts more. Innovation in the beginning is thus slower, and you run into outdated Tag templates and Extension types that do not work anymore but are still in the interface, etc.
Therefore, Tealium is better at something essential which is much, much more important: it allows for a scalable data collection architecture where one or more central company standards can be inherited automatically by as many profiles (= Launch properties) as you like: e.g. load rules, definitions of Data Layer variables, tag configurations (e.g. “map Data Layer variable X to eVar Y” on any of our websites), or Extensions (in Tealium’s world, those are operations that enrich or transform the Data Layer provided by the shop, plus other custom scripts). Tealium is a system where everything can and should be built on top of an Event-Driven Data Layer, so you learn to think “Data Layer first” instead of “which script do I want to execute”, and “which data do I want to make available for distribution” instead of “which eVar do I need to set”. That is difficult to understand because it is more abstract, but it is essential.
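To make the “Event-Driven Data Layer as a transportation layer” idea concrete, here is a minimal sketch. It is illustrative only, not Tealium’s or Launch’s API: the site pushes events describing what happened, and subscribers (tags, for example) react to them, including events that occurred before the subscriber loaded:

```javascript
// Minimal event-driven data layer (all names are illustrative).
// The site describes WHAT happened; each subscriber decides what to
// do with it. This is the opposite of a single static JS object that
// tags poll once per page.
function createDataLayer() {
  var events = [];    // history of everything pushed so far
  var listeners = []; // subscriber callbacks

  return {
    push: function (event) {
      events.push(event);
      listeners.forEach(function (fn) { fn(event); });
    },
    subscribe: function (fn) {
      // Replay history first, so a late-loading tag does not miss
      // events (e.g. the pageview) that fired before it arrived.
      events.forEach(fn);
      listeners.push(fn);
    }
  };
}
```

The replay-on-subscribe detail is the part that makes the “transportation layer” framing click: tags and the TMS library can load whenever they load, and the data still reaches them in order.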
Google Tag Manager 360 has “zones” to make working on multiple platforms with one standard easier. It also offers several API-based community tools, so it is at least easier to copy things from one container to another. It is superb at code consistency checks, and you can rename variables as often as you want without fearing that anything will break anywhere. And you even see where a variable is used, instead of searching the whole code for it like in Launch. In short, GTM gives you a modularized, encapsulated experience where the Data Layer and Data Layer Events are the central pieces, not something you first have to add to the TMS via some Extension. GTM for an Adobe Analytics implementation, however? I would not want to try that.
But enough shaming of Adobe Launch. They have a nice, friendly, and committed team. That team had to release Launch much sooner than they wanted to, and they had to make Launch partially backwards-compatible with DTM, which is where many of the issues come from. Launch is definitely useful and a huge upgrade over DTM, which I outright hated (I was so happy not having to use DTM anymore after leaving Unic in 2015). Launch is nicely integrated with the Adobe Experience Cloud tool stack, and with your own Custom Extensions, you can make it scalable in a very professional manner. That takes some skills, time, and money initially, but you need to invest that if you are a large company and want to avoid an era of tech debt and innovation freeze. Part II will explain more…
* regarding “3 TMSs in 6 years”: @cataLuc (Lukáš Čech) rightfully mentioned that V1 of Adobe Tag Manager was so different from V2 that it was actually like a separate tool. So we are at 4 TMSs in maybe not 6, but 7 or 8 years…