Multi-Tenancy with Adobe Cloud Manager - Part 1



Haroon Saifdueen
Senior Software Engineer
Manuel Galle
Senior Lead Developer at comwrap

Have you ever had to share a single environment across multiple teams or even business units? Any DevOps engineer in a large organization can tell you that sharing an Adobe Experience Manager as a Cloud Service program across multiple projects can be a pain. Every team or business unit has its own schedule, deployment cycle, and general approach to a project. This series of posts will outline some options to mitigate this headache.

It is an undeniable fact that multi-tenancy is inevitable for some organizations leveraging AEM as a Cloud Service. Architects and DevOps teams can think of many deployment techniques based on the requirements and governance in place for the different tenants. Coming up with a suitable one is not easy, especially when there is the option to create only one repository in Adobe Cloud Manager (CM). That was the case until this August.


With release 2021.8.0 of Cloud Manager, Adobe has introduced a self-service capability to create and manage multiple repositories via the Cloud Manager UI. Is that a game changer for the multi-tenant project setup in Adobe Experience Manager as a Cloud Service (AEMaaCS)? Maybe 'yes', depending on how the areas below are improved by it.

  • Independent working of multiple teams
  • Auto-triggering from upstream projects
  • CM code quality scans

Are these areas really improved by the new feature in Cloud Manager? That is hard to judge unless we know a few alternative models, so let us look at those first.

Deployment Model 1 (syncing customer-managed Git repositories)

For a while, Adobe has been suggesting a multi-branch approach, as described here, to work with multiple source Git repositories. Adobe explains how to sync customer-managed Git repositories with the CM repository by making use of Git actions and CI jobs.


Deployment Model 2 (using shared repository managers)

Those who do not choose the above solution can look at the alternative explained below. This is also a model that we have used successfully for many customers. We feel it is important to share the details here, as you may not find many articles describing a Cloud Manager setup other than the official one above. If you are less interested in the details, you can skip the explanation and continue from the Deployment Model 3 section below.

For ease of understanding, we would like to detail one example from one of our live projects. We will mention our tools of choice (GitLab CI, Nexus/GitLab package registry, etc.), but each of them can easily be replaced with an alternative. Let us pick an example project for our Customer-A. Customer-A has the tenants TEN-1 and TEN-2 in its AEM program. There is only one AEM program for Customer-A, and both tenants share that program, its environments, and its other resources. In Adobe CM there is only one repository per program, so TEN-1 and TEN-2 have to share the same repository. That being the case, how would you achieve the best multi-tenancy deployment model?

Before getting into that, be aware that most companies also keep their own Git (or similar) repositories for their projects, and code from these repositories is pushed to Adobe CM through CI/CD pipelines. We assume a similar setup for the Customer-A program. Under this program, there is a repository for TEN-1, one for TEN-2, and one each for the common configurations (one-per-AEM configurations such as the JCR Resource Resolver Factory), the third-party modules (a single repo that takes care of including vendor packages like ACS AEM Commons, Netcentric AC Tool, etc.), and finally the parent/main project, which we call the 'Skeleton'. The list can grow depending on the shared code used by the tenants. Tenant projects mostly follow the AEM archetype-based multi-module structure. A sample repository structure is sketched below.
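A rough sketch of such a repository landscape; the repository names are illustrative:

customer-a/
├── skeleton          (orchestrator; the only repo pushed to the CM repository)
├── ten-1             (tenant 1: core, ui.apps, ui.config, ui.content, all, …)
├── ten-2             (tenant 2: same archetype-based structure)
├── common-configs    (one-per-AEM configurations)
└── third-party       (inclusion of vendor packages)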



[1] Now, what is the Skeleton?

The Skeleton is an orchestrator (parent/main) project that merges all the tenant packages and common artifacts, making them ready for deployment to the CM environments through its pipeline.

So, the Skeleton will be the only repo that is pushed to the CM repository.

In most cases, the Skeleton does not contain code of its own (hence the name Skeleton), except for a few POM files that handle the orchestration. But it can also contain the dispatcher configuration of all tenants, if you prefer not to create a separate repository for it and want to avoid the mess of packaging and repackaging later during the deployment.

Now, how do we get the job done using the Skeleton? Here comes the role of a shared repository manager. We use a repository manager (Nexus, JFrog, etc.) or even GitLab package registries to store the artifacts generated from the different tenant sources, the common configuration, the third-party module, and any shared module (components, services, integrations, etc.). In AEM these artifacts are mostly content packages generated with the Maven build tool. They are later embedded in the Skeleton repo.

In the tenant and other common repos, whenever a feature branch is merged into the 'develop' branch, a GitLab pipeline job builds and deploys the content packages to the repository manager or to the GitLab registry. So TEN-1 will generate its content packages (ten1.ui.apps, ten1.ui.config, ten1.ui.content, and so on) and publish them to the repository manager/registry; such a job is sketched below. Similarly, TEN-2, common-config, and third-party will generate and publish their own content packages. During the development phase these are SNAPSHOT packages, and during a release they get an incremented version without the SNAPSHOT suffix.
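A minimal sketch of such a publish job, assuming GitLab's Maven package registry and a ci_settings.xml that points Maven's distributionManagement at it (job, stage, and file names are illustrative):

# .gitlab-ci.yml in a tenant repo, e.g. TEN-1 (illustrative sketch)
publish-packages:
  image: maven:3.8-openjdk-11
  stage: deploy
  script:
    # builds all modules and deploys the content packages (zip) and OSGi
    # bundles to the registry configured via ci_settings.xml
    - mvn -s ci_settings.xml clean deploy -DskipTests
  only:
    - develop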

Now, these tenant, common-config, and third-party packages are used as dependencies in the Skeleton. The dependent packages are embedded via the POM files of the Skeleton. Here we can even follow the same AEM archetype-based structure with an 'all' module in the Skeleton. So, in the POM file of the 'all' module, we make use of the handy filevault plugin to embed our tenant and all other common packages, as seen below.

The dependencies section of the Skeleton POM looks like this:

<!-- Customer-A dependencies -->

<dependency>
    <groupId>com.customera.aem</groupId>
    <artifactId>customera.thirdparty</artifactId>
    <version>${customera.aem.thirdParty}</version>
    <type>zip</type>
</dependency>
<dependency>
    <groupId>com.customera.aem</groupId>
    <artifactId>customera.common-configs</artifactId>
    <version>${customera.aem.commonConfigs}</version>
    <type>zip</type>
</dependency>
<!-- TEN-1 project bundle -->
<dependency>
    <groupId>com.customera.ten1</groupId>
    <artifactId>ten1.ui.apps</artifactId>
    <version>${customera.ten1}</version>
    <type>zip</type>
</dependency>
<dependency>
    <groupId>com.customera.ten1</groupId>
    <artifactId>ten1.ui.content</artifactId>
    <version>${customera.ten1}</version>
    <type>zip</type>
</dependency>
<dependency>
    <groupId>com.customera.ten1</groupId>
    <artifactId>ten1.ui.config</artifactId>
    <version>${customera.ten1}</version>
    <type>zip</type>
</dependency>
<dependency>
    <groupId>com.customera.ten1</groupId>
    <artifactId>ten1.core</artifactId>
    <version>${customera.ten1}</version>
</dependency>
<!-- TEN-2 project bundle -->
<dependency>
    <groupId>com.customera.ten2</groupId>
    <artifactId>ten2.ui.apps</artifactId>
    <version>${customera.ten2}</version>
    <type>zip</type>
</dependency>
<dependency>
    <groupId>com.customera.ten2</groupId>
    <artifactId>ten2.ui.content</artifactId>
    <version>${customera.ten2}</version>
    <type>zip</type>
</dependency>
<dependency>
    <groupId>com.customera.ten2</groupId>
    <artifactId>ten2.ui.config</artifactId>
    <version>${customera.ten2}</version>
    <type>zip</type>
</dependency>
<dependency>
    <groupId>com.customera.ten2</groupId>
    <artifactId>ten2.core</artifactId>
    <version>${customera.ten2}</version>
</dependency>
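
Declaring the dependencies alone only makes the packages available to the build; the filevault plugin in the 'all' module must also embed them so that they end up in the deployable package. A minimal sketch, assuming an illustrative target path and following the same pattern for every artifact:

<plugin>
    <groupId>org.apache.jackrabbit</groupId>
    <artifactId>filevault-package-maven-plugin</artifactId>
    <configuration>
        <embeddeds>
            <embedded>
                <groupId>com.customera.ten1</groupId>
                <artifactId>ten1.ui.apps</artifactId>
                <type>zip</type>
                <target>/apps/customera-packages/application/install</target>
            </embedded>
            <!-- ...repeated for ten1.ui.config, ten1.ui.content, ten1.core,
                 the TEN-2 packages, customera.common-configs and
                 customera.thirdparty -->
        </embeddeds>
    </configuration>
</plugin>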



[2] How is this handled for local development?

We can make use of a dedicated 'localDep' Maven profile to include all the common code and config binaries from the repository manager. By doing this, we make sure that the individual tenant teams get all the necessary common dependencies during their build and do not need access to the other repositories, which might be developed by other integrators. This way the tenant teams can work independently on their local machines without even checking out the Skeleton (main) project.

<profile>
    <id>localDep</id>
    <activation>
        <activeByDefault>false</activeByDefault>
    </activation>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.jackrabbit</groupId>
                <artifactId>filevault-package-maven-plugin</artifactId>
                <configuration>
                    …
                    <filters>
                        <filter><root>/apps/ten-1-packages/local-dependencies</root></filter>
                    </filters>
                    <embeddeds>
                        <embedded>
                            <groupId>com.customera.aem</groupId>
                            <artifactId>customera.thirdparty</artifactId>
                            <type>zip</type>
                            <target>/apps/ten-1-packages/local-dependencies/install</target>
                        </embedded>
                        <embedded>
                            <groupId>com.customera.aem</groupId>
                            <artifactId>customera.common-configs</artifactId>
                            <type>zip</type>
                            <target>/apps/ten-1-packages/local-dependencies/install</target>
                        </embedded>
                    </embeddeds>
                    …
                   
                </configuration>
            </plugin>
        </plugins>
    </build>
</profile>              
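
With this profile in place, a tenant developer can build and deploy everything to a local AEM instance in one go. Assuming the AEM archetype's standard autoInstallSinglePackage profile is present, the call would look like this:

mvn clean install -PautoInstallSinglePackage,localDep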

[3] Now, how do the deployment auto-triggers work from the tenant?

We can make use of GitLab CI multi-project pipelines, or any alternative, for this. Whenever a merge happens on the main development branch of an upstream project (TEN-1, TEN-2, common-config, etc.), a CI job triggers another job in the downstream project, which is the Skeleton/main project in this example. This makes sure that any merge to a tenant's main/development branch always pushes the latest code to the CM repo through the Skeleton/main project, because the Skeleton relies on the latest binaries from the shared registries. A sketch of such a trigger job follows.
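A minimal sketch of such a trigger job in a tenant's .gitlab-ci.yml, assuming the Skeleton lives under the illustrative project path customer-a/skeleton:

# runs after the tenant's publish job and kicks off the Skeleton pipeline
trigger-skeleton:
  stage: downstream
  trigger:
    project: customer-a/skeleton
    branch: develop
  only:
    - develop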

[4] How do you pull the artifacts in the CM pipeline?

As explained here, if the shared private repository is reachable over the internet, we can use a profile activated by the 'env.CM_BUILD' property to pull the dependencies in the CM pipeline.

<profile>
    <id>cmBuild</id>
    <activation>
        <property>
            <name>env.CM_BUILD</name>
        </property>
    </activation>
    <properties>
        …
    </properties>

    <repositories>
        <repository>
            <id>repoID</id>
            <url>repoURL</url>
        </repository>
    </repositories>

    <pluginRepositories>
        <pluginRepository>
            <id>repoID</id>
            <url>repoURL</url>
            <snapshots>
                <updatePolicy>always</updatePolicy>
            </snapshots>
        </pluginRepository>
    </pluginRepositories>
   
</profile>

One thing to notice here is that the tenant code does not get scanned during the CM pipeline; only whatever we place inside the Skeleton itself is scanned for code quality. (In CM, some of the code quality rules are now extended to at least some of the embedded packages.) We can overcome this by integrating tools like SonarQube into the GitLab pipeline as quality gates for each repository, as sketched below.
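A sketch of such a quality gate in each repository's GitLab pipeline, assuming a SonarQube server whose URL and token are provided as CI variables:

sonarqube-check:
  image: maven:3.8-openjdk-11
  stage: test
  script:
    # runs the SonarQube Maven scanner and fails the job if the quality gate fails
    - mvn verify sonar:sonar -Dsonar.host.url=$SONAR_HOST_URL -Dsonar.login=$SONAR_TOKEN -Dsonar.qualitygate.wait=true
  only:
    - merge_requests
    - develop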

That's it. This is a good example of an alternative model, and it is also our favorite solution, implemented for many customers at comwrap. But we are also enthusiastic to try out the new Git submodule-based multiple-repositories feature of Cloud Manager for our ongoing projects. That made us share some thoughts on the capabilities we are expecting in Cloud Manager.


Deployment Model 3 (using Git Submodules)

With the introduction of multiple repositories in CM, Adobe is putting forward a potential alternative to model 1, as detailed here.

As in the above deployment methods, there is a main or parent project that collects all the tenant code as Git submodules to build it into one project. The main repo is the one attached to the Adobe Cloud Manager build pipeline, and the others are just submodules holding the individual projects/tenants. The individual tenants and other upstream projects can be synchronized at their own discretion, instead of having everything in one repo and struggling to manage the merges.

An example repository structure is a parent that includes the submodule definitions for all the shared modules (one-per-AEM configs, shared components, services, integrations) and the tenant projects. The parent/main project stores references to the other sources, imports them, and recompiles them. A sketch of the wiring follows.
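A sketch of how the parent repo wires in the others, using plain Git commands; the repository URLs and names are illustrative, and Adobe's documentation describes the exact requirements Cloud Manager places on submodule references:

# inside a clone of the parent/main repository
git submodule add -b develop https://git.example.com/customer-a/ten-1.git ten-1
git submodule add -b develop https://git.example.com/customer-a/ten-2.git ten-2
git submodule add -b develop https://git.example.com/customer-a/common-configs.git common-configs
git commit -m "Add tenant and shared projects as submodules"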



However, the setup needs to be designed in such a way that there are no circular dependencies; otherwise everything would have to be consolidated into a single codebase.

Does that cover all our needs, especially concerning the points mentioned at the beginning?

Unfortunately, one drawback we noticed with this approach is that developers need access to all submodules to be able to build locally at least once. The multi-repo feature still does not solve this infrastructure requirement. This will be a major obstacle when tenant projects are worked on by different teams or even different vendors. If sharing source code is a real concern among the teams, they will have to rely on other solutions, like the one explained in [1] above.

Another area we would love to see improved is submodules directly triggering the Cloud Manager pipeline on any Git changes. As per the official documentation, the submodules must be updated to the newest commit (git submodule update --remote) prior to the deployment, which does not happen automatically. So here we are again dependent on external tools to get the auto-trigger working from upstream projects (see [3] above); one possible workaround is sketched below.
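One possible workaround is a CI job in the parent repo, triggered from the upstream projects as in [3], that bumps the submodule references and pushes them so that the CM pipeline always builds the newest commits. A sketch, with illustrative token and URL:

update-submodules:
  script:
    - git config user.name "CI Bot" && git config user.email "ci@example.com"
    - git submodule update --init --remote
    - git commit -am "Bump submodule revisions" || exit 0   # nothing to bump
    - git push "https://oauth2:${PUSH_TOKEN}@git.example.com/customer-a/main.git" HEAD:develop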

It would also be great if multiple repositories could be added to a single pipeline, with the artifacts of each repo getting built and installed. At present, this setup requires the support of external tools, as in the previous deployment model.

Even though we can implement the Git submodule setup for our projects straight away, we would still have to use other solutions to bring the complete system into action.

The table below compares and summarizes the 2nd and 3rd models concerning the above points and gives us an idea of which model would be the best fit.

| Focus points | Binaries from a shared repository manager (Nexus, JFrog, GitLab registry, etc.) | Git submodule-based approach |
|---|---|---|
| Working of independent teams | Only build artifacts are shared; access to source code is not needed | Access to source code is needed at least once |
| Code scanning of distinct modules in CM | Only the parent/reactor/orchestrator project is scanned | All the modules get scanned for code quality |
| Auto-trigger from upstream projects | Easily possible with features of external tools like GitLab CI | Submodules need to be updated to the newest commit |
| Independent release lifecycle | Each module can have a separate release cycle | Everything is compiled into one package, so a separate release cycle does not make sense |


So, is the new feature a game changer?

You can see in the above table that, except in the case of code scanning of distinct modules, the new Git submodule-based multi-repo solution does not have the upper hand when the overall deployment process is considered. We would still require a hybrid approach until Cloud Manager is ready to cover all our needs.

Until then, we cannot call the new feature a complete game changer, but it is also not too far away. We hope Adobe moves closer towards a 'one-solution-fits-all' Cloud Manager in its upcoming releases.

Stay tuned for our upcoming article on this topic!


