A Chimpan-Z Use Case
AI Governance is important not only to comply with rules and regulations such as the AI Act, but also to ensure that our AI models operate transparently, ethically and with as little bias as possible, while respecting privacy. However, many companies struggle to incorporate AI Governance. We already wrote a blog post on how you can derive value from AI Governance with our AI Governance framework, but our help does not end there. Today, we present a demo built around our fictitious business-to-business fast grocery delivery company: Chimpan-Z. In this demo, we walk through all the steps for governing our Customer Churn model in the AI Governance module of Collibra, one of our partners. We will explain the purpose of each section in Collibra’s AI Governance module and provide an example from our Customer Churn model.
Within our Collibra instance, we have a separate section for AI Governance. We govern both our data and our AI models in Collibra.

A note for readers who work with Collibra: the AI Governance module is only visible when you have the AI Governance add-on; it is not included by default. Additionally, you can only view the AI Governance module if you have been assigned either the AI Legal Reviewer or the AI Business User role.
With the AI Legal Reviewer role, you can create and delete AI Governance use cases and create the assessments for AI reviews. You can go to ‘AI Legal Reviews’ to either register an AI use case or start a (new) assessment.

You can also view recently created use cases under ‘Recently Created’ and assessments you should review under ‘To Review’.

Below your recently created use cases and the assessments you need to review, you can view your Watch List: a list of all active use cases. There are three risk levels: low risk, medium risk, and high risk. In our example, we can see that our Z-Churn model is in the Monitoring phase and has a medium risk.

With the AI Business User role, you can create and update AI Governance use cases. An AI use case can be in one of three phases: Ideation, Development, and Monitoring.
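To make this concrete, here is a minimal sketch in Python of how a Watch List entry, with its phase and risk level, could be represented. This is our own illustration, not Collibra’s actual data model; the field names are assumptions.

```python
from dataclasses import dataclass
from enum import Enum


class Phase(Enum):
    IDEATION = "Ideation"
    DEVELOPMENT = "Development"
    MONITORING = "Monitoring"


class RiskLevel(Enum):
    LOW = "Low risk"
    MEDIUM = "Medium risk"
    HIGH = "High risk"


@dataclass
class WatchListEntry:
    """One active AI use case as it appears on the Watch List (illustrative)."""
    name: str
    phase: Phase
    risk_level: RiskLevel


# Our fictitious churn model: currently in Monitoring, with a medium risk.
z_churn = WatchListEntry("Z-Churn", Phase.MONITORING, RiskLevel.MEDIUM)
print(f"{z_churn.name}: {z_churn.phase.value}, {z_churn.risk_level.value}")
```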
The Ideation phase
In the Ideation phase, you document the use cases you would like to start. So, before you build your model, you specify what the model should do.
For our Churn Model, the following steps were taken during the Ideation phase:
- Provide Business Context
- Data and AI Models
- Legal and Ethics
- Risks and Safeguards

For the Business Context, we provided information on the following:
- Business Case: focuses on why the model would be developed: the concrete business challenge and what needs to change to solve issues with the current business processes.
- Business Value: describes the business value of the product or service, defined in terms of new revenue, cost reduction, or risk mitigation.
- Use Case Application: defines whether this use case will be used internally or will become a customer-facing product placed on the market.
- Maintenance Costs: a thorough cost frame explaining the cost of running the use case over a selected period of time.
- Business Sponsor: who will sponsor the initiative throughout your organisation.
This helps us determine how the model aligns with the broader goals of the organisation.
The Churn model for our fast delivery service looks like this:

In the Data and AI Models section, we provide information on which models are used in the use case: whether each is a third-party or an internal model, along with descriptions of the training data, inference data, and model output. We also record the data storage, model monitoring, and automation level. Because we use Collibra for our Data Governance as well, all our metadata is already present in Collibra, so we know exactly which data the model works on, in which table, schema, and database it lives, and the quality of that data.
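Because that metadata lives in Collibra, it can also be queried programmatically. Below is a minimal sketch assuming Collibra’s Core REST API v2 assets endpoint; the instance URL, credentials, and asset name are placeholders, and the exact endpoint and response fields should be checked against your Collibra version’s API documentation.

```python
import requests

# Placeholder instance URL and credentials; replace with your own.
BASE_URL = "https://your-instance.collibra.com/rest/2.0"
session = requests.Session()
session.auth = ("api-user", "api-password")  # or token-based auth, depending on your setup

# Look up the Z-Churn model asset by name (assumes the Core REST API v2 /assets endpoint).
response = session.get(
    f"{BASE_URL}/assets",
    params={"name": "Z-Churn", "nameMatchMode": "EXACT"},
)
response.raise_for_status()

for asset in response.json().get("results", []):
    # Each result carries identifiers you can use to traverse relations to the
    # underlying tables, schemas, and databases. Field names may vary per version.
    print(asset.get("id"), asset.get("name"))
```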

For Legal and Ethics, we define pre-set thresholds in a risk assessment. Initiators of the use case should fill in the assessment honestly, after which a risk rating is generated based on their input. Here, we also document which policies are related to the AI model and by which assessment review the AI model is assessed.

The assessment is usually made up of several categories, such as the purpose of the AI model, security protocols, intellectual property risks, data privacy, and business, ethical and other risks. Here you also state the transparency of your AI model and its legal approval and renewal date.
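To illustrate the idea of deriving a risk rating from pre-set thresholds, here is a simplified sketch. The categories, scores and thresholds are made up for this example and are not Collibra’s actual scoring logic.

```python
# Illustrative only: score each assessment category from 0 (no concern) to 5 (high concern),
# then map the total against pre-set thresholds.
answers = {
    "purpose": 1,
    "security_protocols": 2,
    "intellectual_property": 1,
    "data_privacy": 3,
    "business_risk": 2,
    "ethical_risk": 2,
}

LOW_THRESHOLD = 8      # total at or below this is low risk
MEDIUM_THRESHOLD = 16  # total at or below this is medium risk; above is high risk


def risk_rating(scores: dict[str, int]) -> str:
    total = sum(scores.values())
    if total <= LOW_THRESHOLD:
        return "low risk"
    if total <= MEDIUM_THRESHOLD:
        return "medium risk"
    return "high risk"


print(risk_rating(answers))  # -> "medium risk" for the scores above
```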

If the use case is approved, it moves to the Development phase.
The Development phase
Once the use case is approved, the developers can start building the AI model. This phase integrates the AI Governance principles from the Ideation phase into the AI model lifecycle to reduce risks before deployment, ensuring that models are properly documented, explainable and compliant before they are deployed. When a model is well documented, bias-checked, aligned with governance policies and signed off by the corresponding stakeholder(s), it moves to the final stage: Monitoring.
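As a simplified illustration of such a gate (the checks and their names are our own, not a Collibra feature), the promotion to Monitoring can be thought of as a checklist that must be fully satisfied:

```python
# Illustrative pre-deployment gate: every governance check from the Development
# phase must pass before the use case moves on to Monitoring.
checks = {
    "documentation complete": True,
    "bias checked": True,
    "aligned with governance policies": True,
    "stakeholder sign-off": False,  # still waiting on the business sponsor
}

failed = [name for name, passed in checks.items() if not passed]
if failed:
    print("Blocked from Monitoring, open items:", ", ".join(failed))
else:
    print("All checks passed, promoting to Monitoring.")
```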
The Monitoring phase
Finally, the Monitoring phase starts. In Collibra, you can view which models are currently in the Monitoring phase, including their risk level. This phase ensures that all AI models remain compliant, accurate and trustworthy over time, through continuous tracking of AI performance, bias and regulatory adherence.
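As a minimal sketch of what such continuous tracking could look like (the metrics, values and thresholds below are invented for illustration and are not taken from Collibra):

```python
# Illustrative monitoring check: compare the churn model's latest metrics
# against agreed thresholds and flag anything that needs follow-up.
latest_metrics = {
    "accuracy": 0.87,                 # model performance on recent labelled data
    "demographic_parity_gap": 0.06,   # bias metric between customer segments
    "days_since_legal_review": 200,   # regulatory adherence: review cadence
}

thresholds = {
    "accuracy": ("min", 0.85),
    "demographic_parity_gap": ("max", 0.10),
    "days_since_legal_review": ("max", 365),
}

for metric, (kind, limit) in thresholds.items():
    value = latest_metrics[metric]
    breached = value < limit if kind == "min" else value > limit
    status = "ALERT" if breached else "ok"
    print(f"{metric}: {value} ({status}, limit {kind} {limit})")
```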
Of course, use cases will differ per organisation based on legal requirements, transparency, bias and so forth. Clever Republic specialises in creating custom solutions for each of our clients. If you need any help with your AI Governance solution, feel free to reach out to us!