Delivering AI in Corporates: How It's Really Working

An in-depth discussion with Maaike Van Den Branden on the practical realities of implementing AI in enterprise organizations

Maaike Van Den Branden

Chief Data Officer

Use Cases & Prioritisation

At Mumsnet, when generative AI became accessible, every team was urged to get access to ChatGPT and play with it as much as possible. Workshops were held to understand which processes were taking up a lot of time and could potentially be automated or supported by generative AI. Of course, generative AI was not the solution to every project idea; a lot of ideas merely required automation. These were looked into in more detail and prioritised based on a cost-benefit analysis.

A lot of the ideas could be resolved simply by embedding the use of ChatGPT into processes or by creating custom GPTs. A few larger projects kicked off too, however, where generative AI was critical to their success. MumsGPT, a qualitative research tool, was one of them. Another was the automatic generation of advertising posts for Mumsnet's affiliate programme.

Key Insight

"For a smallish firm, it was quite straightforward to go through this process; however, I can imagine that larger firms will struggle. We are talking about a completely new technology, not always well understood by people without technical backgrounds, which is suddenly available and can be incredibly impactful for some businesses and business areas."

This process requires an organised approach, led and guided by visionaries: preferably people who understand the art of the possible, but who are also passionate, open-minded and quick-thinking. These individuals need to be able to look at business processes and re-imagine them in light of these new technologies.

I believe that a lot of firms are still stuck in the stage of trying to implement ChatGPTs and Copilots across the business. Whilst that will certainly improve individual productivity, it's most likely not going to move the dial for these organisations. Unfortunately, enabling this access alone is often already a massive project: agreeing on guidelines, ethics, guardrails and what data can be uploaded, providing training courses, etc.

The Competitive Reality

However, businesses need to get beyond this stage to really make genAI a success, and that means undertaking a thorough examination of their processes and working with experts to see where and how genAI could be useful for them or create competitive advantages. If they don't, their competition will. This is not something companies can just sit out; it's not going to go away. It's like when motorised vehicles were introduced to the world: the companies that kept delivering by horse and cart just couldn't survive and went bust.

Where Does AI Sit in the Organization?

As you can see, I keep using the term genAI rather than the general term AI. Having worked in data science for the last two decades, I know most businesses are already truly experienced in AI use cases. Machine Learning is also a form of Artificial Intelligence, so every company that has built a customer segmentation, churn model or propensity model has had AI in its roadmap for years. Machine Learning and data science use cases are usually the responsibility of the CDO.

The complexity with genAI arises from the fact that not every company requires fine-tuning a large language model, let alone building and training their own; that will be the minority of companies. The vast majority can use off-the-shelf solutions, hosted in-house or accessed via API connections. The skill to get projects like this up and running is not necessarily a data skill. It could perfectly well sit within a technology team with traditional engineers, so one could argue that the responsibility for AI should sit with the CTO.

However, as with traditional AI projects, the success of many genAI projects depends on the quality of the data, and that's where it keeps reverting to the remit of the CDO. I even see newly created functions and roles focusing on AI alone. Whilst I agree there should be individuals responsible for the delivery of (gen)AI projects, and new roles created, I think genAI projects should be treated and prioritised like any other AI and/or software project, and that they should sit with the teams most impacted by introducing them.

Value Measurement

GenAI projects can drive value in many different ways: cutting costs through working more efficiently, but also generating more with the time that has been freed up. These projects can become completely new revenue streams too (e.g. new premium features, or a standalone new product).

Value Measurement Framework

In my opinion, value measurement for genAI is not any different than it was before for other projects. Investment in a new technology or new machine learning technique requires a clear business case:

  • What are we going to achieve through it?
  • Why are we doing it?
  • How much is it going to cost us to deliver this project (and to keep it running)?
  • How much will it generate for us?

Once all of these questions are answered, the project can be prioritised accordingly. It is incredibly important for businesses to do this rigorous exercise rather than just getting excited by the sparkly new thing, and to put cost versus outcome in perspective compared to other (non-(gen)AI) projects. Most of all, genAI is not an end goal in itself; it's a means to deliver a solution that resolves a true business problem.
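The business-case questions above can be sketched as a simple scoring exercise. A minimal sketch follows, assuming a fixed value horizon and a cost-versus-value ratio as the prioritisation signal; all project names and figures are hypothetical, purely to show the mechanics:

```python
# Hedged sketch: rank hypothetical (gen)AI project ideas by a simple
# net-value-per-pound-invested ratio. All figures are illustrative
# assumptions, not real Mumsnet numbers.

projects = [
    # (name, delivery cost, annual running cost, expected annual value)
    ("Custom GPT for support replies", 20_000, 5_000, 60_000),
    ("Affiliate ad-post generation", 50_000, 15_000, 90_000),
    ("Qualitative research tool", 120_000, 30_000, 200_000),
]

def score(delivery, running, value, horizon_years=3):
    """Expected net value over the horizon, per pound invested."""
    total_cost = delivery + running * horizon_years
    total_value = value * horizon_years
    return (total_value - total_cost) / total_cost

ranked = sorted(projects, key=lambda p: score(p[1], p[2], p[3]), reverse=True)

for name, delivery, running, value in ranked:
    print(f"{name}: score {score(delivery, running, value):.2f}")
```

The point is not the exact formula but that each idea, genAI or not, passes through the same cost-versus-outcome comparison before it is prioritised.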

How Does It Affect Data Functions?

As I alluded to before, genAI solutions do not necessarily need to be built by a data team, whereas traditional AI/ML projects do. The data team members most likely to be affected are the data scientists (if required to fine-tune or build LLMs), (ML) engineers creating pipelines, and data quality managers ensuring the quality of unstructured text data is up to scratch.

However, every data team member can optimise their way of working with generative AI by using it as a sparring partner for coding, help with writing documentation or even generating basic insights. A lot of the boring groundwork can be outsourced, freeing the team to focus on the value-add work.

Shifting Skill Requirements

There will likely be a shift in required skills, with less focus on basic script writing and more on prompt engineering, advanced analytics and domain expertise & strategy. Insights will be delivered faster, but the focus will be on qualitative insights. With AI's proneness to hallucination, quality assurance (explainability, traceability, rigorous testing & validation) and an in-depth understanding of the business will become even more critical than before.

The arrival of genAI likely also means the arrival of new roles within the team/company, such as prompt experts, MLOps engineers or AI governance/ethics leads. I believe there will also be a shift in processes as a result of everyone in a company having access to genAI. I think many stakeholders will initially try to resolve their questions themselves and only reach out to the data team when it becomes more complicated.

Finally, AI brings increased risks in terms of privacy and security, ethical decision making and model performance drift over time. The arrival of genAI requires an even closer collaboration between data teams and risk and compliance teams.

How Do You Get It to Land? Going Beyond Copilots

Companies get generative AI projects to land by treating them less like shiny tech experiments and more like disciplined business change. The pattern is pretty consistent across organisations that succeed.

Success Patterns

  • Start from a business problem, not from 'look at what this shiny new thing can do'
  • GenAI projects need a business owner to sponsor and drive the project
  • Many AI projects require a change in processes, rather than becoming a bolt on
  • It has to be super clear where AI assists, where humans decide, and where controls sit
  • Process steps that become redundant because of AI need to be removed

If people need to go out of their way to use the AI, they probably won't adopt it. From experience, if there is too much focus on model quality rather than adoption, the uptake will be low. Businesses need to invest equally in UX, feedback loops, training and trust. If we understand who uses the tools, why they use them, what bottlenecks exist, the solutions we create will become much more embedded in daily work and lead to actual behaviour changes.

The Marathon Approach

The most successful AI projects are not big bang projects. Successful AI projects tend to start small, gain trust and then expand. By starting small I mean:

  • Narrow data sources accessible
  • Clear permissions
  • Defined output formats
  • Human-in-the-loop by default

Starting this way avoids hallucinations becoming decisions, compliance teams coming in and killing the project late, and trust eroding because of a few bad answers. Starting with a constrained environment might look like a lack of ambition, but a reduced initial blast radius will ensure a more successful landing. It's a marathon, not a sprint.
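The "human-in-the-loop by default" principle above can be sketched as a simple approval gate: nothing the model drafts is acted on until a named reviewer signs it off. This is a minimal sketch with the model call stubbed out; the function and field names are my own, not any specific product's API:

```python
# Hedged sketch of a human-in-the-loop gate: unapproved model output
# can never leave the system. generate_draft() is a stub standing in
# for whatever LLM the team actually calls.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    text: str
    approved: bool = False
    reviewer: Optional[str] = None

def generate_draft(prompt: str) -> Draft:
    # Stub: a real system would call an LLM here.
    return Draft(text=f"[model draft for: {prompt}]")

def approve(draft: Draft, reviewer: str) -> Draft:
    # Record who signed off, for traceability and auditability.
    draft.approved = True
    draft.reviewer = reviewer
    return draft

def publish(draft: Draft) -> str:
    # The gate itself: publishing without review raises an error.
    if not draft.approved:
        raise PermissionError("Human review required before publishing")
    return draft.text

d = generate_draft("summarise this week's forum feedback")
d = approve(d, reviewer="analyst@example.com")
print(publish(d))
```

Making the review step a hard failure, rather than a convention, is what keeps a few bad answers from eroding trust in the pilot.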

Skills & Ethics

I already mentioned that new roles will be created (and are being created already), but for existing data team members there will also be a shift in required skills. Skills like critical thinking, problem framing, system design and data intuition are going to become much more important in early careers than they used to be, as genAI can act as a junior analyst/data scientist and already performs a lot of basic tasks like coding, one-off analyses and building basic reports.

That doesn't mean humans can't add value anymore. Understanding how genAI works means that analysts need to be able to quickly spot and challenge mistakes or problems using statistics, domain knowledge and logic. This is where ethics comes into play as well.

Ethical Risks GenAI Introduces

  • Authority bias: People tend to over-trust fluent, well-written outputs
  • Hidden bias amplification: GenAI learns from historical bias and amplifies this in the outputs
  • Hallucinations: Outputs will sound precise and correct even when confidence should be low
  • Data leakage and IP risk: Prompts can unintentionally expose sensitive data, proprietary logic and/or regulated information

This means analysts need to design mandatory review points, bias testing processes and calibration tests, and embed information security into the process. As a result, teams should be explicitly developing skills like model literacy (to understand how LLMs can fail), explainability skills (for non-technical stakeholders), governance-by-design (to ensure logging, traceability and auditability) and the ability to perform ethical reasoning under ambiguity.

Whilst some of these skills were required before, they are now critical survival skills. For senior leaders this also means a shift in skills: before genAI, experience showed up in what you built; now it will also show up in what you prevent.

How Do Companies Weigh Up Risks Versus Reward?

This depends very much on the sector in which companies operate and on whether genAI is being used for internal or external-facing processes. At Mumsnet, we immediately decided to make it clear to our users that no AI would be used to generate posts, although it would be an incredibly easy thing to do. The value and strength of the platform is that it's from parents to parents, and our CEO wanted users to have trust in those conversations.

Even with that commitment, you can't ensure that users themselves don't use AI to draft their answers, leading to speculation and posts being reported as AI generated. Getting this wrong can really undermine brand value and perception, so companies need to be cautious.

Involving risk and compliance functions and making sure there are clear guardrails and ethical guidelines is even more necessary than before. Some processes and customer outcomes require complete transparency and explainability, and in those cases black-box LLMs are probably not your best bet. I personally don't think the risk-versus-reward calculation is any different than for any other project, except that the risk is heightened by potential hallucinations, bad answers and the black-box nature of LLMs, so the response should be heightened too.

Vendor Lock-in & Build Versus Buy

The risk of vendor lock-in with genAI is real. There are numerous LLM providers out there and the market is changing at lightning speed. Currently, token costs are very reasonable, and choices can be made between running on premises, in the cloud or connecting via API, the latter currently being the cheaper option for companies such as Mumsnet.

Scenario Planning Questions

  • What is going to happen if our uptake/engagement with the tools goes up?
  • What is going to happen to our costs if we have more data?
  • Where do we reach the tipping point between both solutions?
  • Is it worth investing more initially to have cost-certainty in the future?

However, I don't believe this is going to last. Once enough users are locked in and there is some stability, I believe these vendors will raise their prices, so although it might seem fast and cheap to build solutions using API connections, businesses may want to think this decision through and do some proper scenario planning.
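The tipping-point question in the scenario planning above comes down to back-of-the-envelope arithmetic: pay-per-token API costs grow with usage, while self-hosting is roughly a fixed cost. A minimal sketch, in which every price and volume is an illustrative assumption rather than a real vendor quote:

```python
# Hedged sketch: at what monthly token volume does pay-per-token API
# usage overtake a fixed self-hosting cost? All numbers are assumptions
# for illustration only.

API_COST_PER_1K_TOKENS = 0.002    # assumed blended price per 1k tokens
SELF_HOST_MONTHLY_COST = 3_000.0  # assumed infra + ops cost per month

def api_monthly_cost(tokens_per_month: float) -> float:
    return tokens_per_month / 1_000 * API_COST_PER_1K_TOKENS

def tipping_point_tokens() -> float:
    """Monthly token volume at which self-hosting becomes cheaper."""
    return SELF_HOST_MONTHLY_COST / API_COST_PER_1K_TOKENS * 1_000

for monthly_tokens in (10e6, 100e6, 1e9, 2e9):
    api = api_monthly_cost(monthly_tokens)
    cheaper = "API" if api < SELF_HOST_MONTHLY_COST else "self-host"
    print(f"{monthly_tokens:>13,.0f} tokens/month: API {api:,.2f} -> {cheaper}")

print(f"Tipping point ~= {tipping_point_tokens():,.0f} tokens/month")
```

Under these assumed numbers the crossover sits around 1.5 billion tokens a month; the useful output is not the figure itself but seeing how sensitive it is to the token price a vendor could later raise.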

Build vs Buy Guidelines

With regards to build versus buy, I think this should be evaluated on a project-by-project basis, as it definitely depends on the complexity of the project and what is already available in the market. That said, there are some clear guidelines that can be used:

  • For any commodity problems (like lead scoring or an internal knowledge bot), go with an API architecture first, but ensure exit clauses exist
  • Build where you need complete control over core IP (for example, anything that touches pricing or is client-facing)
  • Work with modular designs
