It’s 2025, and we would be remiss if we didn’t discuss AI. With two-thirds of Australians reporting that their organisation uses AI (1), the technology has seen such tremendous growth that not even the most sceptical business leader can ignore it. AI is moving fast and forcing leaders to revisit the fundamentals of how we deliver value. So, it felt only right to make ‘AI means business’ the topic of conversation at our second Referrer’s Lunch of the year.

To help understand how to adopt AI fully and safely, we invited none other than Simon Kriss, one of the world’s most influential AI thought leaders. As Chief AI Officer and advisor to multiple organisations, Simon’s work spans business and government across Australia and the globe, with a focus on the ethical adoption of AI. A recent keynote speaker at the UN in Geneva, Simon makes AI accessible and exciting.

Our Co-Founder and Executive Director, Brett Bonser, opened the event with a very fitting quote from Bill Gates (2), who described AI as “very profound and even a little bit scary.” Right now, we’re standing at the edge, peering over the horizon with no clear view of the destination. This lunch was about surfacing some early insights about the path ahead.

What ‘good enough’ means in AI marketing – and beyond

Hugh Macfarlane, our Founder and CEO, kicked off the proceedings by redefining what ‘good enough’ means right now – especially in the context of AI in B2B marketing.

ChatGPT is the Starbucks of AI. Starbucks didn’t elevate great coffee. It significantly raised the bar for bad coffee. And just like Starbucks, ChatGPT has changed the game by raising the bar for baseline competence in many domains, including AI marketing.

In Hugh’s words, at align.me, “good enough means knowing when we need perfect and when we need done, and delivering the right one of those at the right time. Two years ago, ‘good enough’ meant better than you could do. Today, it means better and more efficient than you can do with ChatGPT.”

At align.me, we’ve responded to this shift by integrating AI into our planning, coaching and day-to-day execution. Our Funnel Plan™ software includes AI agents. In marketing, we have custom GPTs for every tactic.

Whether your focus is AI-driven marketing, sales, customer service, supply chain, or HR, the goal isn’t just to adopt AI, but to set the standard for responsible, ethical, and transparent use. Bold moves need smart safeguards, because our reputation, intellectual property, and client trust depend on it.

But safeguards don’t start with policy – they start with people. Building the right mindset and skillset across your organisation is critical. Before choosing a platform or tool, you need AI literacy.

Are you AI literate?

In adopting AI, mindset and skillset come before toolset. Subscribing to whatever AI platform has captured your attention should be the last step – though many put it first. So, is your organisation AI literate? Are your boards?

Let’s focus on some upskilling. Here are four foundational topics to consider when discussing AI.

The four types of AI

While AI is a broad array of technologies, there are four main types in widespread use in the market today:

  • Machine learning – the original predictive model (e.g. banks use machine learning to help figure out whether a credit card transaction is fraudulent).
  • Traditional AI – the narrow AI behind Google Maps, Netflix, or Amazon suggestions.
  • Generative AI – capable of creating content and interacting in natural language (ChatGPT, Claude, Gemini).
  • Cognitive AI – explains its thinking and justifies outcomes.

We must also clarify the difference between AI agents (which perform a single task) and agentic AI (which can choose which agents to use and make decisions independently).

Adoption pathways

There are four common strategies organisations take in their adoption:

  • Data-first (slowest) – get all the organisation’s data together in one place to understand it, catalogue it, and prioritise it.
  • Experiment-first (fastest) – in Simon’s words, “put two or three people in a room, give them a couple of tools, lock the door and tell them they can’t come out until they’ve solved a couple of business problems.”
  • Process-first (targeted and pragmatic) – where you have a broken or a clunky process, and you want to try and make that part a little quicker (e.g. transcription software integrated into your CRM).
  • Governance-first (most cautious) – for organisations with highly vulnerable clients that want their governance fully in place before doing anything with AI.

Most Australian businesses, Simon notes, are leaning toward experiment-first or process-first, followed by governance-first. That often comes down to how much risk appetite the board has and how entrepreneurial the organisation is.

A peek into the future

Simon explored emerging AI trends:

  • The ‘fight for the desktop’ – as every major platform (Salesforce, SAP, Microsoft, browsers like Perplexity’s) races to become your primary AI assistant.
  • The push for ‘zero learning’ – where we don’t have to train AI.
  • The coming collision of AI and quantum computing – which is set to have a significant impact on speed.
  • The rise of sovereign AI – a major topic in Australia, with countries like Switzerland, Taiwan, and Japan already developing their own models. It’s worth noting that the moment you hit enter on a prompt, your input is processed on an AI chip – often located offshore. That poses real security and privacy risks.

As these trends accelerate, AI competence is becoming the new baseline. SAP (3) is upskilling its global workforce, while Shopify (4) now expects reflexive AI use in every role. The question will no longer be whether to use AI, but how well.

We’re no longer asking, ‘Will AI reshape the nature of employment?’, but how much. To avoid the trap of doomsday scenarios, we can look at history: we’ve seen this before, and we’ve adapted. To emphasise this, Simon predicts a shift away from hiring based on IQ, toward EQ (emotional intelligence), LQ (learning ability), and RQ (resilience). In a world where AI can automate routine tasks, human strengths like judgment, empathy, and adaptability will become increasingly important.

Governance and transparency statements

There’s a new mandate requiring all Australian Federal Government Departments to publish AI transparency statements. While only half met the February deadline, the Digital Transformation Agency (DTA) is now pushing these departments to get it right.

This matters because once your business starts providing services to the government, they’ll ask you for your transparency statement – so they can roll it into theirs. And this isn’t just a token effort. They’ll need to know not just what you do with AI, but also how your vendors (yes, even Deliveroo) use it.

This ripple effect will increase pressure on suppliers and partners to demonstrate real governance.

Two parts of AI governance work together:

1. Responsible, ethical and safe AI framework – the rules that we play by. For example, one could be ‘we will never knowingly productionise AI that causes any harm (physical or psychological) to a human.’

This framework involves four levels of transparency:

  • Customer transparency – how open are we about AI use in products or communications?
  • Organisational transparency – what do we share publicly or with regulators?
  • Internal transparency – how honest are we with our teams about when and where AI is used?
  • Personal transparency – when should individuals declare AI involvement?

Every organisation will need to work through this and set its own ethical threshold for what needs to be marked as AI-derived and what doesn’t. And it will change over time.

2. AI governance model – how the rules we previously designed will be implemented: who meets, how often, how decisions are made, and how we differentiate between low-risk applications of AI and those that are very high-risk or prohibited.

An important concept is ‘situational ethics’ – the understanding that governance isn’t always black and white, but varies depending on the use case, data sensitivity, and potential impact.

Group discussions

With that context in mind, it was time to get into the nitty-gritty, one of our favourite parts of the lunch. Attendees broke into four groups to discuss the issues that Simon had raised.

The first workshop centred on the rules we play by, and the question we asked was “What rules should govern how we allow our team to adopt and use AI?”

1. ‘Do no harm’ emerged as a guiding principle – but before rules can be set, it’s important to define who is being protected: people, organisations, communities, privacy, or data.

2. Five ‘human in the loop’ rules were proposed, focused on:

  • content editing
  • AI training
  • determining when AI can and cannot make a decision
  • ensuring that accountability remains with humans
  • defining access and risks through a delegation of authority

3. A working definition of AI: “systems that can complete tasks that previously only humans could do.”

4. Risk appetite must be defined, with the acknowledgement that moving too slowly can be just as risky as moving too fast.

Our attendees had a tendency to answer the ‘how’ rather than the ‘what’. We’re all eager to move forward but sometimes skip the foundational questions.

Simon suggests the practical path forward is not to try to write one perfect rule, but to think of it as managing around 10 big rules or tenets, each with little nuances underneath. Across your entire framework, these can add up to as many as 60 to 80 rules. You might start with a very simple framework, and as your use cases for AI become more complex, your framework starts to expand.

We then moved straight to the second conversation: How will we enforce the rules? How might we control the deployment of AI systems?

1. Ideas emerged around the practical structures businesses need to manage AI safely and effectively:

  • Have the board oversee AI use
  • Set up a dedicated AI ethics or governance committee
  • Consider inbuilt rules, and let AI help run them
  • Weigh the balance between too much and too little governance
  • Be careful about public access to sensitive data

2. Another line of thinking explored how existing structures – particularly audit and risk committees – could take on responsibility for AI governance. Suggestions included using AI itself to monitor and enforce rules and aligning with a shared set of guiding principles.

3. A strong theme was that the greatest risk may be inaction (think of Blockbuster). Key recommendations included:

  • Open up, don’t lock down – if you’re blocking AI use within your business, people will simply use it another way (shadow IT)
  • Bring IT and HR into the boardroom and make education the first step
  • Ensure the board understands AI well enough to lead with it – not just react to it
  • Note that having a generic policy around AI is akin to having a generic policy on how to use a computer. It’s not sufficient anymore.
  • Create an AI code of conduct – one that actively guides behaviour, not one that sits in a drawer

Key takeaways

The key takeaway from this discussion is that AI must be approached with intention. We must be aware of the risks and adopt an education-first mindset for the entire organisation. The goal is not to slow down innovation, but to steer it in the right direction. Humans need to be at the centre of the conversation.

Simon shared a powerful final model: a 2×2 matrix mapping use case risk against data sensitivity to determine appropriate governance levels. “The same AI tool might require different controls depending on what data it’s handling,” he explained. By mapping your governance to your risk, you can make sure you’re not being overly restrictive. A low-risk use case involving highly sensitive data deserves very different treatment from a high-risk use case with low-sensitivity data. This way, we can apply AI principles with nuance.
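To make that matrix concrete, here is a minimal sketch of how the mapping might be expressed in code. The level names and governance tiers below are our own illustrative assumptions, not Simon’s exact terms; the point is simply that the governance treatment is looked up from the combination of use case risk and data sensitivity.

    from enum import Enum

    class Level(Enum):
        LOW = "low"
        HIGH = "high"

    # Illustrative governance tiers - the labels are assumptions, not prescribed terms.
    GOVERNANCE_MATRIX = {
        (Level.LOW,  Level.LOW):  "Light touch: team guidelines and periodic review",
        (Level.LOW,  Level.HIGH): "Data controls: privacy review and restricted data access",
        (Level.HIGH, Level.LOW):  "Use-case controls: human in the loop and documented sign-off",
        (Level.HIGH, Level.HIGH): "Full governance: committee approval, audit trail, or prohibition",
    }

    def governance_for(use_case_risk: Level, data_sensitivity: Level) -> str:
        """Return the governance treatment for a given risk / sensitivity pair."""
        return GOVERNANCE_MATRIX[(use_case_risk, data_sensitivity)]

    # The same AI tool can require different controls depending on the data it handles.
    print(governance_for(Level.LOW, Level.HIGH))   # low-risk use case, highly sensitive data
    print(governance_for(Level.HIGH, Level.LOW))   # high-risk use case, low-sensitivity data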

While the future of AI is still unfolding, one thing is clear: now is the time to act.

Sources

(1) KPMG, Trust, Attitudes and Use of Artificial Intelligence: A Global Study 2025

(2) https://www.harvardmagazine.com/2025/02/harvard-bill-gates-ai-and-innovation

(3) https://news.sap.com/2025/07/sap-learning-time-to-competency-ai-age

(4) https://www.digitalcommerce360.com/2025/04/08/internal-memo-shopify-ceo-declares-ai-non-optional

Author - Hugh Macfarlane

Hugh Macfarlane is the founder and CEO of align.me. He’s the author of ‘The Leaky Funnel’, hundreds of video blogs, papers and ebooks, and a handful of research reports on all things alignment.