Sorting Fact From Fiction: Nygina Mills on Building Brand Trust in an AI Future


In a future dominated more and more by artificially intelligent agents and algorithms, how can brands build and retain trust? And what can they do to center concerns about safety, ethics, and systemic bias — to say nothing of the potential for “runaway” intelligence — as they integrate AI into their operations?

AI and regulatory compliance expert Nygina Mills brought this perspective to a Harvard Business School symposium last year that focused on the intersection of AI ethics and corporate strategy.

Mills and her fellow experts tackled a range of challenging questions about the future role of AI in the boardroom and the broader economy.

Some, they admitted, are difficult to answer conclusively with the information available today. But the panel was able to address several of the most pressing AI ethics and compliance issues facing decision-makers right now.


Will Companies One Day Run on Algorithms?

For all AI’s benefits, companies don’t yet “run” on it. At best, human decision-makers use models like ChatGPT to gather information and feedback before setting strategy, or perhaps as part of an iterative decision-making process.

However, in the not-too-distant future, AI models will have substantially more autonomy to make impactful decisions that affect company personnel, customers, vendors, and others.

In this paradigm, brands will need to show internal and external stakeholders that they have not “handed over the keys to the computers.” This will require a new type of strategic communications and brand-building — one almost certainly assisted or even directed by AI, but ultimately accountable to humans.

What’s the Role of AI and ML in High-Level Decision-Making Right Now?

For now, AI and machine learning (often grouped under “narrow AI,” as opposed to generative AI) play relatively circumscribed roles within organizations.

These roles often involve automating or otherwise streamlining rote, repeatable tasks like customer relationship management and cybersecurity processes, notes business strategy expert Adam Uzialka. We should not discount their importance, as AI and ML really do drive “faster, better-informed decisions” at every level of the organization, says Forbes’ David Morel.

Indeed, in the near term, forward-thinking business leaders will look for ways to expand these roles while demonstrating to external stakeholders the value they generate.

What Are the Most Important AI Ethics Issues of Our Time?

This question is crucial for those tasked with maintaining or growing brand trust in an AI present and future. Today, AI ethics questions tend to revolve around instances of bias, misinformation, or inaccuracies (“hallucinations”) that produce harmful and/or value-destroying outputs.

To be seen as trustworthy, brands that leverage AI need to be visibly addressing these near-term challenges while laying the strategic groundwork for a more distant future in which AI begins to engage in meaningful executive activity.


What Might Future AI Breakthroughs Mean for American Businesses?

ChatGPT progressed further, faster, than many AI experts believed possible. However, many remain skeptical that continued LLM development will produce the breakthroughs necessary for artificial general intelligence.

Those breakthroughs will probably occur, but they’ll likely be built on different architectures, possibly ones that haven’t yet been designed. Businesses that want to remain competitive in an AI future must pay close attention to highly technical developments in the space; the risks of not engaging as the state of the art advances are simply too great.

Final Thoughts

Many experts believe that AI will one day be as important for organizations as human capital is today, if not more so. If and when that day arrives, how will tomorrow’s leaders manage it?

And will they be grateful for, or resentful of, the preparations today’s leaders made in anticipation?

We don’t know exactly what the future will look like. But we do know that AI ethics will become ever more important as models’ capabilities improve. We can also expect future breakthroughs in AI models — moving beyond the LLM status quo of today — even if we can’t predict exactly when they’ll occur.

Finally, we know that the risks for companies that don’t embrace AI will be just as great as those facing companies that don’t build proper risk management and compliance frameworks into their AI models and AI-assisted processes.

The future is (almost) now. It’s past time to prepare.
