AI and Enterprise Architecture: A Five-Part Blog Series Unpacking the Good, the Bad, and the Dubious
The rise of AI is simultaneously exciting and terrifying. To have it amongst us now, changing how we work and live, opens what feels like a Pandora’s box of possibilities. Immense potential, but at what cost? And how will it continue to evolve?
These burning questions are just as pressing in the field of Enterprise Architecture. While there is no crystal ball with infallible answers, we do have the next best thing. Tapping over 40 years of combined practitioner expertise, the blog series “Generative AI and Enterprise Architecture” provides an in-depth perspective on how generative AI will reshape EA as we know it. Co-written by Ed Granger and Ardoq’s Chief Enterprise Architect, Jason Baragry, this first installment sets the scene by introducing the five axioms that define EA.
If you'd like to skip ahead, here are the other four parts of the blog series:
- Generative AI and Enterprise Architecture: Modeling the Enterprise
- Generative AI and Enterprise Architecture: Roadmapping the Future
- Generative AI and Enterprise Architecture: Driving Business Outcomes
- Generative AI and Enterprise Architecture: Empowering Changemakers
Introduction: Future Shock
For Enterprise Architects, these are exciting and uneasy times.
More than a year after ChatGPT’s public debut, the profound future shock from generative AI’s eruption into the mainstream shows no signs of letting up. At the heart of this is the essential unknowability of these foundation models, which, coupled with their impressively human-like capabilities, makes them something like Rorschach inkblots onto which we project our own hopes and fears.
Are we on the verge of Artificial General Intelligence, or are these AIs just “stochastic parrots”, brilliant mimics with no real understanding of linguistic meaning?
That’s essentially the debate around what generative AI is versus what it does.
While we don’t have much to add to the first question, we do want to address the second, what AI does, and reframe it in a far narrower context: Enterprise Architecture.
As architects ourselves, we have no doubt that generative AI is consequential.
The more foundational a technology, the bigger the disruption. Just as James Watt’s steam engine gifted the nineteenth century with the motive power to automate everything from transport to weaving to farming, generative AI promises to transform the production of digital products and services in the twenty-first.
And that means its potential to impact the processes and systems that make up the organization is huge.
But all new technology also has to undergo a twofold process of optimization and integration. It has to be made both reliable and integral. Generative AI is not quite at that point, but it will be soon.
And when it is, will the architects be ready? It’s far from certain.
While Enterprise Architects already have frameworks and methods for planning the introduction and governing the implementation of technologies into the enterprise ecosystem, many of us would also reluctantly admit that those same frameworks and methods, mostly developed in an earlier era of computing, were already under duress in the pre-generative era of digital transformation.
Now, new technology is again challenging not just what we architect but how we architect.
So, will EA be completely overmatched by the coming AI wave?
When the foundations themselves are moving, nothing is certain. Enterprise Architects can be forgiven for finding their faith in their own practice more than a little shaken.
Crises of faith demand articles of faith. So, to understand how Enterprise Architecture can step confidently into this new era, we really need to start with what’s not going to change.
Self-Evident Truths: 5 Axioms of Enterprise Architecture
The authors of the US Declaration of Independence famously wrote, “We hold these truths to be self-evident.” In doing so, they created axioms — statements that don’t require further proof.
In mathematics and formal logic, axioms are important because they give us solid foundations on which to build a chain of reasoning. Without them, we tend to spiral into doubt and confusion.
So, in the face of profound technology disruption, Enterprise Architecture needs to state its own axioms—not as a defensive posture but as a platform for evolving our practice to suit this fast-emerging world while still holding true to our mission.
These are the five self-evident truths about Enterprise Architecture we believe were true yesterday, are true today, and will be true tomorrow.
- Enterprise complexity will tend to increase.
- Enterprise Architecture must model the enterprise.
- Enterprise Architecture must roadmap the future.
- Enterprise Architecture must be driven by business outcomes.
- Enterprise Architecture must be close to the changemakers.
In this blog series, we’ll take each one in turn and test it, to the best of our abilities, against the current and projected future capabilities of generative artificial intelligence.
Axiom 1: Enterprise Complexity Will Tend To Increase
Whether we know it or not, managing the complexity of the enterprise is a primary focus for Enterprise Architects.
We say ‘whether we know it or not’ because complexity is a pretty abstract concept, and it tends to hide behind a host of other labels, for example:
- Technical debt
- Broken customer journeys
- Legacy platforms
- Cyber vulnerabilities
- No single view of customer/product/sales
- Duplicate business or technology capabilities
- Portfolio misalignment
When we don't understand how the parts fit together or the effects of their evolution, business-IT alignment suffers.
But it’s also important to understand that enterprise complexity isn’t automatically a bad thing.
Complexity comes in different flavors: value-creating and value-destroying. The above examples fall naturally into the ‘bad’, value-destroying category because they’re the kinds of problems architects are paid to solve. However, an enterprise’s ability to survive in a marketplace also depends on its complexity.
An enterprise can effect complex value transformations, creating new products and services beyond the capabilities of any individual, and it can engage with complex, segmented marketplaces to tailor those products and services to individual needs.
There are an awful lot of enterprise roles, from product managers to engineers, whose job is to build that value-creating complexity.
So when does enterprise complexity become a problem?
It’s hard to say exactly. What we do know is that as enterprise systems — technology systems, process systems, people systems — scale and evolve, they tend to lose coherence. Processes and platforms proliferate; information fragments. We have just as many names for the results of this:
- Slow time to market
- High risk
- High operating costs
- Poor alignment on execution
Not to mention, on an experiential level, frustration — from customers, employees, and leaders — that at every turn, our intent is thwarted and momentum killed.
So, for Enterprise Architects, it’s an unending battle: balancing the high complexity the enterprise’s products and services need in order to interface with complex marketplaces against the low complexity its operations need for efficiency and transparency.
That’s why it’s our first axiom.
So, what will generative AI do for the complexity of the enterprise?
Will it usher in a new age of productivity, operational excellence, and product innovation? Or will it proliferate a toxic complexity and have the opposite effect?
Most likely both.
Complexity Good and Bad
On the side of value-creating complexity, generative AI has the potential to provide truly personalized experiences in education, customer experience, lifestyle advice, and more.
So many of our own experiences, as both consumers and employees of organizations, are standardized and mass-produced beneath a thin veneer of personalization because providing a truly bespoke experience is uneconomic. Rules-driven personalization that addresses us by name and analyzes our preferences is a common commodity, but it is not a truly adaptive experience. Generative AI offers the potential for everybody to receive true gold-card treatment.
Here, we’re beginning to talk about product and service innovation. However, Enterprise Architects tend to be more focused on what happens inside the enterprise rather than at its edges, which is traditionally the domain of product, sales, and marketing.
So, what are the corresponding implications for business and IT operations?
No matter which architecture dimension we measure them on — business, technology, or information — there are big complexity risks. So, let’s choose a common EA mission as an example: Application Portfolio Management (APM).
APM manages the health and complexity of the enterprise’s application estate by governing the lifecycle of those applications. The motives for doing this typically boil down to three fundamental drivers: cost, risk, and agility.
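To make those three drivers concrete, here’s a minimal, hypothetical sketch of the kind of record an APM inventory keeps for each application and how it might flag candidates for review. The field names and scoring scales are illustrative assumptions, not Ardoq’s data model.

```python
from dataclasses import dataclass
from enum import Enum

class LifecycleStage(Enum):
    PLAN = "plan"
    ACTIVE = "active"
    PHASE_OUT = "phase out"
    RETIRED = "retired"

@dataclass
class Application:
    name: str
    lifecycle: LifecycleStage
    annual_cost: float   # licence + run cost per year (the cost driver)
    risk_score: int      # 1 (low) to 5 (high), e.g. security or vendor risk (the risk driver)
    agility_score: int   # 1 (hard to change) to 5 (easy to change) (the agility driver)

def flag_for_review(portfolio: list[Application]) -> list[Application]:
    """Return active applications that look high-risk or hard to change."""
    return [
        app for app in portfolio
        if app.lifecycle is LifecycleStage.ACTIVE
        and (app.risk_score >= 4 or app.agility_score <= 2)
    ]
```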
How will Generative AI affect the complexity of the enterprise application estate? It will almost certainly increase it.
To see why, let’s split our applications into two categories: those we buy (i.e., COTS, commercial off-the-shelf) and those we build ourselves in-house.
Proliferation in COTS
For COTS applications, we should first ask if generative AI will increase the number and diversity of enterprise applications. Will our total application counts rise?
Probably not: we’ve already reached peak automation, where every business function is underpinned by some kind of technology foundation.
More likely, generative AI interfaces will rapidly proliferate throughout the enterprise.
In a recent blog post, Bill Gates sketched a vision of autonomous Gen AI agents that can independently understand, plan, and act to meet users’ needs. Gen AI agents become something like a butler: a semi-autonomous, personalized aggregator of third-party application services.
The idea of a single AI interface is compelling, but it’s a vision we suspect won’t materialize for enterprise IT. It’s just not in the interests of B2B software vendors to cede ownership of UX.
Thanks to OpenAI, Meta, Google, Anthropic, and the like, the LLM, although revolutionary, is also immediately commoditized and accessible via subscription or download. But when the LLM, and with it the conversational UX, is commoditized, what’s the unit of differentiation?
Most likely, that’s the database and the transaction and analytic processes that surround it. This is the value lever because the LLM is dependent on the data it has access to — either at the time of training or at the point of query — to provide relevant answers.
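To illustrate why the data, rather than the model, is the value lever, here’s a minimal sketch of query-time grounding: the commodity LLM only gives relevant answers when the application injects its own records into the prompt. Both `search_app_records` and `call_llm` are hypothetical stand-ins for a vendor’s data layer and whichever LLM endpoint they use.

```python
def search_app_records(query: str, limit: int = 5) -> list[str]:
    """Hypothetical stand-in for the vendor's own search/database layer."""
    # A real implementation would query the application's data store.
    return ["<record 1 relevant to the query>", "<record 2 relevant to the query>"][:limit]

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for any commodity LLM chat endpoint."""
    raise NotImplementedError("Wire this to the LLM provider of your choice.")

def answer_with_context(question: str) -> str:
    # Without the injected records, the commodity model has nothing
    # vendor-specific to reason over; with them, it does.
    records = search_app_records(question)
    prompt = (
        "Answer the question using only the records below.\n"
        "Records:\n" + "\n".join(records) +
        f"\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```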
And will enterprise application vendors be willing to relegate themselves to ‘headless’ service providers whose sole task is to feed somebody else’s AI? Probably not. Without a visible unit of value, their subscriptions become harder to defend.
Instead, we may well see a hardening of data access and a proliferation of Generative AI interfaces.
US venture capital firm Sequoia’s excellent analysis of the state of generative AI B2B applications shows this process is already well underway, with new entrants and existing players racing to incorporate copilots or conversational interfaces into their offerings.
Instead of the single all-knowing AI servant, we may be on the verge of a new age of animism. While our Neolithic ancestors had to negotiate rites of passage with the gods of the forest and the spirit of the waterfall through ritual and sacrifice, we may soon need to negotiate with a pantheon of minor AI “deities”: the spirit of the expense system and the djinn of holiday bookings.
Flights of fancy aside, with generative AI, the problems of functional duplication and process integration that architects have grappled with for more than two decades may simply reassert themselves in a new form.
Code Begets Code
So that’s COTS – what about in-house?
Here, it’s a pretty safe bet that technical debt will go through the roof; we’ll see an explosion of home-cooked AI-driven automation across all parts of the enterprise.
This is simple economics built on top of existing trends in enterprise demographics.
Demographics: The Rise of Business Technologists
Successive waves of technological innovation have already democratized the process of technology change far beyond the borders of IT. A 2022 Gartner study reported:
“41% of polled organizations’ workforces were business technologists – technology workers not reporting to the CIO – while only 10% made up the IT department.”
- Gartner, 2022
This matters because business technologists are usually not subject to the same kinds of risk and budgetary constraints as IT. Much of that is because they can categorize their activities as revenue generation rather than being a pure cost center like their IT counterparts.
Economics: The Burgeoning Cost of Production for New Automation
We can’t put it better than Andrej Karpathy, former Director of AI at Tesla and founding member of OpenAI:
“The hottest new programming language is English.”
Prompt engineering can now create repeatable automation, content, and code without needing dedicated technical expertise. Generative UIs — where the LLM dynamically generates HTML based on user interaction — are now a reality.
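As a sketch of what that looks like in practice, the snippet below turns a plain-English request into a throwaway HTML fragment at runtime. `call_llm` is again a hypothetical stand-in for whichever LLM endpoint a team uses, and the example request is invented.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for any commodity LLM endpoint."""
    raise NotImplementedError("Wire this to the LLM provider of your choice.")

def render_form(user_request: str) -> str:
    # The "program" here is English: the prompt, not hand-written markup,
    # determines what UI the user gets.
    prompt = (
        "Generate a self-contained HTML form for the following request. "
        "Return only HTML, with no explanation.\n"
        f"Request: {user_request}"
    )
    return call_llm(prompt)

# e.g. render_form("a form to log a client dinner under Travel & Entertainment")
```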
To see the potential of LLMs in easing application governance on the Ardoq platform, read our latest guide to AI-based reference creation.
The near-term potential of generative AI to replace traditional hand-coding by engineers has probably been overstated. However, even in a resource-augmentation model, the claimed productivity gains are attention-grabbing: GitHub Copilot claims a 55% increase in developer productivity.
Not everyone is convinced that increased developer productivity is a good thing. There’s some smart analysis suggesting that more code just means more complex code, not necessarily better code. But good code or bad code, when it comes to developer tools, the one thing generative AI isn’t promising is less code.
We already know from a decade or more's experience that low-code solutions that lower the cost of production can raise risk and maintenance costs if that code is poorly understood. COTS applications proliferate auto-generated integrations whose inner workings are obscure and which create undocumented process dependencies. There’s no reason to believe generative AI will do anything other than accelerate this trend.
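To give a flavor of the problem, here’s the kind of generated glue code that tends to accumulate: it works, but it silently couples an ordering flow to a billing system nobody has documented. The endpoint, field names, and systems involved are all invented for illustration.

```python
import json
import urllib.request

def sync_order(order: dict) -> None:
    # Generated glue: an undocumented process dependency between the ordering
    # flow and a billing system, hidden behind a hard-coded URL.
    payload = json.dumps({"ref": order["id"], "amt": order["total"]}).encode()
    request = urllib.request.Request(
        "https://billing.internal.example/api/v2/charges",  # invented endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)  # no retries, no error handling, no audit trail
```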
The raft of potential security issues this may bring could keep Chief EAs and CISOs awake at night, but experience suggests that Sales and Marketing teams, with their eyes fixed squarely on AI-driven competitors, will be far more sanguine about the risks of rapid code proliferation.
Generative AI Generates
There is one big question mark over this hypothesis: it depends on current demographic trends continuing or even accelerating. But will they?
What if generative AI destroys far more jobs than it creates? Will organizations need as many prompt engineers as they do low-code developers? Productivity gains can mean the enterprise does more work with its current resources or the same work with fewer resources.
This is the biggest unknown and one we’ll come back to.
Either way, a fundamental truth remains: Generative AI is there to generate.
However, keeping an organization competitive often depends on keeping its operations lean and efficient. So, it’s easy to imagine how generative AI will add complexity to the enterprise, but it’s harder to see how it will take complexity out.
The next part of this blog series on generative AI and Enterprise Architecture will tackle its potential impact on the very core of EA as a practice: modeling the enterprise.
If you're hungry for more content on AI and EA, watch our webinar on demand.
This blog series has been co-authored with Ardoq’s Chief Enterprise Architect, Jason Baragry.