2024 is an exciting time to be involved in Artificial Intelligence. As models improve at a blistering pace and companies jockey to one-up each other with increasingly powerful tools, society stands to benefit. However, the details of implementation are not always immediately obvious or easy. Our company, Austin Ai, helps organizations of all sizes adopt AI in a predictable, secure, and effective way, to drive productivity, automate mundane tasks, and generally make life easier.
This post summarizes what we’re seeing on the ground as we help existing clients and have conversations with potential ones. If 2023 was the year that put generative AI into the public consciousness and imagination, 2024 has been the year where organizations are “starting to do something”. That “something” has ranged from purely exploratory, educational conversations to all-in commitments to adopt an AI strategy and team, and everything in between. A large part of the momentum stems from the popularity of generative AI tools, like LLM chatbots and image/video generators, and the associated hype in the media and financial markets. Other factors include the falling cost of cloud compute, which has removed many barriers to entry, and generational shifts in attitudes toward technology.
At any rate, some of the most common concerns we hear from clients and prospects include: “knowing where to start”, security, challenges surrounding data, and organizational and management challenges. Let’s briefly discuss each of these in turn.
“Knowing where to start” means choosing which projects to tackle and which resources (human and capital) to allocate. What might seem like a simple decision can quickly bog down as one investigates the myriad available technologies, which are themselves changing rapidly. Mapping the appropriate technology to business needs – with ROI ultimately in mind – can seem daunting once you peel back the layers of the onion and delve into the details. Our very strong recommendation is to (1) identify as much low-hanging fruit as possible, starting with small automations or predictions rather than re-architecting everything or building Skynet to run the company, and then (2) use existing tools to solve those problems rather than re-inventing the wheel or paying huge SaaS license fees. There is currently so much incredibly powerful open-source technology available that the marginal cost of an AI technology stack is headed toward zero!
Concerns surrounding security are completely natural and well-founded. Addressing them fully takes a longer conversation; we did so in a previous blog post, Avoiding an Algorithmic Apocalypse. It boils down to a set of basic guidelines: quality engineering, parsimony, focusing on human rather than technical goals, proper use of statistics, boxing (keeping unauthorized users OUT while keeping AI systems IN), and incentives and alignment. See the article for more details, but a good rule of thumb is “use common sense, anticipate what could go wrong, and write robust code”. One example of how we have helped clients address security concerns is setting up an open-source LLM to run locally, so that no proprietary data ever leaves their network. That setup, incidentally, costs far less than paying for LLM compute in the cloud.
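To make the local-LLM idea concrete, here is a minimal sketch of how an application inside the network might talk to a self-hosted model over HTTP. The endpoint URL, port, and model name are illustrative assumptions (an Ollama-style server is one common way to host open-source LLMs locally), not a description of any specific client's setup.

```python
import json
import urllib.request

# Assumption: an open-source LLM server running inside the network,
# reachable only at a local address -- prompts never leave the premises.
LOCAL_LLM_URL = "http://localhost:11434/api/generate"  # illustrative

def build_payload(model: str, prompt: str) -> dict:
    """Assemble the request body for the local server."""
    return {"model": model, "prompt": prompt, "stream": False}

def query_local_llm(model: str, prompt: str) -> str:
    """Send a prompt to the in-network LLM and return its response.

    Because the server is hosted locally, any proprietary data in the
    prompt traverses only the internal network, never the public cloud.
    """
    body = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        LOCAL_LLM_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

The same pattern works with any locally hosted model server; only the URL and payload shape would change.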
In terms of data, the subject may seem passé – a concern of the last decade – compared to conversations about cutting-edge AI models. But the reality is that AI models, like any algorithm, are garbage in, garbage out. One of our clients actually has a great slogan: “A good AI strategy is a good data strategy”. Data cleanliness, coverage, accuracy, and availability across often-disparate sources are paramount, and despite enterprise SaaS frameworks that attempt to unify the data stack, we often find that clients’ data are not in good enough shape to feed to AI models. This is completely understandable in the context of large organizations, and probably 60% or more of our daily work involves data processing, engineering, cleaning, wrangling, and interpretation. At the end of the day we get paid for building the model and tying it to a business outcome, but clean data is a prerequisite, so this work is necessary.
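To illustrate the kind of wrangling that precedes any modeling work, here is a small sketch using entirely made-up records: deduplicating entries, normalizing inconsistent casing, and flagging missing values. The column names and data are hypothetical, not drawn from any client.

```python
import csv
import io

# Hypothetical raw records merged from two disparate sources, with a
# duplicate, inconsistent casing, and missing values.
raw = """email,region,spend
ALICE@EXAMPLE.COM,TX,120
alice@example.com,tx,120
bob@example.com,,95
carol@example.com,CA,
"""

def clean(rows):
    """Normalize casing, drop duplicates, and flag missing values."""
    seen, cleaned = set(), []
    for row in rows:
        email = row["email"].strip().lower()
        if email in seen:  # duplicate record -- skip it
            continue
        seen.add(email)
        cleaned.append({
            "email": email,
            # empty region becomes an explicit sentinel value
            "region": row["region"].strip().upper() or "UNKNOWN",
            # missing spend stays None so models don't ingest fake zeros
            "spend": float(row["spend"]) if row["spend"] else None,
        })
    return cleaned

records = clean(csv.DictReader(io.StringIO(raw)))
```

Real engagements involve far messier data and far more steps, but the principle is the same: make every inconsistency explicit before the data ever reaches a model.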
Finally, clients often face management and organizational challenges. For what it’s worth, much of the interest in AI seems to spread either from the very top down or from the very bottom up. More than a few of our conversations have started with “our (board / CEO / CTO / investors / VC firm) has asked us to get AI-enabled”. Meanwhile, younger, more junior employees are sometimes more tech-savvy and open to change than middle management. In short, friction arises when one part of the organization is more enthusiastic about AI efforts than another; the divide may fall across divisions or up and down the managerial hierarchy. We have seen the most success when an organization adopts a well-articulated AI strategy, putting a strong leader in place who can get buy-in for clearly defined projects with visible ROI. Heavily involving junior and mid-level employees and allowing them to upskill (to assuage fears of AI replacing jobs) is crucial, as is involvement from the business side rather than just the technical one. For all its sophistication, an AI project without these elements can become just another IT project that goes off the rails.
Please contact us if you feel like discussing any of the above, or leave a comment below. At Austin Ai we are incredibly excited to see what the rest of 2024 brings!