When AI Frameworks Fail

Robert Corwin, CEO | April 25, 2023

Why Some AI Frameworks Fail

With AI growing at a staggering rate, we examine why some AI frameworks fail and why human involvement and interaction remain essential to the process of implementing AI into society.

(For better or worse, no part of this post was written by ChatGPT, except the poem at the end about how AI shouldn’t forget about humans!).

Now that many of us have witnessed the revolutionary (evolutionary?) abilities of ChatGPT 4, it might seem odd to pen a post suggesting that AI can fail. The trajectory of Artificial Intelligence has turned exponential, and it seems likely that relatively soon, AI will be able to perform almost any task a human can. However, this post is not about the limits of AI performance on human tasks; rather, it is about the process of implementing these technologies into society. That process is, for many businesses, governments, organizations, and individuals, still very much a human one, and it can be fraught. The pitfalls are numerous, ranging from data quality problems of nearly every kind, to finding talent, to internal and external politics. Even the most popular AI/ML frameworks don't allow us to simply push a button and go!

Why Humans Are Still Integral When Using Artificial Intelligence

At least for the time being, AI is not the panacea it is portrayed to be in the media. You can't install PyTorch and click a button to instantly get a computer to compute credit risk, optimize your supply chain, or revamp your website search algorithms. You can do those things with a team of data scientists, machine learning engineers, and good Python developers, and they will endure considerable headaches to ensure that all parts of the AI pipeline are of good quality. Those parts include data collection and cleaning (the foundation upon which all else rests: garbage in == garbage out); proper experimental design (poorly formed or biased models make bad predictions); extremely nuanced decisions about models and their parameters; and productionizing the final result in the cloud (which can be surprisingly difficult). Most importantly, they will need to deeply understand the idiosyncrasies of the organization or business, and those of the people running it.
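To make the pipeline stages above concrete, here is a minimal, hypothetical sketch using scikit-learn and toy synthetic data (all variable names and parameter choices are illustrative, not a recommendation for any real project):

```python
# Hypothetical sketch of the pipeline stages described above, using scikit-learn.
# Real projects involve far more work (and judgment) at every one of these stages.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 1. Data collection and cleaning: toy data with missing values
#    ("garbage in == garbage out" -- here we at least impute the garbage).
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X[rng.random(X.shape) < 0.05] = np.nan  # simulate dirty data

# 2. Proper experimental design: hold out a test set before fitting anything,
#    so the reported score is not biased by training data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# 3. Model and parameter decisions, kept explicit and reproducible in one pipeline.
pipe = Pipeline([
    ("impute", SimpleImputer(strategy="median")),
    ("scale", StandardScaler()),
    ("model", LogisticRegression(C=1.0)),
])
pipe.fit(X_train, y_train)
print(f"held-out accuracy: {pipe.score(X_test, y_test):.2f}")
```

Even this tiny example encodes several deliberate choices (imputation strategy, scaling, holdout split, regularization strength) that no framework can make correctly without a human who understands the data and the problem.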

Currently, there are many commercial “frameworks” that help unify, consolidate, automate, or streamline elements of AI pipelines. Examples include systems that merge disparate data sources of different types, systems that automate the testing of AI models and consolidate the results, data visualization systems, and systems that take common “sub-tasks” in machine learning (like data labeling) and organize the underlying steps into an ostensibly simpler conceptual model and presentation. Most of these frameworks provide a layer of abstraction in the form of a proprietary programming language, query structure, user portal, and/or data store, in exchange for the promise of eventual efficiencies once everyone learns them.

AI Frameworks Cannot Replace Critical Thinking

We don’t have any a priori problems with such frameworks. Especially in big organizations, standardizing where thousands of people look for “ground truth” data, and how they access it, can add enormous value. However, we also strongly feel that such frameworks cannot replace critical thinking and won’t (on their own) make good data science happen in an organization. That may or may not sound obvious, but we see expectations to the contrary more often than you might think. Once all the data and models are unified, we still need to do something with them. Often, additional services (a.k.a. consulting hours, usually sold by the same vendor providing the framework) are needed to “make the framework work” in practice. Frameworks also have the disadvantage of depriving users of the granular, technical (if boring) knowledge of the underlying technologies being abstracted, leaving them less in control if something goes wrong or breaks. With this kind of work, the devil is in the details, and being ignorant of them means that one cannot see the demons!

Savior AI versus the Devil in the Details

There is a broader point worth mentioning. We might extend the commentary on frameworks here to any AI solution, service, or system purporting to be the “easy button,” meaning one push solves all. There are deeper reasons to resist the temptation to believe in a “savior AI” that will fix all problems. From an immediate, practical standpoint, such a thing simply doesn’t currently exist, as discussed above. We also wonder whether such reliance might breed mental weaknesses of our own. As an example, it’s tempting to think that the new job of “prompt engineering” is going to let even completely non-technical people become highly paid technologists (we have had job seekers express this sentiment). Here’s a prediction: the pay rate for that job is going to plummet! Because it’s not that hard. A creative middle schooler can do it. The role may exist and be necessary, but collectively, we should aim for more sophistication than just knowing how to optimally ask a computer questions. Moreover, when the time comes and AI really CAN do everything… it’s very unclear that it would be wise to “hook everything up to it.” That’s a longer discussion, perhaps for another post.

How to Successfully Implement AI Into Society

None of the above is to suggest that we should become Luddites or shun AI (we run an AI services firm, by the way). Our suggestions for implementing even the best AI frameworks into society in an effective, efficient, human-friendly, and safe way include:

  • Inject critical, deep thinking about the problems being solved, paying attention to idiosyncrasies concerning the organization, the business if applicable, the people in it, and the existing technology stack.
  • Never apply something blindly off the shelf. Rather, study the mechanics of how any technology works, at least to the extent possible.
  • Using those understandings, heavily customize tools for the client.
  • Be extra vigilant with regard to proper coding practices (like commenting, good factoring, NOT hacking things together, good deployment practices, and especially TESTING).
  • Think very long and hard about what can go wrong once something is automated.
  • Constrain the resources that all AI systems are allowed to access (use a good OUTGOING firewall policy restricting access to the internet, amongst other things); document and understand exactly what links exist between systems.
  • Require strict user permissions and multiple layers of user confirmation to reduce the risk of “pushing the wrong button.”
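The last two suggestions can be illustrated with a small, hypothetical sketch: an automated action that is only allowed to touch allowlisted resources (the application-level analogue of an outgoing firewall policy) and that refuses to run without multiple explicit sign-offs. The host names, role names, and function are all invented for illustration:

```python
# Hypothetical sketch: constrain what an automated action may reach, and
# require multiple layers of explicit confirmation before it runs.

# Application-level analogue of a restrictive outgoing firewall policy:
# only these (invented) hosts may be contacted by automated actions.
ALLOWED_HOSTS = {"internal-db.example.com"}

def confirmed_action(action_name, target_host, confirmations):
    """Run a privileged automated action only if the target is allowlisted
    and every required, independent confirmation was explicitly given."""
    if target_host not in ALLOWED_HOSTS:
        raise PermissionError(f"{target_host} is not on the allowlist")
    required = {"operator", "reviewer"}  # two independent sign-offs
    missing = required - set(confirmations)
    if missing:
        raise PermissionError(f"missing confirmations: {sorted(missing)}")
    return f"{action_name} executed against {target_host}"

# Succeeds only with both sign-offs and an allowlisted target:
print(confirmed_action("retrain", "internal-db.example.com",
                       {"operator", "reviewer"}))
```

The point is not this particular mechanism but the habit it encodes: automation should fail closed, and “pushing the wrong button” should require several people to be wrong at once.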

Some AI Light-Hearted Fun!

To end, for entertainment, here is a poem written by ChatGPT on how frameworks sometimes miss the mark!

AI frameworks, a marvel of tech
A powerful tool, without a speck
Of doubt, they can crunch and analyze
Data points that human minds can't size

But even AI can sometimes miss
The finer details that are amiss
In the complex web of business needs
Where efficiency and growth intercede

AI frameworks can be precise
And provide insights that are nice
But they may not always meet the mark
For the problems that are stark

For businesses need more than just stats
They need context, and human hats
To guide them through the twists and turns
Of the market, where money burns

AI frameworks may be fast
But they can't always keep up the task
Of meeting business needs that demand
More than just algorithms at hand

So let us use AI with care
And not forget that humans are there
To bridge the gaps that machines may miss
And make sure that our business bliss

Is not just a product of code
But a combination, where we can unload
The full potential of tech and people
To build a future that's equal.

AI is here to stay, and over the coming months and years it will of course become more sophisticated – but for now, it still needs human involvement to ensure that it delivers what's required and that it doesn't fail. When using AI, consider carefully what you need it for and whether it will solve your problem – it likely won't solve every aspect. Think carefully about what your organization and your people need, and never just use off-the-shelf AI blindly. Think about what could go wrong, pay attention to proper coding practices, and remember that testing is absolutely essential. AI should streamline and simplify your processes, but a deep understanding of what you're planning to use, its advantages and disadvantages, and how to navigate potential issues will go a long way toward user success.