We Are an AI Company and Are Banning the Use of AI in Certain Situations

Robert Corwin, CEO | Feb 11, 2025

Key Points

  • Despite being obviously pro-AI in general, our company (an AI solutions provider) is banning the use of ChatGPT and other Gen AI tools for certain purposes.
  • Over-reliance on these tools can cause undesirable side effects for our employees, our culture, and our client relationships.
  • As models grow increasingly powerful, their thoughtful and measured application will be as crucial as their raw abilities in ensuring the best outcome for society and humanity.

As an AI solutions provider, one might think we are all in on using ChatGPT and other Gen AI tools in every situation where they work reasonably well.  This couldn’t be more wrong!  The reason has less to do with the considerable technical prowess of the models … and more to do with the side effects of over-reliance on the tools.

Basically, the use of these tools is a double-edged sword.  On one hand, they massively speed up laborious writing and coding tasks of all kinds; we certainly make use of them and would be at a competitive disadvantage if we didn’t.  On the other hand, they produce non-differentiated, impersonal, often uninteresting, and sometimes inaccurate output.

Something that doesn’t seem to occur to people as often as it should: it is now at least somewhat, and often VERY, obvious when something has been written with ChatGPT.  The text looks and feels a very specific way: devoid of spelling or grammatical errors, and devoid of personality.  It provides no differentiation in a world with immense amounts of AI-generated content sloshing around (and we should remember that customers and consumers are not naïve).

We observe that overuse of Gen AI can cause:

  • A marked increase in the impersonality of the output.
  • A marked decrease in the variability of the output, thereby diminishing the diversity of opinions expressed in thought-provoking and compelling ways.
  • A tendency to not fact-check or question the output for accuracy.
    • This can happen for straight text-based output…
    • ...but also in code auto-completers, which are SO powerful now.  This is a really big issue: in programming, even one tiny uncaught bug can cause massive downstream consequences that might not surface for months or years (a short sketch follows this list).
  • Sometimes, deprivation of the self-improvement that would otherwise come from doing the work without the tools.  This can be an issue for less experienced users who might not yet have years of learning under their belt.
  • In extreme cases, deceitful behavior where people try to get Gen AI to do their job while they do something else.  (Ethics aside, this is unlikely to be profitable from an economic perspective, which will be the subject of a future blog).
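
To make the code point concrete, here is a minimal, hypothetical Python sketch (not taken from any real project) of the kind of bug an auto-completer can hand you: the snippet runs, passes a casual test, and hides a classic shared-mutable-default error whose consequences may not surface until much later.

    # Hypothetical auto-completed helper: looks fine at a glance.
    def append_event(event, log=[]):    # BUG: the default list is created once
        log.append(event)               # and silently shared across every call
        return log                      # that omits the `log` argument

    audit_a = append_event("login")     # ['login']
    audit_b = append_event("logout")    # ['login', 'logout'] -- audit_b now
                                        # contains audit_a's data

A reviewer who merely skims the suggestion would likely accept it; only deliberate scrutiny catches the shared state before it corrupts data downstream.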

As a result, we recently adopted a company policy that prohibits using AI of any kind for the following:

  • ANY communication with Clients whatsoever, with zero exceptions.  The Client relationship is so vital to our business that there can simply be no impression of impersonality, indifference, or laziness.
  • Communication with colleagues.  Similar reasoning.  Developing a good relationship with co-workers is crucial, and speaking to them through a chatbot is a hack that will hinder that.
  • The writing of public-facing reports, white papers, messaging or other materials without at least some amount of human scrutiny, interjection, customization and editing.
  • Performing one’s job function in a deceitful way: e.g. abusing a remote work policy by programming a computer to make it appear like you are online when you are not, or to mislead others about how long it takes to complete a task.  This is just dishonesty.
  • The uploading of any company materials, data or property whatsoever to any non-private LLM which may keep or use that information for any purpose.

Is it hypocritical for an AI firm to have this policy?  We don’t think so: advocating for human judgment in the application of AI is far from an anti-AI or hypocritical stance.  It’s a reflection of the fact that, like other revolutionary technologies, AI will ripple through society with a variety of effects.  For those whose priority is human prosperity, it simply seems like common sense to carefully evaluate those effects in the context of human well-being, safety, quality of life, relationships, and personal exchanges founded on trust.  That requires careful consideration of second-order and longer-term effects.

And none of this is to say that we don’t use the tools.  We do for sure!  There are many great reasons to leverage them appropriately:

  • Chat, search and NLP tasks: we put these powerful models to work in our actual projects, delivering chat, enterprise search, NLP and other capabilities for our Clients.
  • Coding: code auto-completers are insanely good now and save huge amounts of time, provided that results are scrutinized carefully.
  • Business analysis and research: our Succinctly AI platform understands how to query data sources of all kinds, retrieve and analyze the most relevant numerical and text data, automate tasks using agents, and create analyses and visualizations, acting as a virtual BI or research analyst, passing results on to human reviewers.
  • Quickly uncovering solutions to technical or other problems: LLMs are as good as or better than internet search for a lot of technical and other questions.
  • Outlines for sales or marketing materials: getting the initial structure of such documents can spark ideas and save time.
  • Filling out long, bureaucratic forms like RFPs or applications: a lot of the text in those is just informational and needn’t be special as long as it’s accurate.

In sum, we should try our best to understand where AI will provide value and efficiency without diluting quality, personality and accuracy.  Careful review of all AI output for inaccuracies, hallucinations, failures in logic, and outright errors is essential.  As models grow increasingly powerful, their thoughtful and measured application will be as crucial as their raw abilities in ensuring the best outcome for society and humanity.
