Using Generative AI Every Day at Work? An AI Strategist’s Honest Take

June 17, 2024
I use Generative AI tools because, as a developer, I can’t ignore them. My clients and peers expect me to be at least conversant in AI technologies, if not an expert. Maintaining that expertise means engaging with new tools as they come out, so for the last six months I’ve made a point of using them wherever I could. Here’s how I’ve used them and what I’ve learned.

Generative AI & Professional Writing

The Blank Page Problem: Using GenAI at Work

I would never publish unaltered machine-generated text. First of all, I don’t know exactly what these models were trained on, which raises IP concerns and leaves it uncertain whether I’d even agree with the points in the text.

I don’t love the way ChatGPT and similar tools write. The text they emit makes me angry. That’s great, though, because as soon as I see something there, I know what’s wrong with it. I know what I want to change and what I would write instead.

As such, while I still struggle to get started with a blank page, I no longer have to. I can ask a tool to produce some text to start from, which gets reports, proposals, and other content moving faster. The AI provides a foundation that lets me quickly identify the key points, saving time and boosting productivity. I always throw it all away, but it gets me started, and that’s half of doing anything.

Quality Assurance in Content Creation

Because of those intellectual property (IP) uncertainties, I avoid publishing unaltered AI-generated text, which also keeps all content in my own voice and preserves its originality. That review step doubles as quality assurance: an AI-generated draft highlights areas for improvement, making it easier to refine the final content. In business writing, treating genAI as a junior editor helps improve the quality and clarity of our communications.

Enhancing Message Clarity through AI Summarization

As much as I pride myself on the clarity and directness of my writing, I’ve been told perhaps I shouldn’t. Something I like to do after I’ve written an article is ask a generative tool for a summary of it. If I agree with the summary produced, that’s good. If the AI thinks the point of my article is something different, that’s cause for concern. This often leads me to give more prominence to ideas I thought were central but the AI missed. Used this way, summarizing tools help ensure that the key messages in reports, emails, and presentations come through clearly and accurately to stakeholders.
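A minimal sketch of that check, assuming the OpenAI Python client (v1.x); the model name and the draft.md file are placeholders for whatever you actually use:

```python
# Sketch: ask a model to summarize my own draft, then compare its bullets to the
# points I intended to make. The model name and file path are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("draft.md") as f:
    draft = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any capable chat model works for this
    messages=[
        {"role": "system", "content": "Summarize this article in three bullet points."},
        {"role": "user", "content": draft},
    ],
)

# If these bullets miss something I thought was central, the draft needs another pass.
print(response.choices[0].message.content)
```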

Office Use Cases of Generative AI

Streamlining External Communications: Email and Slack

I will never use generative tools to respond to a colleague or client. I feel like it violates the social contract. There’s an expectation that when I send an email, I write it.

Where I do use generated emails is in dealing with salespeople. I don’t have form letters for every interaction I have. Usually I need to convey that I’m curious about their product but not in a position to purchase it myself. I’ll ask an AI tool to help me generate text for a situation like that, but again, only when someone cold-emails me. This helps me respond to unsolicited emails efficiently, maintaining a professional tone while saving time.

Summarizing & Question Answering

I read a lot for work, including articles, white papers, books, and technical manuals. What I’m reading depends on my workload, but it’s always quite a bit. Generative AI tools can provide a rough summary of a document and answer some rudimentary questions from it.

Now, I never take these things as gospel. GenAI gets things wrong more often than is acceptable. Remember, it’s meant to generate plausible text, which is different from either correct or verbatim text. Even so, it gives me an inkling of what’s likely to be true, which is extremely useful when I search the document for the specific information I need. Quickly extracting essential information from large volumes of text is a real value-add in decision-making and research.
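A minimal sketch of how that looks in practice, again assuming the OpenAI Python client; whitepaper.txt, the question, and the model name are placeholders. Asking the model to quote the sentence it relied on gives me something concrete to search for when I verify the answer against the source:

```python
# Sketch: rudimentary question answering over a document, with a verification hook.
# The file name, question, and model are illustrative only.
from openai import OpenAI

client = OpenAI()

with open("whitepaper.txt") as f:
    document = f.read()

question = "What evaluation metric does the paper use, and on which dataset?"

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Answer only from the provided document. After the answer, "
                    "quote the exact sentence you based it on."},
        {"role": "user", "content": f"Document:\n{document}\n\nQuestion: {question}"},
    ],
)

# Treat the answer as a lead, not a fact: find the quoted sentence in the source
# and confirm it before relying on it.
print(response.choices[0].message.content)
```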

My Love/Hate Relationship With Code Gen

No article on generative AI would be complete without touching on its ability to write source code. So, a few things:

  • The bottleneck to software development has never been typing
  • Bad code is way more expensive than good code
  • Nearly-correct code is harder to detect than clearly wrong code

In my personal experience (with GitHub’s Copilot), code-generation tools are fairly capable, but when asked questions, they generate material falsehoods often enough to be noticeable. For example, I got into an argument with Copilot about switch statements in Python 3. It was convinced they didn’t exist; I knew better.
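For context, Python 3.10 added structural pattern matching with match and case, the closest thing the language has to a switch statement. A quick illustrative example (not the code from that argument):

```python
# Python 3.10+ structural pattern matching; earlier versions raise a SyntaxError here.
def http_status(code: int) -> str:
    match code:
        case 200:
            return "OK"
        case 301 | 302:
            return "Redirect"
        case 404:
            return "Not Found"
        case _:
            return "Unknown"

print(http_status(404))  # -> Not Found
```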

Worse, they don’t always make perfect code. Now, no programmer makes perfect code, but we tend to fail loudly: when we do the wrong thing, we usually write stuff that doesn’t compile. Copilot made things that looked right but had subtle bugs, and as any programmer will tell you, subtle bugs are hard to catch.
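A hypothetical example of the pattern, not actual Copilot output: this reads correctly at a glance and runs without complaint, but the mutable default argument is shared across calls.

```python
# Hypothetical "looks right" bug: the default list is created once and shared across
# calls, so unrelated invocations quietly accumulate each other's tags.
def add_tag(tag: str, tags: list[str] = []) -> list[str]:
    tags.append(tag)
    return tags

print(add_tag("draft"))   # ['draft']
print(add_tag("review"))  # ['draft', 'review']  <- the list persisted between calls
```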

Test case generation is another area where the outputs look good but aren’t. The test cases I’ve had the tool generate don’t test interesting things and don’t provide meaningful coverage. This may sound odd, but I’d rather have no tests than lousy tests. Having no tests signals a problem clearly, while bad tests can hide a problem until it becomes an emergency.
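A hypothetical illustration of the difference, not a real generated suite: the first test exercises the code and passes, but a broken discount formula would pass it too; the second pins down the behavior that actually matters.

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    return round(price * (1 - percent / 100), 2)

class TestDiscount(unittest.TestCase):
    def test_generated_style(self):
        # "Coverage" without meaning: any non-crashing implementation passes this.
        self.assertIsNotNone(apply_discount(100.0, 10))

    def test_meaningful(self):
        # Pins the actual contract: value, rounding, and the zero-discount edge case.
        self.assertEqual(apply_discount(100.0, 10), 90.0)
        self.assertEqual(apply_discount(19.99, 0), 19.99)
        self.assertEqual(apply_discount(10.0, 33), 6.70)

if __name__ == "__main__":
    unittest.main()
```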

My General Stance on Generative AI

I find generative AI techniques somewhat useful. This might be surprising, given my negative bent above. The thing is, I find the enthusiasm for these tools has far outpaced their utility. The only way I can reconcile people’s claims of time saved with my experience is by assuming others blindly accept LLM output.

The tools that exist today are helpful. However, they have to be used judiciously. That means we have to inspect the output of generative techniques with a critical eye. How much diligence is due depends on the use-case, but there’s always some. No one wants to commit bad code, make false statements, or be an also-ran in a business context. If you’re not careful, that’s exactly where generative AI will take you.

On the flip side, no one wants to be a Luddite. The technology that exists is helpful when used responsibly and can enhance productivity and innovation. If you’d like to talk further about the responsible use of AI, generative or otherwise, please reach out to us.

