Artificial Intelligence has taken the world by storm. Some call it a transformative technology that will change the way we work and live. Others say it will bring about the end of the world as we know it. Meanwhile, nonprofits are trying to understand how to navigate this new frontier.
Until the fall of 2022, when the Microsoft-backed OpenAI introduced a free beta chatbot called ChatGPT, Artificial Intelligence (AI) was mostly the domain of computer scientists, think tanks, and science fiction writers. Then ChatGPT started an AI frenzy.
What is AI?
AI is a technology designed to help automate a huge variety of tasks. A related technology, Machine Learning, teaches computers how to do things by recognizing patterns in data over time. Used together, these technologies let computer scientists reduce the laborious work of explicitly programming every rule.
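To make that concrete, here is a minimal sketch, in Python using the scikit-learn library, of a model learning a pattern from labeled examples rather than being given explicit rules. The sample messages and labels are invented for illustration.

```python
# A toy pattern-recognition example: the model is never told any rules;
# it infers word patterns from a handful of labeled examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Invented training data: short messages labeled by category.
messages = [
    "Thank you for your generous donation",
    "Your gift helps us feed local families",
    "Win a free cruise, click here now",
    "Claim your cash prize before midnight",
]
labels = ["donor", "donor", "spam", "spam"]

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(messages)  # turn text into word counts

model = MultinomialNB()
model.fit(features, labels)                    # "training" on the examples

# The trained model classifies text it has never seen before.
new_message = ["We appreciate your continued support"]
print(model.predict(vectorizer.transform(new_message)))  # expected: ['donor']
```

The point of the sketch is that nobody wrote a rule saying “messages containing ‘your’ are from donors”; the model inferred the association from examples, which is the pattern-recognition idea described above.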
ChatGPT (the GPT stands for generative pre-trained transformer) offered an easier way to do some things faster. It also allowed people already doing questionable things to do them faster and better. Students who had been plagiarizing term papers for generations found they could now do it in minutes with ChatGPT. Artists, musicians, and writers who had been inappropriately “borrowing” copyrighted material found they could do it better using a chatbot, and nefarious people who for decades had created fake news found they could make it more realistic. Good people benefited too, but the bad actors got all the attention.
ChatGPT was just the tip of the AI iceberg. Under the surface is a collection of technologies that includes Machine Learning (ML), Large Language Models (LLMs), and Generative Artificial Intelligence (Gen AI). Together, they make up a powerful toolset that promises both exciting opportunities and threatening possibilities. In some ways, the argument over AI is similar to questions raised in the 1950s about atomic energy: how do you responsibly use a technology that can both power cities and blow them up?
How Does AI Work?
Artificial Intelligence uses predictive models to classify data, recognize patterns, identify trends, and predict outcomes. To make this possible, Machine Learning ingests huge amounts of information to learn how to identify relationships, find patterns, and evaluate results. This is done through a process called “training.” Generative AI models like ChatGPT are “trained” using massive amounts of information taken from libraries of newspapers, magazines, books, music, art, online information, and online databases. Most of this data was used without permission from the original creators, which has caused some ethical and legal challenges for these technologies.
Using the patterns identified in these models, generative AI creates new material – like a sentence or an image. This process potentially gives AI the ability to transform nearly every area of human activity. Some think it can enhance human creativity. Most agree that it has pushed the boundaries of what machines can accomplish.
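As a rough illustration of both ideas, training on text and then generating new text from the learned patterns, here is a toy next-word model in plain Python. Real generative models learn vastly richer statistical patterns from billions of documents, but the train-then-predict loop is conceptually similar; the tiny corpus here is invented.

```python
from collections import Counter, defaultdict

# "Training" corpus: in real systems this is billions of documents.
corpus = "the dog chased the ball and the dog caught it".split()

# Training: tally which word tends to follow which in the corpus.
follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

# Generation: repeatedly predict the most likely next word and append it.
def generate(start: str, steps: int = 5) -> str:
    words = [start]
    for _ in range(steps):
        options = follows.get(words[-1])
        if not options:
            break  # no learned continuation for this word
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # -> "the dog chased the dog chased"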
Scientists at Google hope to go even further by learning how the human brain works. If they can understand how the brain takes in, stores, and uses information, they can apply those lessons to create even more sophisticated AI. This is no easy task. As one research scientist recently said, “If we were to map the whole human brain right now, it might take billions of dollars and hundreds of years.” That’s because the brain is incredibly complex, with billions of neurons and trillions of synapses that work together in myriad ways.
Beyond the Hype
It’s worth remembering that ChatGPT was not the first everyday use of artificial intelligence. Many of us have been using AI every day for years. Every time you ask Siri or Alexa a question, you are interacting with AI.
Earlier versions of AI already help us talk to computers in natural language, search for information on Google, generate translated video captions in real time, have CT scans read instantaneously by computers rather than radiologists, and get predictive weather forecasts. Researchers are working on self-driving cars (and airplanes), AI medical diagnosis, computer security solutions, and internet marketing tools, all powered by more advanced generations of AI.
There are limitations, though. Many of these technologies are years from being ready for everyday use. Reliability and bias are real issues researchers need to solve. Availability of technological components is also a limiting factor. And that’s before we consider the legal and ethical ramifications.
Risks, Pitfalls, and Challenges
The use of Artificial Intelligence is just starting to transform nonprofits, and some experts are urging caution. Stubborn bugs, lagging regulations, and frequent errors can create more problems than they solve.
“There is a risk of diving in too fast because the market is changing so quickly,” says Kevin Barenblat, co-founder of Fast Forward and a longtime software entrepreneur who specializes in working with nonprofit organizations. The goal should not be to adopt AI as quickly as possible, but to use it when and where it fits your organization’s mission.
Organizations that experiment with AI should understand that large language models, like the one that underlies ChatGPT, have a documented tendency to “hallucinate,” or make up false information. In one highly publicized case, a New York lawyer submitted a legal brief written by ChatGPT. He had instructed the program to find cases that supported his position. When ChatGPT could not find relevant cases, it made some up. Needless to say, the lawyer learned an important lesson: careful human intervention and review are always needed when dealing with chatbots.
According to one study by Stanford University, one in every six chatbot projects has a serious error in it. There are several reasons for this. Some hallucinations are caused by inaccurate data; others are the result of incomplete data, data that is already biased, a lack of context, or overly complex training models. The problem may be an unavoidable part of the technology. AI researchers told Scientific American that “AI chatbots will never stop hallucinating.”
The best way to guard against AI hallucinations, and their potential impacts, is to make sure humans review and fact-check any AI-generated material before it reaches the public.
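In practice, that review step can be enforced in software. Below is a minimal sketch of a human-approval gate for AI-generated copy; the draft text and the publishing step are hypothetical placeholders, not a real API.

```python
# Nothing AI-generated is published until a person explicitly approves it.
def human_review_gate(draft: str) -> bool:
    print("--- AI DRAFT: verify every name, number, and citation ---")
    print(draft)
    answer = input("Approve for publication? [y/N] ")
    return answer.strip().lower() == "y"

# Hypothetical AI-written claim that a human must fact-check first.
draft = "Our food bank served 1.2 million meals last year."

if human_review_gate(draft):
    print("Published:", draft)   # stand-in for a real publishing step
else:
    print("Held for revision.")  # edit, re-verify the facts, resubmit
```

The design choice matters more than the code: the default answer is “no,” so AI output is held back unless a person affirmatively signs off.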
How Are Nonprofits Using Artificial Intelligence?
Despite the challenges, many nonprofits are finding ways to use AI to fulfill their missions. According to a survey of 4,600 nonprofits released in March by Google.org, the charitable arm of the company, more than half of nonprofits reported using AI tools daily. Most are using it for time-consuming tasks like drafting thank you notes to donors, writing lengthy grant proposals, and scheduling social media posts.
Others are developing AI tools for more complex tasks. For example, one environmental organization is using AI to monitor deforestation, another to train crisis counselors, and a third to translate emergency information into multiple languages. Google, Microsoft, Amazon, and other major technology companies run dedicated initiatives and grant programs to help nonprofits get started. The Google.org Accelerator, for example, has invested $20 million to help 21 nonprofits with generative AI projects by providing free training, expert consulting, and other free services.
This past April, I attended Amazon’s full-day Imagine conference in Arlington, Virginia, where the company presented examples of its cooperation with nonprofits and encouraged more organizations to sign up for its program of $200,000 grants.
In 2021, a Washington, D.C., organization called the Greater D.C. Diaper Bank used AI to cope with a pandemic-induced surge in demand for diapers. It got help from IBM, which created a machine-learning tool designed to scrape government data and predict the areas with the worst shortages. Similarly, the American Red Cross launched more than twenty AI-powered projects to help it provide disaster support and deploy its resources effectively. The Trevor Project, a nonprofit that provides crisis support to LGBTQ youths, worked with Google.org to build a chatbot that could identify high-risk young people and connect them with support services.
How to Get Started
Afua Bruce, an expert in public-interest technology and author of The Tech That Comes Next: How Change Makers, Technologists, and Philanthropists Can Build an Equitable World, recommends that nonprofits start small. First, identify priorities. Then, look for grants and free resources. Tools like the NTEN AI Readiness Checklist and IBM’s Data and AI Readiness Assessment can help.
Today, AI is making important contributions, but it comes with pitfalls, errors, and ethical problems as well. That is why AI needs human intervention and oversight to exercise its great powers responsibly.
To get support from real humans with deep knowledge of the nonprofit marketing space, contact the experts at Connect360. We offer PSA placement and digital campaign management for nonprofits of all sizes. Reach out today to learn more.
© 2024 Connect360 Multimedia
About The Author

Steven Edelman
Steve Edelman is a Partner and President of Connect360. He is a leading expert on the measurement, valuation, and financial reporting of Public Service Announcements by not-for-profit organizations.
About Connect360
Connect360 is a leading media placement agency driving measurable results for some of Charity Navigator’s highest-ranked nonprofits, well-known associations, government agencies and public relations/marketing firms.