Navigating the Nonprofit Landscape with AI – George Weiner of Whole Whale

In this Health Nonprofit Digital Marketing episode, we discuss the much-talked-about world of artificial intelligence in the nonprofit sector. Join Spencer Brooks of Brooks Digital as he sits down with George Weiner, the Chief Whaler of Whole Whale, a digital agency at the forefront of leveraging data and technology to amplify the impact of nonprofits. Discover how AI is reshaping how nonprofits communicate, strategize, and enact social change—and what pitfalls to avoid along the way. 

About the guest

George Weiner is the Founder and CEO of Whole Whale, a digital agency that leverages data and tech to increase the impact of nonprofits and for-benefit companies. He is also the co-founder of Power Poetry, the largest teen poetry platform in the U.S. and a safe, creative, free home to over 500,000 poets.

Prior to Whole Whale, George was the CTO of DoSomething.org. George is also the host of Whole Whale’s in-house podcast, Using the Whole Whale, where he has interviewed guests ranging from Seth Godin and Avinash Kaushik to representatives from the Mozilla Foundation, Lyft, and the Environmental Defense Fund.

In nearly a decade of operations, Whole Whale has worked with over 100 nonprofit and social-impact organizations, spent over $6 million in Google Ad Grants dollars, and supported an additional 150,000+ organizations through free online content and trainings.

Full Transcript

00:04

Welcome to Health Nonprofit Digital Marketing, a podcast for nonprofit marketers in the health space. Join us as we discuss how to use the web to drive awareness, engagement, and action for health causes. This podcast is part of the thought leadership of Brooks Digital, the web agency for health nonprofits. Now, here’s your host, Spencer Brooks.

Spencer Brooks 00:26

Hello, and welcome back to another episode of Health Nonprofit Digital Marketing. My name is Spencer, and today I’m joined by George Weiner. George is the founder and CEO of Whole Whale, a digital agency that leverages data and tech to increase the impact of nonprofits and for-benefit companies. George is also the host of Whole Whale’s in-house podcast, Using the Whole Whale, which you should definitely check out if you’re not already a listener. So, to set up today’s conversation: we’re talking about AI, which is a hugely requested topic; people are asking me about this a lot. I actually heard George give a presentation on AI for members of the nonprofit trust network, and I immediately knew I had to have him on the show to talk about AI. So that’s why he’s here today. He’s also a really cool guy. So, George, thanks so much, first of all, for joining me today on the podcast. Maybe you could just start us out by talking a little bit more about Whole Whale, and how you got into this journey of learning about AI.

George Weiner 01:28

Yeah, thanks for having me on, and I love being a part of the nonprofit network with you all and sharing what we get there. We end up referring a lot of nonprofits to that network to find talented people to get things like marketing work done. Whole Whale has been around for over a decade; we’ve worked with hundreds of nonprofits. And you know, we really focus on trying to leverage data and tech to increase the impact of organizations. We really want the sector to go from zero to one around leveraging new technology. The whole point of Whole Whale, the name, is based on this idea that in the late 1850s, when the Nantucket whalers were out there killing whales for lamp oil and leaving the rest, in a time of abundance, we tend to be a touch wasteful. And right now we are absolutely in a time of abundance with regard to access to tools. More practically: we were both around when the internet, websites, that type of hey-we-gotta-be-online strategy came around, and frankly, the first thought of many nonprofits, and sadly still some, is, oh cool, a new place to put a donate button.

Spencer Brooks 02:36

Yeah.

George Weiner 02:37

And that’s only a part of it. That’s using the lamp oil. That’s not looking at the Whole Whale. So, to tie that metaphor together, you know, that’s why I’m interested in it. Because I love the idea of handing the largest leverage in history to people working on the hardest, most important causes.

Spencer Brooks 02:53

Yeah, I think that’s a great setup for AI, right? At least in my mind, there’s a tremendous opportunity here to put one of the most powerful technologies, certainly the one with the greatest potential in a long time, to work actually increasing the capacity of nonprofits. And those are the folks that, if I want anything to go to, it’s going to be them. So do you think we could maybe start by defining AI? I feel like it’s one of these buzzwords, like Internet of Things or blockchain, where different people are talking about different things, like describing an oblong blur; some people might feel that way. So could you give me your best shot at defining what we’re talking about, AI-wise, at least for the context of this conversation, tools for nonprofits?

George Weiner 03:47

For the purpose of this conversation, let’s just focus on the fact that there are many types of AIs that live under the overall broad umbrella of artificial intelligence. If you think of it like an onion, we’re peeling it: as we go into the center layers, we have things like machine learning, we have deep learning, we have reinforcement learning, we’ve got narrow, we’ve got broad. And finally, in the section of the onion that we’ll be talking about, is generative AI: things that generate information based on how they’ve been trained. I’m not going to get into how it’s been trained or those pieces. I’ll also simply take a step back and note that nobody knows, and can explain, ultimately, how a neural net is taking this information, packaging it into weights, and then allowing us to chat in a probabilistic model. The creators of this thing can’t fully explain it. So let that terrify you, or let it not, but also keep in mind that you’re listening to us right now, hearing our voices, thanks to electricity. Do you need to know how electricity works? And by the way, go down that rabbit hole: we don’t really fully know how electricity works either. It’s one of those questions. So, in broad strokes, we’re talking about generative AI, and ultimately you should know maybe a touch more than I can hand you in this particular narrative.

Spencer Brooks 05:04

Yeah, that makes sense. And as an example, ChatGPT would be the name that everyone has heard; that would be an example of generative AI, right? That sort of thing.

George Weiner 05:14

Thank you, yes. More practically, you’re using ChatGPT to generate these things, but you’re going to see the distance between these generative AIs and the applications you use shrink a lot more. What does that mean? That means you’re going to be on LinkedIn, and suddenly you’re going to see this little blurb say, rewrite with AI. You’re going to be in email, and it’s not just going to be the little autocomplete; it’s going to be rewrite this with AI, inside of Microsoft tools, inside of any text-based interface, with other interfaces coming. It’s just going to be popping up. So, it’s coming.

Spencer Brooks 05:47

Yeah, yeah, that’s right. You see it on LinkedIn; you see it if you’re in, you know, Gmail, where it’s autocompleting with AI. Like Grammarly, that sort of thing, which has been around forever; I mean, that’s much simpler, right? But I think perhaps part of your point is that you’re already starting to see it, and maybe, like electricity, it’s actually been around and might be used in places where you just haven’t identified it as such.

George Weiner 06:15

Yes, if you did a Google search, you’ve been using AI.

Spencer Brooks 06:18

And it’s been very helpful. So, George, I’m curious how you see the landscape of nonprofit digital marketing changing and evolving with AI. I know this is kind of a crystal ball here, right? But if you had to make some predictions, or even just share your own opinions, on how you feel this is going to change the nonprofit marketing space over the next couple of years, what are your thoughts?

George Weiner 06:44

I think the volume and quality are both going to go up, and the demands on creatives are going to shift. In broad strokes, that is the fairly safe crystal ball prediction. I think the usual trend of the nonprofit industry lagging behind for-profit industry motives means the sector is going to be a touch slower and more cautious, and there’s good associated with that, a touch slower in adopting these pieces. But the volume of content being put out is drastically increasing. Whenever I look at trends in terms of the massive usage, where you have 100 million weekly active users on one AI tool alone, I think bigger is different. We’re going to see a lot more volume being created and expected, and quality shifting. So if you’re, let’s say, trying to keep up on a bicycle in an F1 racing competition, where people have racecars, you just won’t be able to keep up.

Spencer Brooks 08:09

So, let’s talk about volume for a second, right? Because when I think about volume increasing, I’m thinking about more noise; it’s harder to stand out. Do you think the answer in this case is that everyone’s got to keep up with this sort of arms race, that we’ve got to put out more and more and more as a nonprofit? Or do you think it’s going to lead to other opportunities? Instead of putting out more and more stuff, do you think there’s going to be some sort of shift, a cap that gets hit, that says, hey, listen, producing more content and more volume is no longer meeting our goals? Or is that ceiling still quite far away?

George Weiner 08:53

I like the question. And you know, as a strategy question: I don’t know the organizations I’m talking to right now, I don’t know the size of your organization and your current capacity. I know that this will increase capacity, but strategy will still matter quite a bit. As you’re listening to this and asking which direction do I go, you have two different directions you can tack: toward quantity, or toward quality. I would warn you, if you tack toward quantity, meaning volume of output, and you don’t use these tools, you’re going to get run over like a steamroller; you have no hope. Now, if you tack toward quality, that’s an interesting conversation as well. Both have merits. So, for instance, one tactic, because I feel like we’re very high-minded right now and you’re like, what about tomorrow, what are we doing: I know that a lot of health nonprofits have these long explainers from doctors, from experts, talking about your disease, your approach, your immunotherapies. So here’s a tactic that is lightweight and high output. You can take one of those long-form 45-minute webinars or conference talks and put it into a tool called Opus Pro. This thing will automatically chop out the key highlighted points, with AI analyzing that content, into short 30-to-60-second clips that you can then put on Instagram, TikTok, and YouTube Shorts, and it packages the captions and everything for you. That’s a way of reusing what you already have for efficient output, in order to increase the amount of online attention, which is the game we’re playing, that you’re getting for your cause. So, there’s a practical example.

Spencer Brooks 10:40

Yeah, yeah, I think that’s a really great, it is a great example. Because I think, yeah, we have been a little high minded here. And I mean, I love talking about the high-minded stuff, right?

George Weiner 10:49

I love it. But people tune out on me all the time, so I’m like, I gotta throw in a useful nugget.

Spencer Brooks 10:54

Yeah, here’s the useful nugget, right?

George Weiner 10:56

All the useful nuggets come at the end of this podcast, though. So how about that?

Spencer Brooks 10:59

Yeah, that’s right. And we’re gonna string you along for the next 20 minutes while we talk about theory and, you know, machine learning or whatever. No, I think that’s super helpful. And I know that for most people in the nonprofit sector, especially in the marketing/communications world, no one is sitting around going, you know what, I have too much time on my hands and I’m just looking for things to do. So to hear that AI is going to enable more people to generate more content, which means, okay, now I’ve got to do more to keep up, can be a really terrifying thing. So I think, on one hand, there’s some good news, right? I do have some more help. But on the other hand, I think you’ve got to be aware of what you’re competing with, as you pointed out, George. One thing I did want to touch on before we get super deep into all this stuff is the question of ethics, or maybe risk. I know this is one thing that may prevent people from diving into the world of AI, or fully embracing this conversation, if they still have questions about, hey, I don’t know if it’s safe to use this, I don’t know if I fully understand the risks. This is based on data, right? A giant data source, the entire internet scraped. What are the ethics of using that? How do we handle sensitive things like health data, all that kind of stuff? So, can you talk me through your thoughts on how to use AI in a way that is ethical, and maybe some of the risks that nonprofits should be aware of and try to sidestep as they’re considering using AI in their operations?

George Weiner 12:47

Yeah, there was actually just an interesting case: Sarah Silverman pushing back on these platforms, because Llama 2, Meta’s current AI, actually ingested some of her book as part of its massive billion-parameter dataset. And it just came out that the court said this is not infringing on her copyright, because this LLM, large language model, is not going to output the exact, verbatim version of that book. It can mimic the style, and it can put pieces out there, but it is not infringing on the copyright. Other things around copyright, if we’re just talking about raw, I-don’t-want-to-get-sued risk: whether or not you choose to believe it, Sam Altman over at OpenAI, at the last developer conference, literally announced that they have a copyright protection; if something you have created gets attacked for copyright infringement, they are actually going to have your back. Now, talk to a lawyer, results may vary, but that is a pretty confident statement to be putting out. Images, I would still say you’ve got to be careful with, because I can trick DALL-E, which is the leading image-generating AI, into creating Spider-Man to promote my brand. If you use Spider-Man, you’re going to get sued by Disney, even though at the same time the AI image rights say, sure, you’re allowed commercial use. So, you tell me. I think you’ve still got to use good old common sense and say, look, if I asked an artist to draw Mickey Mouse for me, even though I had the artist draw it, or let’s say I had Adobe draw it, I don’t own it. And if I ask an AI to do it, I don’t own it, if logically there’s a copyright on it. So, you still gotta think.

Spencer Brooks 14:41

100%, yeah. And I know this is one of those things, and we’ll probably talk about this later, but the human component of driving this, and your responsibility, I think, to not just put it in the computer and trust that everything that comes out is not going to be a copyright infringement or whatever. Yeah, that’s a valid point.

George Weiner 15:01

But I do want to put a finger on something really quickly. When you ask a large language model, like OpenAI’s or Anthropic’s, which by the way is HIPAA compliant in its policies, when you ask it to create content, to create text around something, each one of those words is being probabilistically generated. It is not running around the internet copying and pasting from things it has ingested; that material has gone through a giant washing machine, come out, and it is not an exact copy. Now, that is probably a relief, saying, oh, this is not plagiarized material. However, keep in mind it can’t separate fact and fiction. So it does what they call hallucinate, a term which bothers some people because it anthropomorphizes the AI. It lies one out of five times; I just don’t know which time, when you’re talking about facts. So, if I asked for the history of immunotherapy research in a list, it would give me something that was very confident, very thorough, and 10% wrong, and I just don’t know which 10%. So put that in the back of your mind if your team is running around using this without any oversight or policy and assuming what you get back is accurate.
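That “10% wrong, but I don’t know which 10%” problem suggests a lightweight guardrail: diff the model’s output against a vetted fact sheet before anything ships. A minimal Python sketch, with purely illustrative facts and a naive substring match; a real workflow still needs the human review George describes:

```python
def unverified_claims(generated_lines, trusted_facts):
    """Return generated lines with no support in a vetted fact sheet.

    Matching here is naive substring overlap, so this only flags
    candidates for review; it is not a fact-checker on its own.
    """
    lowered = [f.lower() for f in trusted_facts]
    flagged = []
    for line in generated_lines:
        claim = line.lower().strip("-• ").strip()
        if not any(claim in fact or fact in claim for fact in lowered):
            flagged.append(line)
    return flagged

# Illustrative fact sheet and a model draft containing one invented claim.
facts = ["checkpoint inhibitors were approved in 2011"]
draft = ["- checkpoint inhibitors were approved in 2011",
         "- the first immunotherapy trial ran in 1896"]
assert unverified_claims(draft, facts) == ["- the first immunotherapy trial ran in 1896"]
```

Anything this flags goes to a subject-matter expert, and anything it passes still gets a human read, since substring overlap is a weak proxy for factual support.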

Spencer Brooks 16:19

Yeah, that’s hugely, hugely helpful, right? Because I think it’s mostly those errors of omission: you just don’t know, and then boom, it comes back to bite you. It also reminds me of a story, and forgive me, I don’t have the exact name of the organization in my notes, where they had a chat AI answering what was, like, a crisis line or something like that, where they had actually used AI to interact with and provide some sort of health advice to a human being, and that went horribly, horribly wrong. That particular story seemed to have made the rounds, especially in the nonprofit community, and enough people mentioned it to me that it felt like people got scared a little bit. So would you mind speaking to that particular story a little bit, and, you know, examples of how you don’t just unleash AI on a human being or members of your constituency without some sort of oversight?

George Weiner 17:20

So, this may be the most important part of this podcast. The following, human in the loop, must be an absolute rule for the implementation of LLMs in crisis-adjacent youth health conversations, with constituents, with stakeholders. What LLMs, large language models, any of them, produce must be a first, not final, draft. If you take anything away from this, make that your policy. If it is one sentence, make it: first, not final, draft; absolute human in the loop for whatever we do. A human must review this before it goes into so much as a tweet, if that’s even what they’re called still. What you’re referring to is the sad story of the National Eating Disorders Association. They were implementing a chat, which they thought was a logically built chat interface with a tiered triage system for handling results, but, and I’m going to quote directly from their messaging, unbeknownst to them, the technical vendor, Cass.ai, C-A-S-S dot ai, sold them something that had part of its answers dependent on an LLM, a large language model. And Cass.ai, formerly X2AI, they may have changed their name again since, explained that only 10 to 25 messages out of 28,000 were off, and called it unacceptable that any person was exposed to content on weight loss or dieting. So that means it only errored, according to them, 25 times out of 28,000 messages. Now, that error rate, let’s say if I was talking about an AI driving your kids to school on a school bus, is unacceptable. If you’re like, yeah, I’ve only violently crashed one mile out of 28,000: it’s unacceptable. So what you need to have in the back of your mind is that anytime anyone sells you, I don’t care how many PhDs are on the advisory board, anytime you hear the words LLM and someone says the words zero hallucination or perfect response rate, run, don’t walk, away. Check your contracts and check anything that you are building. It hurts my heart, because my background is, I was the CTO of DoSomething.org, out of which our texting program then built Crisis Text Line, which went on to serve many, many thousands of young people in crisis and conversation. I take this stuff very seriously, and I think companies like Cass.ai, formerly X2AI, should pay more than they did.

Spencer Brooks 20:01

Yeah, I think that’s so hard with something that’s moving as fast as AI is right now; I mean, the advances have been quite extraordinary. And that’s always part of the price and the risk, right? You are adopting a new technology, and if you’re not smart about doing that, or honestly, in the case of the National Eating Disorders Association, maybe they thought they had all their i’s dotted and their t’s crossed, but maybe the vendor wasn’t being as forthcoming as they needed to be. I think it’s an unfortunate lesson learned, and at least now that’s known, and other people listening to this can help avoid it. I wanted to go back to what you were talking about with human in the loop on AI. Related to this, one of the questions I think listeners are going to ask is: how is my job going to change? In the very beginning of this episode, you mentioned there are going to be different requirements of creatives, and in some cases that may be people in communications departments at nonprofits. If you’re a communications department of one, then maybe part of your job is writing a bunch of press releases and using Canva to create all these images for your social media posts, all these things, right? So with AI, how do you think people in communications roles are going to need to shift their idea of what their day-to-day job looks like? And how can they use AI, with this idea of human in the loop, to still function in their role but be able to take advantage of AI?

George Weiner 21:43

What I think you should be watching for is that you’re going to have an LLM up alongside your day-to-day workspace, and it’s going to be a partner in the same way that, right now, you know, I had to mute Slack before we started, because Slack is always on, your Zoom is always on, your email is always on. You’re going to have an always-on large language model that is helping you with your work. What I’m pushing our team at Whole Whale to do is not just be a passive passenger on the AI train, meaning you sit there as all those little interfaces appear; remember, the distance between AI and the tool you use is closing. I don’t want you to be a passive passenger. I actually expect that when you’re doing a task, you have this sixth sense: I feel like AI could be helping me do this at better quality, and faster. And practically, let’s go back to a practical application. One of the things that I love for communications teams to start doing is taking that large PDF report from XYZ health organization and having it summarized into key salient points. AIs are actually wonderful at summarizing existing information. Now, you can use tools like Anthropic.com to upload that, with paid-for accounts; I strongly recommend paying for the tools you use, because we don’t need to learn again that if you don’t pay for it, you are the product. So that means GPT Plus, paying for that, at least for a shared account. And you can upload that and then get a summary of it. The next level beyond that is actually building your own GPTs inside of OpenAI, where you customize a chat that knows your organization, can pull from relevant information, and gives you something with reduced, not removed, hallucination: reduced, because it’s referring to a dataset that you have. So a practical use case for that is you putting in your communications overview, let’s say of how we talk about our brand and our mission. And, by the way, here is a whole context about the HIV PrEP treatment that we support and hold as fact that you should pull from. Now that’s an interesting application for your team to then go ahead and use.
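The custom-GPT setup George describes boils down to prompt architecture: packing the organization’s voice and vetted facts into the instructions the model sees on every request. A hypothetical sketch of that assembly step, where the organization, facts, and field names are made up for illustration; the resulting string would go in a GPT’s instructions field or a chat API’s system message:

```python
def build_system_prompt(org_name, voice_notes, facts, task):
    """Assemble the instruction block a custom GPT would be configured with.

    Grounding the model in org-specific facts reduces (not removes)
    hallucination, because answers are steered toward the supplied text.
    """
    fact_block = "\n".join(f"- {f}" for f in facts)
    return (
        f"You are a communications assistant for {org_name}.\n"
        f"Voice and tone: {voice_notes}\n"
        "Treat the following as ground truth and do not contradict it:\n"
        f"{fact_block}\n"
        f"Task: {task}\n"
        "Your output is a FIRST draft; a human will review it before use."
    )

# Hypothetical organization and facts, for illustration only.
prompt = build_system_prompt(
    org_name="Example Health Org",
    voice_notes="warm, plain-language, no jargon",
    facts=["We support HIV PrEP access programs.",
           "Our helpline is staffed by trained humans, not AI."],
    task="Summarize our annual report for a donor email.",
)
assert "ground truth" in prompt and "FIRST draft" in prompt
```

Because every teammate’s request goes through the same assembled instructions, the outputs stay consistent across the team instead of depending on whoever typed the prompt that day.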

Spencer Brooks 24:08

Yeah, yeah, that’s a good point. I’ll home in on the idea of customizing your own GPT for a second, because I know that if you polled an average organization, there are probably folks in there using ChatGPT or Anthropic or some sort of generative text AI to do the day-to-day drafting of whatever copy they want, right? There are people inside organizations using that, and I would imagine nine times out of ten they’re just going directly into this GPT and saying, hey, can you write a press release about this, as an example, and getting back a very generic answer. And so it reminds me, you talked about this idea of a gray jacket problem, I think is what it was, right? Let’s talk about that for a second, because I know that if you’re using it that way, you’re going to run into this problem. So would you mind talking about that?

George Weiner 25:11

I get a chuckle, because I try to use metaphors and stories to make things a bit more salient. So the gray jacket problem is not officially known by many, but known well by few. It’s something that happened to me. I was at an AI-related talk, actually, and I was going on stage with some other folks, and one of the other people coming up was another guy named George. I get on stage, and we’re both wearing the same gray sport jacket and blue pants. And I was like, you’ve got to be kidding me, two guys named George wearing the same thing. I’m hugely embarrassed, actually. I’m sitting up there and I’m like, oh my gosh, you know what happened here? We were both walking through the store, shopping around in the generic jacket store, and we picked out a delightful gray jacket, like, oh, what a unique little snowflake I am, looking at this in the context of me. And I put this jacket on, and frankly, I love that jacket, so I’m gonna wear it. Other George, he wore it better, though. What is this story telling me? What I see, when you wander onto, let’s say, OpenAI without paying and use a generic GPT-3.5 model, a basic model, is that you’re getting an off-the-shelf bit of reply, and it is generic. Why is that bad? Well, right now, the general pattern recognition of your audience, my audience, the entire population, is building up a new muscle, and that muscle is detecting AI-generated stuff. There’s an ability to detect when something’s been written by AI; it’s not yet detectable immediately, but we’re building up this pattern recognition. And if you don’t believe me, take a look online at stock photography versus a real photograph from your phone; you can tell them apart inside of a few milliseconds. That wasn’t true originally, but we built up that muscle. Humans are phenomenal pattern recognition monsters.

And so the risk of the gray jacket is that your team, without any guidance or oversight, is among the 100 million people using GPTs every single month. Even if you don’t think they are, they are; every single college student coming out right now uses this. So if you think they aren’t, they are. And if you aren’t providing the training and capacity building, you’re pumping out gray jackets; you’re showing up on that stage like I did, wearing the same thing as the person next to you. Maybe it seemed unique to you while you were in the store looking at that jacket, but in the marketplace you look generic. What’s more, you may even be hurting your brand by putting this generic output out there. Now, you get around this by being deliberate about your prompt engineering and prompt architecture, about how you build these tools for your purpose, and by treating it as a first, not final, draft.

Spencer Brooks 25:14

Yeah, and I think getting to what you’re talking about, which is actually customizing the GPT. And this is how I’ll try to understand and explain it in layman’s terms, and then you can correct me where I’m wrong in all this. It’s being able to customize prompts to educate this GPT, this AI, and say, hey, you’re writing about this particular topic, and this is the tone of voice that we have; here are some relevant examples of data and information; here’s what we believe; here’s more about the organization. And you’re able to really prime it in a way where it has context about you and your organization, and therefore get a less generic, and also more consistent, response from that AI, especially as it’s being used across multiple team members. Is that right? How close am I with that description of customizing your own GPT?

George Weiner 29:03

I think you nailed it. I was talking to someone on our team today; I like to do these sort of co-working sessions where we build GPTs together. And you know, we keep using the word GPT. It’s an unfortunate name. Think of it as the app store for AIs, but you can also create these little mini apps that you can then interface with in a text- and image-based way, and soon, probably, speech-to-text. You can upload documents into it, so it’s a very amazing little tool, and you can also build in ways for it to access other sources of information and make API requests, which gets beyond our scope. Let me pull back and say: when you create your GPT, your little mini AI app, think about how the training you did correlates to when you were first hired at your current organization. When you were first hired and someone asked you to create a report or an annual something about the organization, like, you weren’t very good. Now imagine you could just press fast-forward, with all of the context and training and guidance and process and outlines and best practices that you now know. Who would you rather give the next assignment to: somebody who has that training, or somebody starting from day one?

Spencer Brooks 30:14

Yeah, I mean, that’s an excellent point, right? And I think that, yeah, it’s not only increasing quality, but it’s also preventing generic content.

George Weiner 30:22

Yeah,

Spencer Brooks 30:22

Man, George, I have so many questions to ask you, and we’re running out of time. So, quickly, I don’t want to delay some of these nuggets any longer; I want to ask you about some of the practical ways that you at Whole Whale, or some of your nonprofit clients, have been using AI tools to help out with marketing and communications. Do you have any examples you want to share?

George Weiner 30:47

Yeah, so we have actually built a custom tool, CauseWriter.ai, that builds out these purpose-built solutions using the tactics I’ve been talking about. It’s all about understanding the people and the process: what are you doing, what is the data, what is the output you’re looking for, and then training that sort of employee, so it’s not a day-one person but a day-100 type person who understands the fundamentals of your organization and what you’re trying to do. So what are we actually building? In the land of nonprofits, I don’t know if you’ve noticed, but we have to write a lot of proposals and responses to RFPs. There’s an entire way of building out applications that will ingest, let’s say, the RFP or the grant information, and then know your outline style, fundamental information, and programs, and maybe have ingested your annual report and know some of your numbers, and can fire off a pretty decent first draft for you. That’s an interesting one. I come back to PDF summarization. Nonprofits love PDFs like the deserts love the rain. The larger, the better, and AIs are phenomenal at creating abstracts. So if you’re sitting on a site that has a bunch of these PDFs, you know, that section of the site that we kind of know exists but don’t want to make eye contact with, this would let you immediately generate abstracts and key takeaways that could turn into pages that help people find that information a lot faster.
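[Editor's note: a common pattern behind the PDF summarization George mentions is to split a long report into chunks that fit a model's context window, summarize each, then summarize the summaries. A minimal sketch of the chunking step, with the model call left out:]

```python
# Map-reduce summarization starts with chunking: split a long report
# on paragraph boundaries so each piece fits the model's context window.
# A real pipeline would send each chunk to a summarization model, then
# summarize the combined chunk summaries.

def chunk_text(text, max_chars=2000):
    """Split text on paragraph boundaries into chunks of at most
    max_chars (assuming no single paragraph exceeds max_chars)."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = current + "\n\n" + para if current else para
    if current:
        chunks.append(current)
    return chunks


# A stand-in for a 200-paragraph annual report.
report = "\n\n".join(
    "Paragraph %d of a long annual report." % i for i in range(200)
)
chunks = chunk_text(report, max_chars=500)
print(len(chunks) > 1)   # True: the report no longer fits in one piece
```

Splitting on paragraph boundaries, rather than at a fixed character offset, keeps each chunk coherent enough for the model to summarize sensibly.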
Another use case, going back to the PDFs: you could take that PDF, get that summary, then actually ask it to create a video transcript or narrative that could be moved to a video format as an overview. You could take that script, go to a tool like D-ID.com, put that in, and have an AI avatar speak those words into a video that you can then overlay with a walkthrough of the report, which gives you another type of media. Or, another version: you could use one of the AI services that now does automated lip syncing to different languages. Let’s say you have your CEO talking about a given topic, and you want to translate it into French, Mandarin, Spanish. Now, keep in mind, for those of you who know translation, you’re going to want to have somebody review that, especially if it’s sensitive information. But those tools are on the rise and are pretty capable. I mean, we could go on, but it’s case after case of asking: what are you doing? What are you writing? Here’s how we can bring AI to create that first, not final, draft.

Spencer Brooks 33:42

Yeah, yeah, I think those kinds of examples are super helpful, because half the battle sometimes is that folks don’t have an idea of how you might actually use AI to, say, repurpose content, which is one of the things you described, or summarize a massive document: things that would otherwise be time-sucking, repetitive manual tasks, strung together with different AI tools. I think those examples alone are helpful to be able to say, oh, okay, I can see how putting these tools together and using them in this way would practically help me out.

George Weiner 34:18

The number one tool we end up building is actually for newsletter writing. Because it’s the engine that makes the world go round.

Spencer Brooks 34:25

Yeah. How would you recommend folks begin to explore the breadth of tools out there, then? Because I think the other half of the battle is, okay, maybe someone who’s listening to this didn’t even realize half of these things were possible. And now there’s a whole realm of tools. So if someone doesn’t even know the possibilities, how might they explore just getting started with the breadth of what’s out there?

George Weiner 34:54

Yeah, well, we have some free courses that give you that type of AI supermarket overview at causewriter.ai/courses, which you can browse and take a look at. I would also caution that 95% of the tools currently in the market are not going to make it. Literally, the updates by OpenAI in that little GPT store probably knocked out a third of the companies thinking that chat-with-PDF was their unique thing, or whose entire company is based around one literal prompt that they have behind the chat interface. Not going to make it. So I would actually not try to boil the ocean. You can waste a lot of time; I have wasted a lot of time running around tinkering with all of these things. Instead, think about the use case inside of the organization that you most want to pull the thread on, and solve for one. Then move from there. I think you need an internal champion of AI, automation, and efficiency to push it forward on a sort of learning agenda. I also recommend that organizations begin to build a prompt library. A prompt library is literally: what are the prompts we’re all using? What is the training that we are shoving into these AIs to start working on the tasks that we solve for?
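[Editor's note: a prompt library doesn't need special tooling to start. One minimal form is a set of named templates with placeholders the team fills in; the template names and wording below are invented for illustration.]

```python
# A starter prompt library: named templates with ${placeholders}.
# Python's string.Template keeps it simple and fails loudly when a
# required placeholder is missing.

import string

PROMPT_LIBRARY = {
    "newsletter_intro": (
        "You are our newsletter writer. Audience: ${audience}. "
        "Write a 3-sentence intro about ${topic} in our warm, "
        "plain-language voice."
    ),
    "pdf_abstract": (
        "Summarize the following report section in ${length} bullet "
        "points for a general audience:\n${section_text}"
    ),
}


def render_prompt(name, **values):
    """Fill a named template; raises KeyError if a placeholder is missing."""
    return string.Template(PROMPT_LIBRARY[name]).substitute(values)


p = render_prompt(
    "newsletter_intro",
    audience="health nonprofit donors",
    topic="our new peer support program",
)
print(p)
```

Because every team member renders from the same templates, the "training" baked into each prompt stays consistent across the organization, which is exactly the consistency problem a prompt library is meant to solve.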

Spencer Brooks 36:17

Yeah, I mean, on the topic of leadership, one of the questions I wanted to make sure to ask you was whether you have any other thoughts or advice on how CEOs or people in leadership positions can actually get started on this work. I know you’ve mentioned a couple of things: the internal champions, building out those prompt libraries. Do you have any other thoughts on how folks can get started, or traps to avoid?

George Weiner 36:39

I would say, in this case, not having an internal policy, or not acknowledging that your staff is currently using these tools, reminds me of how, a decade ago, we said, all right, no using social media at work, forgetting that everyone has a phone. We’re all using this in some way; how do we be intentional about it and have that conversation? I think leaders are very important in times of change and opportunity. And I think we’re now realizing at the executive level that this isn’t the kind of bubble where everything has to go on the blockchain. This is being used practically, and you can see the change; it’s both hype and also a new reality. You need to start with the practical use cases first, where you want to employ these tools. You want to start sourcing, acknowledging, and giving space to how members of your team are already using this, and start to aggregate that information and make it part of a drumbeat, I’d say, including the creation of a policy that reflects how you intend to use it.

Spencer Brooks 36:46

Yeah, makes a lot of sense. Thank you for that, George. As we’re wrapping up here, getting close to time: what other questions have I not asked you? Is there anything where you’re thinking, man, I really wish you would ask me this question, or I really need to share this? What do you have to share with folks that I haven’t already asked you about?

George Weiner 38:11

Well, I’m curious, from your perspective: what has been your experience of the hype versus fear versus excitement? What are you seeing when you talk to your clients in your field here?

Spencer Brooks 38:23

I mean, I think it depends on the person. Some people hear the story of, like, the National Eating Disorders Association, and they’re just really afraid; they’re like, this is evil, or, I don’t know if I can trust this. Other people are confused. Very few, I feel like, are what I would describe as excited. It seems to be this position where folks know that it’s coming, and they know that it’s going to impact their work, but they don’t know how, and they’ve heard a few scary stories, and they know everyone is talking about it. So I would personally like to see people get a little bit more excited about it, because I really do think there are some great opportunities, just like the advent of the spreadsheet. Admittedly, there’s a little bit less of an ethical conundrum with that. But if you put that aside for a second, there is this sort of leap forward in technology, whether it’s word processors or that same kind of idea, where if you can adopt it, it’s like, oh my word, it does all these calculations for me, I don’t have to. And that’s very much what I’m hoping for folks to see. But right now, there aren’t a lot of practical resources and tools for people who aren’t like I think we are, which is just naturally technology curious. We work in technology on a day-to-day basis; I’m just sort of wired, as I would imagine you probably are, to go in and figure stuff out and be like, oh, okay, this is how these pieces fit together, this is how we could use it, and I just created something. I know that most people sitting in marketing and communications seats aren’t thinking that way, or don’t really care to be like that, and I think that’s okay.

But I just think that the impetus is going to be on people like us to be able to spell out: here’s actually how you would use this, and here’s how you avoid the pitfalls. And I think when that’s in place, there will be a clearer path for excitement and responsible, educated use of this. So, anyway, that answers your question and more.

George Weiner 40:24

One thing I’ll bring up is that I see a lot of correlation between the amount of fear somebody has and their willingness to learn and explore beyond a basic prompt. And that manifests as follows. I’ll hear someone say, I totally asked GPT to write this email for me, and it was so generic and terrible; this thing is not ready. And what they did was the following: “Please write me an email for sales.” And it came back with a gray jacket, and they’re like, this is an ugly gray jacket; see, it’s not ready. Now, clearly, you see that there’s a difference between that and, let’s think about the following prompt: “You are a world-class email writer designed to leverage the pain-agitate-relief model. You understand the basic structure and how to use the next sentence to engage this person more deeply in the following email. My audience is 30-year-olds in urban areas working in health care related fields, and they care deeply about the cause of immunotherapy. Please design the following appeal for the end of year 2023, highlighting the following programs. Remember, stay in role, stay in character as email GPT, and do your best, please. Please refer to the following annual report for any relevant information for the final email.” Which of those was written by somebody trying their best, not afraid of GPT taking their job? And which was pulling a gray jacket off the shelf?
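[Editor's note: the contrast George draws can be made concrete in code. The structured prompt is mostly an assembly of context you already have; the field contents and function name below are illustrative, built from George's own example.]

```python
# The "gray jacket" contrast: a bare request versus a structured prompt
# assembled from role, audience, task, and constraints.

def structured_prompt(role, audience, task, constraints=(), reference=""):
    """Assemble a structured prompt from its parts, one line per part."""
    parts = ["You are " + role + ".", "Audience: " + audience + ".",
             "Task: " + task]
    parts += ["Constraint: " + c for c in constraints]
    if reference:
        parts.append("Reference material:\n" + reference)
    parts.append("Stay in role and do your best.")
    return "\n".join(parts)


bare = "Please write me an email for sales."
rich = structured_prompt(
    role="a world-class email writer using the pain-agitate-relief model",
    audience="30-year-olds in urban health care fields who care about "
             "immunotherapy",
    task="Draft our end-of-year 2023 appeal highlighting the programs below.",
    constraints=["Open with the reader's pain point",
                 "End with one clear call to action"],
    reference="(relevant annual report excerpt would go here)",
)
print(rich)
```

The extra lines cost seconds to write, but they are the difference between the generic gray jacket and a draft that actually sounds like your organization.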

Spencer Brooks  42:02

That’s an excellent, excellent point. And I would hope that folks listening to this take away, well, a couple of things, but certainly this: don’t be afraid, be willing to try, and realize that this is more than just pressing a button and getting the same response everyone else does; there’s actually a skill level involved. I know, George, you’ve talked about the idea of a job title like a prompt architect, meaning it’s an actual skilled knowledge-worker position to be able to do this well. And so I think it’s a skill that I would want to see everyone who has to do that kind of knowledge work have some baseline level of, and then know that, probably in five or ten years, the people who are really nerdy about it... at least me, I’m not gonna call you a nerd, even though...

George Weiner 42:53

I would. I’m a proud nonprofit nerd, by name and by choice.

Spencer Brooks 42:58

Fair enough. Excellent. George, we’re super out of time here, but how can listeners get in touch with you if they’d like to learn more about your work?

George Weiner 43:07

Honestly, if you’re still listening, kudos; you’re probably not afraid, since you’ve made it this far. You can find us at wholewhale.com, and also, as I mentioned, causewriter.ai, if you’re looking to build or use any of the many free GPT tools we’ve actually built on that platform for you. I appreciate you listening this far. And thanks, Spencer, for your work; I love being a part of nonprofit.ist with you and helping promote the network.

Spencer Brooks 43:35

Yeah, of course, you’re very welcome, George. And for listeners, that wraps up our show for today. If you liked this episode, definitely consider rating and reviewing us on Apple Podcasts or wherever you’re listening. The show is also part of the thought leadership of Brooks Digital; we’re a web design, development, and user experience agency for health nonprofits. So if you like this podcast, feel free to check out our website at Brooks.digital, where you can find more of our insights and learn more about our work. With all that said, George, I super appreciate you coming on the podcast today. Thanks again.

George Weiner 44:04

Thanks for your time.

44:11

Thanks for listening to Health Nonprofit Digital Marketing. If you liked this episode, leave us a review on your favorite podcast platform. And don’t forget to check out the Brooks Digital website at www.Brooks.digital, where you can find other resources like this podcast, learn how we help nonprofits like yours, and get in touch with our team. See you in the next episode.
