AI in arts grantmaking

Some thoughts from a recent GIA session on AI in arts grantmaking. December 17, 2025

Image: Gwendal Tsang, CC BY-SA 4.0, via Wikimedia Commons
Last month, I moderated CTRL + ART + APPLY: Navigating AI Ethics, Access, and Integrity in Grantmaking at the Grantmakers in the Arts conference in Minneapolis, sponsored by the GIA Support for Individual Artist Committee. Praise be to the organizers of the session: Jaime Sharp (Grantmakers in the Arts), Clarissa Crawford (Crawford & Co Creative), Indira Goodwine-Josias (New England Foundation for the Arts), and Anna Tragesser (Indy Arts Council) put together a great session. And special thanks to my co-panelists Zoe Cinel (Rochester Art Center), Terresa Hardaway (Black Garnet Books), Tim Brunelle (University of St Thomas), and Dee Harris (Creative Commons) for diving into this thorny issue with fearlessness and good humor.

The session explored AI’s role in grantmaking by focusing on accessibility, artistic integrity, and funder responsibility. Our approach was roughly aligned with “AI as Normal Technology” by Arvind Narayanan and Sayash Kapoor, which I mentioned in my previous post. We started from the proposition that AI exists, will likely be used in both grantmaking and grant-seeking, and its implications aren’t yet fully understood by arts funders.

What follows is less a recap of the session than a discussion of the main themes and questions that emerged. We structured the conversation around three possible scenarios in which AI might appear during a typical open call grant: AI in artmaking, AI in grantwriting, and AI in grant reviewing. Throughout, I’ll use the term “AI” to refer mostly to Large Language Models like ChatGPT, except where otherwise noted.

AI used in the process of creating work

Should funders require disclosure of AI use in artmaking?

In the discussion, it became clear that some grantmakers might imagine “AI-created artwork” in stark terms: AI-prompted facsimiles replacing handmade work. But the reality, as Zoe mentioned in the panel discussion, is far more complicated. Artists use machine learning to produce parts of works, train models on their own datasets, or generate content via LLMs. The use cases are varied and often technically sophisticated. This makes the question of whether or not AI was used to create work less black and white than some funders might think.

This sophistication significantly complicates any enforcement a funder might attempt if it elects to prohibit AI use. Most funders lack the technical expertise to make an assessment of this kind anyway. More fundamentally, requiring disclosure pries into an artist’s working methodology in ways we don’t for artists working in other mediums. We don’t ask painters to disclose paint brands or media artists to specify whether they used SuperCollider or MaxMSP, so why should AI be any different?

There is a counterargument to be made here, which is that funders could reasonably oppose generative AI on moral/ethical grounds rather than technical ones. LLMs are environmentally destructive and catastrophic for creative labor, and a funder might declare “no generative AI in funded projects” as a response to that.

While I am sympathetic to this argument, the problem again is enforcement. How will a grantmaker definitively know whether AI was used? Without that certainty, the policy is unenforceable. Unenforceable policies privilege applicants willing to use AI and stay quiet, which is the opposite of what a values-based prohibition intends to achieve. The practical reality is that funders can’t police creative process, which means the decision about whether to use AI ultimately rests with artists themselves.

AI used in the grantwriting process

Should applicants be allowed to use AI to write grant applications?

The room responded positively to this scenario, with some caveats I’ll get to. Panelists saw allowing LLMs like Claude or ChatGPT to help write applications as an equity issue. Because many applicants to open calls are writing the grant applications on their own without professional assistance, AI could potentially have a leveling effect.

At Knight, we always sought ways to ensure we were evaluating the work, not the application, and to avoid overprivileging well-written applications for less impactful work. I wish this weren’t the case, but applications written with the terminology and formatting a grant reviewer expects and is accustomed to are more likely to get a fair hearing than those that aren’t. AI could be leveling here, giving all applicants the ability to frame their applications in the ways most advantageous to them.

A concern is that the leveling effect of AI could result in a flattening in which all applications resemble one another. That would increase the burden on artist applicants to find ways to make their applications stand out, even as they use AI to help ensure those applications receive a fair hearing.

However!

Making applications easier to write will inevitably increase the number of submissions a funder receives, potentially dramatically. Writing grants is difficult, and that friction is one of the main brakes on the process. But as soon as applicants can prompt Claude to “write an application my community foundation will accept”, funders that usually receive 30 applications for an open call might find themselves receiving 3,000. An increase on that scale could shut down these programs or trigger other problematic responses.

Another potential issue that came up in the discussion was outright fraud. Not just AI-generated applications, but potentially even AI-generated supporting documentation. This does not yet appear to be a widespread problem, but several program directors in the room mentioned they were already dealing with this. Most foundations do not have the resources to follow up on the legitimacy of, say, every IRS determination letter submitted to them; they simply take the letter’s presence as conferring legitimacy. The same would go for other submitted documents—are these real news clippings from previous gallery openings, or are they AI slop? AI will make telling the difference between real and fraudulent applications much more difficult, and it would only take a few grants awarded to applicants pushing vaporware to undermine entire funding programs.

AI in the grant reviewing process

Should funders use AI to review applications?

This question received the most negative responses from the panelists and attendees. No one was happy about it.

We discussed three likely scenarios in which AI might be used to review grant applications:

  • A funder who had not planned to use AI to review applications receives an unmanageable number of them (this is the “3,000 applications instead of the expected 30” scenario mentioned above) and deploys AI to “automate” part of the process.
  • After the panel reviews are complete, a program director discovers that one of the panelists used AI to evaluate the applications assigned to them, without disclosing this.
  • A funder explicitly uses AI to automate review, justifying it as enabling them to handle more applications while ensuring fairer and more objective outcomes.

All three scenarios fundamentally complicate a funder’s ability to justify funding decisions.

Because LLMs generate text probabilistically rather than from a stable internal model of the review criteria, they cannot reliably produce the same results from the same input, and their stated justifications cannot be “unwound” or audited after the fact. Submitting the same 3,000 applications for AI review 10 times will yield 10 different “accepted” and “declined” lists. Traditional panel review has this problem too, but well-structured processes iron out inconsistencies and can be unwound when necessary.

A related issue is that applicants whose applications are declined need feedback to refine future submissions. A funder using AI to generate this feedback cannot ensure the information would actually help applicants, further breaking the process. And of course, we know that LLMs are far from objective, and are likely to introduce biases that are present in their training data. For funders seeking to make just and equitable grants, this is a potential nightmare.

If a funder plans to use AI as part of the review process, it should at minimum declare what model(s) it is using and potentially what prompts were used to generate output. This does not solve any of these problems on its own, but it establishes some transparency. Program directors who declare AI usage off-limits for panelists should also be prepared to evaluate panelist output more closely than they may have in the past.

Where this leaves us

Our discussion at GIA highlighted a real tension: AI in grantmaking presents genuine opportunities for equity and access, particularly in helping under-resourced applicants navigate the grant-seeking process. But those same affordances create new vulnerabilities such as the potential for fraud, the risk of overwhelming programs with applications, and the introduction of biases into decision-making.

While there was no consensus on how to address this tension, I would say there was broad agreement about what the biggest issues are likely to be. These aren’t questions with obvious answers, and different funders will land in different places based on their values, capacity, and communities.

What is clear is that funders will have to think about these issues, whether they want to or not. We were confident that AI is already being used in all three scenarios we discussed, and as funders, we must engage with this thoughtfully and proactively. Funders who don’t establish clear policies and practices around AI use will find themselves making ad hoc decisions under pressure, likely in response to problems rather than in anticipation of them.
