AI as a Grantmaking Filter: Promise, Pitfalls, and the Human Judgment Gap
September 1, 2025

Imperial College London is piloting an AI-driven approach to spot climate research with commercialization potential, awarding small, flexible grants without IP strings. Proponents argue AI can scale triage and improve equity and cross-field insight, while critics warn of bias and lock-in to past successes. Public funders remain wary over confidentiality, and experts stress the need for rigorous testing; AI will likely augment, not replace, human judgment.
Key Points
- Imperial’s CSC used a tailored ChatGPT workflow to scan 10,000 UK research abstracts, shortlist 160, and, after human review, award three no-strings £35,000 grants aimed at commercialization steps (a minimal sketch of this kind of triage pipeline follows this list).
- The program seeks to surface hidden, high-potential climate solutions and fund activities traditional grants often miss (e.g., industry engagement, market analysis) without taking equity or IP.
- Advocates say AI can scale triage, improve cross-disciplinary understanding, and potentially mitigate inequities by proactively reaching overlooked researchers.
- Concerns center on algorithmic bias and lock-in: evidence from VC suggests AI can favor look-alike bets, which may stifle novelty.
- Public funders are cautious due to confidentiality and bias risks (the NIH has banned AI in peer review and UKRI forbids reviewers from using generative AI), so more testing is needed before broad adoption.
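
The article describes CSC’s workflow only at a high level, so the sketch below is an illustrative assumption of what a scan-then-shortlist pipeline might look like, not CSC’s actual implementation. It assumes the `openai` Python SDK and an `OPENAI_API_KEY` in the environment; the model choice, rubric wording, and helper names (`score_abstract`, `shortlist`) are all hypothetical.

```python
# Illustrative sketch only: the article does not disclose CSC's prompts,
# model, or scoring scheme. Everything below is a hypothetical stand-in.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RUBRIC = (
    "You screen climate research abstracts for commercialization potential. "
    "Reply with a single integer 0-10: 0 = purely theoretical, "
    "10 = near-market with a clear customer and deployment path."
)

def score_abstract(abstract: str) -> int:
    """Ask the model for a 0-10 commercialization score for one abstract."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; the article names no model
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": abstract},
        ],
        temperature=0,
    )
    try:
        return int(response.choices[0].message.content.strip())
    except ValueError:
        return 0  # unparseable replies drop to the bottom of the ranking

def shortlist(abstracts: list[str], top_k: int = 160) -> list[str]:
    """Rank all abstracts by score and keep the top_k for human review."""
    ranked = sorted(abstracts, key=score_abstract, reverse=True)
    return ranked[:top_k]  # humans, not the model, make the final awards
```

Each abstract is scored independently, so a pass like this parallelizes easily across 10,000 items; the design point the article stresses is that the model only ranks and shortlists, while humans make the awards.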
Sentiment
Mixed to skeptical: supportive of AI as an assistant and for proactive scouting, wary of AI as a gatekeeper due to bias, arms-race dynamics, opacity, and policy/confidentiality risks.
In Agreement
- AI is helpful for the formulaic parts of grant writing (e.g., Gantt charts, compliance sections) and for pre-review, tightening text, and surfacing methodological weaknesses quickly.
- Using AI for proactive scouting (as CSC did) to find promising work and then handing off to human experts is a sensible, scalable triage approach.
- Human oversight should remain central; AI can speed screening and reduce workload but should not make final funding decisions.
- Given that reviewers are overloaded and often cursory, AI-assisted pre-reviews can improve clarity and completeness before human panels see proposals (see the sketch after this list).
- LLMs can help surface under-the-radar innovators by scanning large literatures, potentially improving equity compared to conventional networks.
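
As with the triage sketch, nothing below comes from the article: the prompt, the checklist, and the `pre_review` helper are hypothetical, and the same `openai` SDK assumption applies. The sketch shows only the shape of the idea the commenters endorse, a pre-review pass that returns structured, fixable feedback rather than a verdict.

```python
# Hypothetical pre-review pass of the kind the commenters describe,
# not any funder's actual tooling.
import json

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY in the environment

PRE_REVIEW_PROMPT = (
    "You are pre-reviewing a grant proposal before a human panel sees it. "
    "Return JSON with two keys: 'weaknesses' (a list of methodological "
    "weaknesses) and 'missing' (a list of absent compliance or planning "
    "sections). Do not score or rank the proposal."
)

def pre_review(proposal_text: str) -> dict:
    """Flag weaknesses and gaps for the applicant to fix before submission."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice
        messages=[
            {"role": "system", "content": PRE_REVIEW_PROMPT},
            {"role": "user", "content": proposal_text},
        ],
        response_format={"type": "json_object"},  # request parseable JSON
        temperature=0,
    )
    return json.loads(response.choices[0].message.content)
```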
Opposed
- AI-driven gatekeeping risks entrenching past biases and conventional wisdom, favoring low-risk, copycat proposals rather than high-variance breakthroughs.
- An arms race is likely: AI-written proposals versus AI reviewers, plus adversarial prompt tricks and optimization to ‘game’ the filter.
- Opaque AI filters are hard to audit or debug; validating them would require humans to re-review discarded proposals, which defeats the purpose.
- Confidentiality and policy concerns (e.g., NIH/UKRI bans) limit practical use of AI in formal peer review.
- Optimizing for ‘commercial promise’ is misaligned with scientific merit and could distort research priorities.
- Proposals written solely by LLMs would be shallow and homogeneous; real scientific nuance, credibility, and execution history matter more.
- The core problem is the grant system’s burden and incentives; better fixes are block grants, lotteries, or CV-based funding, not more AI.