When AI Writes the RFP

This article originally appeared as part of The Vendor Edge series on LinkedIn. This is an expanded and updated version for kieranengels.com.

AI-generated RFPs and proposals are becoming common. They sound polished. They are structurally complete. And they are often fundamentally misleading. When a vendor uses AI to respond to an RFP, the sponsor receives answers that are technically correct but operationally empty. The polish masks whether the vendor actually understands the program.

The truth is, AI is a tool for efficiency. But vendor selection requires human judgment, specificity, and the ability to detect when a vendor is telling you what you want to hear rather than what is true. Kieran Engels has observed proposals generated by AI that hit every requirement on the RFP yet offered no specific insight into how the vendor would actually execute the work. The proposals looked complete. They were complete. They just were not true.

This is not a criticism of AI. It is a recognition that tools are inputs, not authority. AI can make a proposal polished. Only people can make a proposal honest.

Key Takeaways

  • AI-generated RFP responses are technically correct but operationally empty, mirroring requirements back without specific insight or implementation detail.
  • Specificity is the differentiator between AI-generated polish and operational truth. Specific proposals name people, timelines, costs, and constraints.
  • Vendors that do not acknowledge operational constraints are hiding from the realities of execution and should be viewed with caution.
  • Involving the actual execution team in vendor selection reveals gaps between proposal and operational capability that a procurement team alone cannot assess.
  • Vendor selection requires human judgment to distinguish between surface-level correctness and operational truth. Tools enable efficiency but cannot replace judgment.

The Challenge

AI-generated proposals are becoming standard practice in vendor selection. A vendor receives an RFP. They feed it into an AI system. The system generates a response that addresses every requirement, uses professional language, and is internally consistent. The sponsor reads it and sees a vendor that understands their program.

But here is what is actually happening. The AI system is reading the RFP and generating text that mirrors the structure of the RFP back to you. It is not generating insight. It is generating reflection.

Let’s be clear about what this means operationally. An RFP asks: How will you manage monitoring in a decentralized trial? The AI-generated response says: We will implement a comprehensive monitoring strategy that leverages remote tools to ensure data quality while minimizing site burden. This is technically correct. It is also operationally empty. It does not say what tools, how often, what happens when something goes wrong, or what this will cost.

A human-generated response from someone who has actually managed monitoring in decentralized trials would say something different. It would say: We manage decentralized monitoring through three touchpoints. First, real-time EDC review with alerts for out-of-range values, flagged within 24 hours of entry. Second, weekly video monitoring calls with sites, focused on high-risk data points. Third, monthly on-site visits to a subset of sites, rotating based on enrollment and data quality trends. This approach costs more than central monitoring but catches problems early.
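The first touchpoint above, real-time EDC review with out-of-range values flagged within 24 hours of entry, can be expressed as a simple alert rule. The sketch below is purely illustrative: the field names, expected ranges, and `EdcEntry` structure are invented for this example and do not come from any real EDC system.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical expected ranges per EDC field (invented for illustration).
RANGES = {"systolic_bp": (90, 180), "heart_rate": (40, 120)}
FLAG_WINDOW = timedelta(hours=24)  # flag within 24 hours of entry

@dataclass
class EdcEntry:
    field: str
    value: float
    entered_at: datetime

def flag_out_of_range(entries, now):
    """Return entries whose value falls outside its expected range
    and which are still inside the 24-hour flagging window."""
    alerts = []
    for e in entries:
        lo, hi = RANGES.get(e.field, (float("-inf"), float("inf")))
        if not lo <= e.value <= hi and now - e.entered_at <= FLAG_WINDOW:
            alerts.append(e)
    return alerts
```

The point of the sketch is not the code itself but the contrast: a vendor who has actually run this process can state the rule at this level of precision; an AI-generated proposal cannot.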

The difference is specificity. Specificity requires experience. Specificity requires judgment. Specificity is what you need to evaluate whether the vendor can actually execute.

AI-generated proposals optimize for surface-level correctness. They address every requirement. They use language that mirrors the sponsor’s values. They show up on time. But they hide the operational reality beneath a layer of polish.

Kieran Engels has seen this pattern repeatedly. A vendor submits a proposal that sounds excellent. It covers all the bases. The sponsor moves forward. Execution begins. The vendor’s actual approach is different from what the proposal suggested. The monitoring plan is less frequent. The team is different. The escalation path is not what was promised. The vendor is not lying. They are implementing a different approach than what they proposed. But by the time this becomes clear, the vendor has been selected and implementation has started.

This is a selection problem, not a vendor problem. The selection process failed to distinguish between AI-generated polish and operational truth.

The Infrastructure

How does a sponsor address this? Several approaches work. First, ask for specificity. When a vendor proposes a monitoring strategy, ask them to name the monitoring coordinator, describe their experience, and outline the monitoring visit calendar by program month. Specificity is hard to fake. AI-generated proposals dodge specificity.

Second, ask about constraints. Every operational approach has constraints. A monitoring strategy that promises weekly visits has cost implications. It requires staffing. It requires logistical coordination. A vendor that does not acknowledge constraints is hiding from operational reality.

Third, involve the team that will do the work in the selection process. The person who will actually manage monitoring should meet the vendor’s proposed monitoring team. Not the vendor’s business development person. The actual monitoring team. This is uncomfortable for vendors. It is also revealing. This is where operational truth becomes visible.

Fourth, require references that check operational capability, not just vendor viability. Ask references: Did the vendor deliver what they proposed? Were there surprises? What would you do differently? References often reveal gaps between proposal and reality.
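The four checks above lend themselves to a simple pass/fail rubric. The sketch below is a hypothetical illustration: the criterion names and the passing threshold are invented for this example, not a prescribed methodology.

```python
# Illustrative selection rubric covering specificity, constraint
# acknowledgment, execution-team involvement, and operational references.
CRITERIA = [
    "names_specific_people",
    "provides_month_by_month_timeline",
    "acknowledges_cost_and_staffing_constraints",
    "execution_team_met_sponsor_team",
    "references_confirmed_delivery_matched_proposal",
]

def score_proposal(answers):
    """answers maps each criterion to True/False.
    Returns (score, verdict) with an invented threshold of 4/5."""
    score = sum(1 for c in CRITERIA if answers.get(c, False))
    verdict = "advance" if score >= 4 else "probe further"
    return score, verdict
```

A rubric like this does not replace judgment; it simply forces the selection team to answer concrete yes/no questions that polished, AI-generated proposals tend to dodge.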

Tools are inputs, not authority. AI is a tool. It can make a proposal efficient. It can make a proposal polished. But only human judgment can make a proposal honest.

Seuss+ works with biotech leadership teams navigating vendor selection in an AI-abundant environment. The risk is not that AI will generate false statements. The risk is that AI will generate technically correct but operationally empty statements that obscure whether the vendor actually knows how to execute.

The solution is not to ban AI. The solution is to change the selection process. Make proposals more specific. Require constraint acknowledgment. Involve execution teams. Check references on operational capability.

This is the work of buying with your eyes open. It requires more work than reading polished proposals. But it prevents the expensive surprise of discovering after selection that the vendor’s proposal did not match their operational reality.

AI-Generated vs. Human-Generated RFP Responses

Specificity
  • AI-generated response: "Comprehensive monitoring strategy leveraging remote tools."
  • Human-generated response (experienced vendor): "Real-time EDC review flagged within 24 hours, weekly video calls, and monthly on-site visits to 15% of sites."

Constraint Acknowledgment
  • AI-generated response: "We will ensure data quality while minimizing burden."
  • Human-generated response (experienced vendor): "Weekly monitoring requires 12 FTEs. Cost is 30% higher than quarterly. We recommend it for trials with safety risk."

Implementation Detail
  • AI-generated response: "Implementation will follow best practices and regulatory requirements."
  • Human-generated response (experienced vendor): "Month 1: system setup and site training. Month 2 onward: weekly calls and rotating on-site visits. Monitoring coordinator will be X (resume attached)."

Risk Transparency
  • AI-generated response: "We manage risk through quality oversight."
  • Human-generated response (experienced vendor): "High-risk areas: sites with <95% EDC compliance. Approach: daily alert review, escalation to the site within 24 hours."

Staffing Model
  • AI-generated response: "Our experienced team will manage the program."
  • Human-generated response (experienced vendor): "Named monitoring coordinator (Jane Smith, 12 years CRO experience). QA reviewer (Bob Jones, 8 years). Weekly standup call with sponsor."

Tools are inputs, not authority. AI can make a proposal polished. Only people can make a proposal honest.

Kieran Engels, CEO

Key Industry Data

The FDA conducted 6,375 domestic researcher inspections between 2007 and 2015, with 360 resulting in significant violations and 194 triggering regulatory actions. (Source: FDA)

FDA warning letters increased 59% year over year, from 190 in FY2024 to 303 in FY2025. (Source: FDA)

The most common FDA inspection violations include failing to maintain accurate case histories (10.82%), enrolling ineligible subjects (8.85%), and failing to perform required tests (8.52%). (Source: FDA warning letter analysis)

Responses to FDA Form 483 observations are due within 15 business days. (Source: FDA)

Only 5% of FY2025 warning letters concerned clinical research or IRB oversight; the clinical-setting violations that do reach the warning-letter stage tend to be serious. (Source: FDA)

Frequently Asked Questions

Is it a problem if a vendor uses AI to write their proposal?

No. AI is a productivity tool. The question is not whether vendors use AI. The question is whether the proposal reflects operational truth. A vendor can use AI to draft a proposal and then edit it to add specificity, acknowledge constraints, and reflect how they will actually execute. That is appropriate. A vendor can also use AI to generate a polished proposal that masks operational reality. That is the problem.

How can a sponsor spot an AI-generated proposal?

Look for lack of specificity. AI-generated proposals tend to be structurally complete but operationally empty. They use language that mirrors your RFP back to you. They avoid naming specific people, specific timelines, or specific constraints. Specific proposals are harder to AI-generate because specificity requires knowledge.

Should sponsors ban AI-generated proposals?

No. Banning is not practical and would eliminate efficiency gains. The approach is to change the selection process. Require specificity. Involve execution teams. Check references on operational capability. These requirements separate honest proposals from polish-driven ones, regardless of whether AI was used.

What should a sponsor do when execution diverges from the proposal?

Document the differences clearly. Clarify whether the proposal was aspirational or committed. If it was committed, escalate. If it was aspirational, adjust expectations. Then adjust the governance infrastructure to catch differences early through feedback mechanisms rather than discovering them mid-execution.

Should sponsors use tools that detect AI-generated text?

Do not focus on detecting AI. Focus on detecting specificity. Ask vendors to provide the names of the people who will execute the work. Ask them to map out their approach month by month. Ask them to identify constraints and costs. These requirements are hard to meet without specific knowledge, and they work regardless of how the proposal was drafted.

About the Author

Kieran Engels is CEO and Co-Founder of Seuss+, a strategy and execution partner helping biotech sponsors optimize vendor relationships across clinical development. With more than a decade of experience in vendor governance, risk management, and clinical trial execution, Kieran works with biotech leadership teams to build the oversight systems that protect timelines, budgets, and data integrity. Learn more at seuss.plus.

Kieran Engels

CEO & Co-Founder of Seuss+. Kieran writes about vendor governance, execution accountability, and the structural patterns that shape clinical development outcomes.