<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Vendor Strategy Archives - Kieran Engels</title>
	<atom:link href="https://www.kieranengels.com/category/vendor-strategy/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.kieranengels.com/category/vendor-strategy/</link>
	<description></description>
	<lastBuildDate>Tue, 31 Mar 2026 14:21:17 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Clinical Catfish</title>
		<link>https://www.kieranengels.com/clinical-catfish-polished-proposals-vendor-reality/</link>
					<comments>https://www.kieranengels.com/clinical-catfish-polished-proposals-vendor-reality/#respond</comments>
		
		<dc:creator><![CDATA[Kieran Engels]]></dc:creator>
		<pubDate>Tue, 17 Mar 2026 07:39:32 +0000</pubDate>
				<category><![CDATA[Vendor Strategy]]></category>
		<category><![CDATA[clinical catfish]]></category>
		<category><![CDATA[CRO selection]]></category>
		<category><![CDATA[due diligence]]></category>
		<category><![CDATA[RFP evaluation]]></category>
		<category><![CDATA[vendor proposals]]></category>
		<guid isPermaLink="false">https://www.kieranengels.com/?p=46</guid>

					<description><![CDATA[<p>Clinical catfishing happens when proposals don't match execution. Align incentives, name teams, and build governance to prevent gaps.</p>
<p>The post <a href="https://www.kieranengels.com/clinical-catfish-polished-proposals-vendor-reality/">Clinical Catfish</a> appeared first on <a href="https://www.kieranengels.com">Kieran Engels</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>This article originally appeared as part of <a href="https://www.linkedin.com/newsletters/the-vendor-edge-7315396810720665602/">The Vendor Edge series on LinkedIn</a>. This is an expanded and updated version for kieranengels.com.</p>



<p>Clinical catfishing is what happens when a vendor&#8217;s proposal does not match their operational reality. The proposal promises a team, a timeline, and a methodology. Execution reveals different staff, different timelines, and different capabilities. This is not always malicious. It is often a systemic problem: the team that writes the proposal is not the team that executes the work. The incentive structure rewards winning bids, not honest capability assessment. A vendor can promise excellence in the proposal because the proposal team has no <a href="/age-of-accountability/">accountability</a> for execution. The execution team operates under different constraints. Kieran Engels has observed this pattern across CROs, sites, and specialized vendors. A proposal names a medical director with 15 years of experience. Execution reveals that the medical director will be reviewing work, but the day-to-day team is junior and less experienced. This is not deception. This is misalignment between proposal and execution incentives.</p>



<h2 class="wp-block-heading">KEY TAKEAWAYS</h2>


<div class="ogs-takeaways"><h3 class="ogs-takeaways__title">Key Takeaways</h3><ul class="ogs-takeaways__list"><li>Clinical catfishing occurs when the proposal team and execution team have different incentives, creating gaps between promised and delivered capability.</li><li>Proposal-execution gaps are often not malicious. They reflect the reality that the person promising execution is not the person delivering it.</li><li>Named staffing commitments with contractual protections reduce the likelihood that execution will involve different people than what was proposed.</li><li>Governance infrastructure with early feedback surfaces proposal-execution gaps within weeks rather than after they compound into major program delays.</li><li>Effective vendor management requires treating proposals as aspirations requiring specificity, not commitments requiring only contract enforcement.</li></ul></div>



<h2 class="wp-block-heading">The Challenge</h2>



<p>The problem with clinical vendor selection is misaligned incentives. The person writing the proposal is incentivized to win the bid. The person executing the work is incentivized to deliver profitably. These are not the same person. They often do not even know each other.</p>



<p>A CRO receives an RFP. The business development team responds. The proposal promises a monitoring plan with monthly on-site visits, a dedicated program manager, and a quality review cadence of twice weekly. The sponsor reads this and feels confident about execution oversight.</p>



<p>Then execution begins. The program manager assigned is junior and divided across three programs. Monthly on-site visits become quarterly because travel budgets are tight. Quality reviews are weekly, not twice weekly. The vendor is not breaking their contract. They are operating within its letter but not its spirit. The difference between what was proposed and what is being executed is real.</p>



<p>This is not the vendor lying. This is the vendor operating under different constraints than what the proposal assumed.</p>



<p>Let&#8217;s be clear about what is happening. The proposal team is rewarded for closing deals. The execution team is rewarded for managing costs. These incentives are misaligned. And the gap between them is where execution problems emerge.</p>



<p>Kieran Engels has seen this across different vendor types. A CRO proposes a team with specific people. During execution, one of the named people leaves for another job. They are replaced with someone less experienced. The CRO fulfilled the contract by providing a replacement. But the execution capability decreased.</p>



<p>A site proposes a patient population and enrollment timeline. During execution, the patient population is harder to recruit than expected. Enrollment slows. The site did not breach the contract. They are operating in a harder environment than what the proposal assumed.</p>



<p>A specialized vendor proposes a novel analysis approach. During execution, the approach proves more complex than anticipated. Timelines slip. The vendor is doing the work they promised. But the delivery is delayed.</p>



<p>These are not vendor failures. These are proposal-execution gaps. And they are endemic to how vendor selection works.</p>



<h2 class="wp-block-heading">The Infrastructure</h2>



<p>The RFP-proposal process assumes that what is written in the proposal will be executed. But proposals are written by the people who sell. Execution is done by the people who deliver. These are different incentives operating on different timelines.</p>



<p>How does a sponsor address this? First, involve the execution team in the selection process. Not the business development team. The team that will actually do the work. This creates alignment. The people writing the proposal are the people who will execute it. Their incentives are the same.</p>



<p>Second, require specific staffing commitments. Not generic promises about a &#8220;dedicated team.&#8221; Specific names. Specific roles. Specific hours. And contractual protections if the named person is reassigned without cause.</p>



<p>Third, build feedback mechanisms that surface proposal-execution gaps early. A governance infrastructure with weekly feedback means that if the execution team is delivering differently than what was proposed, this becomes visible within weeks, not months.</p>
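<p>The weekly feedback loop described here can be sketched as a simple check of proposed commitments against actual delivery. This is purely illustrative; every metric name and number below is hypothetical, not drawn from any real contract or system.</p>

```python
# Minimal sketch of a weekly proposal-vs-execution check.
# All metric names and values are hypothetical examples.

PROPOSED = {
    "onsite_visits_this_month": 1,      # "monthly on-site visits"
    "quality_reviews_this_week": 2,     # "twice-weekly quality review"
    "programs_per_program_manager": 1,  # "dedicated program manager"
}

def gap_report(actual):
    """Return one line per commitment where execution lags the proposal."""
    gaps = []
    for metric, promised in PROPOSED.items():
        delivered = actual.get(metric, 0)
        # For the dedicated-PM metric, a higher number is worse, not better.
        if metric == "programs_per_program_manager":
            short = delivered > promised
        else:
            short = delivered < promised
        if short:
            gaps.append(f"{metric}: proposed {promised}, actual {delivered}")
    return gaps

week_12 = {"onsite_visits_this_month": 0,   # visit slipped to next quarter
           "quality_reviews_this_week": 1,
           "programs_per_program_manager": 3}
for line in gap_report(week_12):
    print("GAP:", line)
```

<p>The point is not the code but the cadence: when the comparison runs weekly, a skipped visit is visible in week one, not after it has compounded into a program delay.</p>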



<p>Fourth, build governance infrastructure that clarifies expectations before execution starts. A proposal that promises a monthly on-site visit plan should result in a written monitoring plan that details which months, which sites, which activities. This forces specificity before the contract is signed.</p>
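<p>A written monitoring plan of this kind is easy to represent as structured data. The sketch below is illustrative only (site IDs, months, and activities are invented), but it shows how specificity makes the commitment mechanically checkable.</p>

```python
# Hypothetical monitoring visit calendar: which months, which sites, which activities.
monitoring_plan = {
    "month_1": {"sites": ["101", "104"], "activities": ["SIV follow-up", "source data review"]},
    "month_2": {"sites": ["102", "103"], "activities": ["enrollment audit", "EDC compliance check"]},
    "month_3": {"sites": ["101", "103"], "activities": ["source data review", "PI interview"]},
}

def visits_scheduled(plan, site_id):
    """Count the months in which a given site has an on-site visit."""
    return sum(site_id in month["sites"] for month in plan.values())

# A vague "monthly on-site visits" cannot be verified; this plan can:
print(visits_scheduled(monitoring_plan, "101"))  # site 101 is scheduled twice in Q1
```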



<p>Seuss+ works with biotech leadership navigating vendor relationships that have proposal-execution gaps. The first response is often to escalate to the vendor relationship manager. The vendor&#8217;s response is often technically accurate. But it does not close the gap between what was proposed and what is being executed.</p>



<p>The real solution is systemic. Align the incentives of the proposal team and the execution team. Require specificity in proposals. Involve execution teams in selection. Build feedback mechanisms early.</p>



<p>This is the work of buying with your eyes open. It requires acknowledging that proposals are aspirations, not commitments. And commitments require specificity, accountability, and governance infrastructure to enforce.</p>



<h2 class="wp-block-heading">Proposal vs. Reality</h2>



<figure class="wp-block-table"><table class="has-fixed-layout"><tbody><tr><td>Dimension</td><td>What the Proposal Said</td><td>What Execution Revealed</td><td>Governance Fix</td></tr><tr><td>Named Team</td><td>Medical Director: Dr. Smith (15 years of experience). Program Manager: Jane Doe.</td><td>Dr. Smith reviews monthly. Jane manages 3 programs. Neither is the day-to-day contact.</td><td>Named primary contact with SLA for response time. Escalation path if primary contact is reassigned.</td></tr><tr><td>Timeline</td><td>On-site monitoring visits monthly</td><td>Visits happen quarterly due to travel budget constraints</td><td>Written monitoring visit calendar by program month, signed before execution starts</td></tr><tr><td>Methodology</td><td>Real-time EDC review with 24-hour response to flags</td><td>EDC review is weekly. Flags are reviewed within 5 days.</td><td>Specific escalation pathway. Named person responsible for response time. Weekly feedback to sponsor.</td></tr><tr><td>Risk Acknowledgment</td><td>Risk is managed through quality oversight</td><td>Vendor is managing cost, not risk visibility. High-risk issues surface late.</td><td>Governance infrastructure with daily/weekly escalation for high-risk areas. Clear definition of what triggers escalation.</td></tr><tr><td>Staffing Model</td><td>Dedicated program team</td><td>Team is shared across multiple programs. Availability is inconsistent.</td><td>Primary and secondary named contacts. Defined hours of availability. Escalation if availability falls below commitment.</td></tr></tbody></table></figure>


<figure class="ogs-quote"><blockquote class="ogs-quote__text"><p>Clinical catfishing is not deception. It is the gap between incentive structures: proposal team rewarded for winning, execution team rewarded for managing costs.</p></blockquote><figcaption class="ogs-quote__caption"><cite class="ogs-quote__attribution">Kieran Engels, CEO</cite></figcaption></figure>



<h2 class="wp-block-heading">Key Industry Data</h2>



<p>70% of clinical trials experience delays, with more than half related to site activation. (Source: Tufts CSDD)</p>



<p>Nearly 80% of trials fail to meet their original enrollment timelines. (Source: Tufts CSDD)</p>



<p>Cancer center median trial activation time is 167 days, compared to the NCI target of 90 days. (Source: AACI/NCI)</p>



<p>Trial delays cost sponsors between $600,000 and $8 million per day in lost revenue opportunity. (Source: Tufts CSDD)</p>



<p>Best performing clinical facilities achieve 5% to 10% operating cost improvements through clinical standardization and productivity gains. (Source: McKinsey)</p>



<h2 class="wp-block-heading">Frequently Asked Questions</h2>


<div class="ogs-faq-block"><div class="ogs-faq-item"><button class="ogs-faq-question" aria-expanded="false" aria-controls="ogs-faq-1">How can sponsors protect against proposal-execution gaps?</button><div class="ogs-faq-answer" id="ogs-faq-1"><p>Four approaches work. First, involve the execution team in selection. Second, require named staffing with contractual protections. Third, build feedback mechanisms that surface gaps early. Fourth, build governance infrastructure that clarifies expectations before execution starts. Together, these reduce the gap between proposal and reality.</p>
</div></div><div class="ogs-faq-item"><button class="ogs-faq-question" aria-expanded="false" aria-controls="ogs-faq-2">Is it reasonable to hold vendors accountable for proposal commitments?</button><div class="ogs-faq-answer" id="ogs-faq-2"><p>Yes, but only if the proposal is specific. Generic proposals like &#8216;dedicated team&#8217; or &#8216;comprehensive monitoring&#8217; are aspirational, not commitments. Specific proposals like &#8216;monthly on-site visits on these dates with these activities&#8217; are commitments. Contracts should distinguish between the two.</p>
</div></div><div class="ogs-faq-item"><button class="ogs-faq-question" aria-expanded="false" aria-controls="ogs-faq-3">What should sponsors do when execution does not match the proposal?</button><div class="ogs-faq-answer" id="ogs-faq-3"><p>First, document the difference clearly. Is it a material gap or a minor variation? If material, escalate to vendor leadership with specific examples. Clarify whether the proposal was committed or aspirational. If committed, seek remediation. If aspirational, adjust expectations and add governance infrastructure to catch future gaps early.</p>
</div></div><div class="ogs-faq-item"><button class="ogs-faq-question" aria-expanded="false" aria-controls="ogs-faq-4">Can a vendor have legitimate reasons for proposal-execution gaps?</button><div class="ogs-faq-answer" id="ogs-faq-4"><p>Yes. Markets change. Resources become unavailable. Constraints emerge during execution that were not visible during proposal. The question is not whether gaps are legitimate. The question is whether they are transparent and managed with the sponsor, or hidden until they cause damage.</p>
</div></div><div class="ogs-faq-item"><button class="ogs-faq-question" aria-expanded="false" aria-controls="ogs-faq-5">How can governance infrastructure reduce proposal-execution gaps?</button><div class="ogs-faq-answer" id="ogs-faq-5"><p>Governance infrastructure surfaces gaps early through feedback mechanisms. A weekly call between sponsor and vendor allows the sponsor to see how execution is tracking against proposal. When gaps become visible, adjustments can be made while they are still small. Without governance infrastructure, gaps hide until they are expensive to fix.</p>
</div></div></div><script data-no-optimize="1" data-no-defer="1" data-no-minify="1">(function(){function ogsFaqInit(){document.querySelectorAll(".ogs-faq-question").forEach(function(btn){if(btn.dataset.ogsBound)return;btn.dataset.ogsBound="1";btn.addEventListener("click",function(e){e.preventDefault();var item=this.closest(".ogs-faq-item");var isOpen=item.classList.contains("is-open");item.classList.toggle("is-open");this.setAttribute("aria-expanded",!isOpen);});});}ogsFaqInit();if(document.readyState==="loading"){document.addEventListener("DOMContentLoaded",ogsFaqInit);}document.addEventListener("rocket-allScriptsLoaded",ogsFaqInit);})();</script><script type="application/ld+json">{"@context":"https://schema.org","@type":"FAQPage","mainEntity":[{"@type":"Question","name":"How can sponsors protect against proposal-execution gaps?","acceptedAnswer":{"@type":"Answer","text":"Four approaches work. First, involve the execution team in selection. Second, require named staffing with contractual protections. Third, build feedback mechanisms that surface gaps early. Fourth, build governance infrastructure that clarifies expectations before execution starts. Together, these reduce the gap between proposal and reality."}},{"@type":"Question","name":"Is it reasonable to hold vendors accountable for proposal commitments?","acceptedAnswer":{"@type":"Answer","text":"Yes, but only if the proposal is specific. Generic proposals like &#8216;dedicated team&#8217; or &#8216;comprehensive monitoring&#8217; are aspirational, not commitments. Specific proposals like &#8216;monthly on-site visits on these dates with these activities&#8217; are commitments. Contracts should distinguish between the two."}},{"@type":"Question","name":"What should sponsors do when execution does not match the proposal?","acceptedAnswer":{"@type":"Answer","text":"First, document the difference clearly. Is it a material gap or a minor variation? If material, escalate to vendor leadership with specific examples. Clarify whether the proposal was committed or aspirational. If committed, seek remediation. If aspirational, adjust expectations and add governance infrastructure to catch future gaps early."}},{"@type":"Question","name":"Can a vendor have legitimate reasons for proposal-execution gaps?","acceptedAnswer":{"@type":"Answer","text":"Yes. Markets change. Resources become unavailable. Constraints emerge during execution that were not visible during proposal. The question is not whether gaps are legitimate. The question is whether they are transparent and managed with the sponsor, or hidden until they cause damage."}},{"@type":"Question","name":"How can governance infrastructure reduce proposal-execution gaps?","acceptedAnswer":{"@type":"Answer","text":"Governance infrastructure surfaces gaps early through feedback mechanisms. A weekly call between sponsor and vendor allows the sponsor to see how execution is tracking against proposal. When gaps become visible, adjustments can be made while they are still small. Without governance infrastructure, gaps hide until they are expensive to fix."}}]}</script>



<h2 class="wp-block-heading">About the Author</h2>



<p><a href="https://www.linkedin.com/in/kierancanisius/">Kieran Engels</a> is CEO and Co-Founder of <a href="https://www.seuss.plus/">Seuss+</a>, a strategy and execution partner helping <a href="https://www.seuss.plus/who-we-help/">biotech sponsors</a> optimize vendor relationships across clinical development. With more than a decade of experience in <a href="https://www.seuss.plus/clinical-trial-vendor-optimization-services/">vendor governance</a>, <a href="https://www.seuss.plus/risk-management-setup-for-biotech-clinical-trials/">risk management</a>, and <a href="https://www.seuss.plus/stage-4-optimization/">clinical trial execution</a>, Kieran works with biotech leadership teams to build the oversight systems that protect timelines, budgets, and data integrity. Learn more at <a href="https://www.seuss.plus/">seuss.plus</a>.</p>
<p>The post <a href="https://www.kieranengels.com/clinical-catfish-polished-proposals-vendor-reality/">Clinical Catfish</a> appeared first on <a href="https://www.kieranengels.com">Kieran Engels</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.kieranengels.com/clinical-catfish-polished-proposals-vendor-reality/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>When AI Writes the RFP</title>
		<link>https://www.kieranengels.com/ai-rfp-rethinking-vendor-selection-clinical-development/</link>
					<comments>https://www.kieranengels.com/ai-rfp-rethinking-vendor-selection-clinical-development/#respond</comments>
		
		<dc:creator><![CDATA[Kieran Engels]]></dc:creator>
		<pubDate>Tue, 03 Mar 2026 07:38:01 +0000</pubDate>
				<category><![CDATA[Vendor Strategy]]></category>
		<category><![CDATA[AI in pharma]]></category>
		<category><![CDATA[clinical trials]]></category>
		<category><![CDATA[RFP]]></category>
		<category><![CDATA[technology]]></category>
		<category><![CDATA[vendor selection]]></category>
		<guid isPermaLink="false">https://www.kieranengels.com/?p=44</guid>

					<description><![CDATA[<p>AI-generated proposals are polished but operationally empty. Vendor selection requires specificity and human judgment, not just polish.</p>
<p>The post <a href="https://www.kieranengels.com/ai-rfp-rethinking-vendor-selection-clinical-development/">When AI Writes the RFP</a> appeared first on <a href="https://www.kieranengels.com">Kieran Engels</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>This article originally appeared as part of <a href="https://www.linkedin.com/newsletters/the-vendor-edge-7315396810720665602/">The Vendor Edge series on LinkedIn</a>. This is an expanded and updated version for kieranengels.com.</p>



<p>AI-generated RFPs and proposals are becoming common. They sound polished. They are structurally complete. And they are often fundamentally misleading. When a vendor uses AI to respond to an RFP, the sponsor receives answers that are technically correct but operationally empty. The polish masks whether the vendor actually understands the program. The truth is, AI is a tool for efficiency. But vendor selection requires human judgment, specificity, and the ability to detect when a vendor is telling you what you want to hear rather than what is true. Kieran Engels has observed proposals generated by AI that hit every requirement on the RFP yet offered no specific insight into how the vendor would actually execute the work. The proposals looked complete. They were complete. They just were not true. This is not a criticism of AI. It is a recognition that tools are inputs, not authority. AI can make a proposal polished. Only people can make a proposal honest.</p>



<h2 class="wp-block-heading">KEY TAKEAWAYS</h2>


<div class="ogs-takeaways"><h3 class="ogs-takeaways__title">Key Takeaways</h3><ul class="ogs-takeaways__list"><li>AI-generated RFP responses are technically correct but operationally empty, mirroring requirements back without specific insight or implementation detail.</li><li>Specificity is the differentiator between AI-generated polish and operational truth. Specific proposals name people, timelines, costs, and constraints.</li><li>Vendors that do not acknowledge operational constraints are hiding from the realities of execution and should be viewed with caution.</li><li>Involving the actual execution team in vendor selection reveals gaps between proposal and operational capability that a procurement team alone cannot assess.</li><li>Vendor selection requires human judgment to distinguish between surface-level correctness and operational truth. Tools enable efficiency but cannot replace judgment.</li></ul></div>



<h2 class="wp-block-heading">The Challenge</h2>



<p>AI-generated proposals are becoming standard practice in vendor selection. A vendor receives an RFP. They feed it into an AI system. The system generates a response that addresses every requirement, uses professional language, and is internally consistent. The sponsor reads it and sees a vendor that understands their program.</p>



<p>But here is what is actually happening. The AI system is reading the RFP and generating text that mirrors the structure of the RFP back to you. It is not generating insight. It is generating reflection.</p>



<p>Let&#8217;s be clear about what this means operationally. An RFP asks: How will you manage monitoring in a decentralized trial? The AI-generated response says: We will implement a comprehensive monitoring strategy that leverages remote tools to ensure data quality while minimizing site burden. This is technically correct. It is also operationally empty. It does not say what tools, how often, what happens when something goes wrong, or what this will cost.</p>



<p>A human-generated response from someone who has actually managed monitoring in decentralized trials would say something different. It would say: We manage decentralized monitoring through three touchpoints. First, real-time EDC review with alerts for out-of-range values, flagged within 24 hours of entry. Second, weekly video monitoring calls with sites, focused on high-risk data points. Third, monthly on-site visits to a subset of sites, rotating based on enrollment and data quality trends. This approach costs more than central monitoring but catches problems early.</p>
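<p>The 24-hour flag rule in that response is concrete enough to verify. As a rough sketch (field names, normal ranges, and timestamps below are all hypothetical), an overdue-flag check might look like:</p>

```python
from datetime import datetime, timedelta

# Hypothetical normal ranges for two EDC fields.
RANGES = {"systolic_bp": (90, 180), "alt_u_per_l": (0, 120)}

def overdue_flags(entries, now, sla=timedelta(hours=24)):
    """Return out-of-range entries that are still unflagged past the 24-hour SLA."""
    overdue = []
    for entry in entries:
        lo, hi = RANGES[entry["field"]]
        out_of_range = not (lo <= entry["value"] <= hi)
        if out_of_range and not entry["flagged"] and now - entry["entered_at"] > sla:
            overdue.append(entry)
    return overdue

now = datetime(2026, 3, 17, 9, 0)
entries = [
    # Entered 30 hours ago, out of range, never flagged: an SLA breach.
    {"field": "systolic_bp", "value": 195,
     "entered_at": now - timedelta(hours=30), "flagged": False},
    # Entered 30 hours ago but within range: nothing to flag.
    {"field": "alt_u_per_l", "value": 80,
     "entered_at": now - timedelta(hours=30), "flagged": False},
]
print(len(overdue_flags(entries, now)))  # 1
```

<p>Whether or not a vendor runs anything like this, the selection question is the same: can the proposed commitment be expressed as a checkable rule, or only as adjectives?</p>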



<p>The difference is specificity. Specificity requires experience. Specificity requires judgment. Specificity is what you need to evaluate whether the vendor can actually execute.</p>



<p>AI-generated proposals optimize for surface-level correctness. They address every requirement. They use language that mirrors the sponsor&#8217;s values. They show up on time. But they hide the operational reality beneath a layer of polish.</p>



<p>Kieran Engels has seen this pattern repeatedly. A vendor submits a proposal that sounds excellent. It covers all the bases. The sponsor moves forward. Execution begins. The vendor&#8217;s actual approach is different from what the proposal suggested. The monitoring plan is less frequent. The team is different. The escalation path is not what was promised. The vendor is not lying. They are implementing a different approach than what they proposed. But by the time this becomes clear, the vendor has been selected and implementation has started.</p>



<p>This is a selection problem, not a vendor problem. The selection process failed to distinguish between AI-generated polish and operational truth.</p>



<h2 class="wp-block-heading">The Infrastructure</h2>



<p>How does a sponsor address this? Several approaches work. First, ask for specificity. When a vendor proposes a monitoring strategy, ask them to name the monitoring coordinator, describe their experience, and outline the monitoring visit calendar by program month. Specificity is hard to fake. AI-generated proposals dodge specificity.</p>



<p>Second, ask about constraints. Every operational approach has constraints. A monitoring strategy that promises weekly visits has cost implications. It requires staffing. It requires logistical coordination. A vendor that does not acknowledge constraints is hiding from operational reality.</p>



<p>Third, involve the team that will do the work in the selection process. The person who will actually manage monitoring should meet the vendor&#8217;s proposed monitoring team. Not the vendor&#8217;s business development person. The actual monitoring team. This is uncomfortable for vendors. It is also revealing. This is where operational truth becomes visible.</p>



<p>Fourth, require references that check operational capability, not just vendor viability. Ask references: Did the vendor deliver what they proposed? Were there surprises? What would you do differently? References often reveal gaps between proposal and reality.</p>



<p>Tools are inputs, not authority. AI is a tool. It can make a proposal efficient. It can make a proposal polished. But only human judgment can make a proposal honest.</p>



<p>Seuss+ works with biotech leadership teams navigating vendor selection in an AI-abundant environment. The risk is not that AI will generate false statements. The risk is that AI will generate technically correct but operationally empty statements that obscure whether the vendor actually knows how to execute.</p>



<p>The solution is not to ban AI. The solution is to change the selection process. Make proposals more specific. Require constraint acknowledgment. Involve execution teams. Check references on operational capability.</p>



<p>This is the work of buying with your eyes open. It requires more work than reading polished proposals. But it prevents the expensive surprise of discovering after selection that the vendor&#8217;s proposal did not match their operational reality.</p>



<h2 class="wp-block-heading">AI-Generated vs. Human-Generated RFP Responses</h2>



<figure class="wp-block-table"><table class="has-fixed-layout"><tbody><tr><td>Dimension</td><td>AI-Generated Response</td><td>Human-Generated Response (Experienced Vendor)</td></tr><tr><td>Specificity</td><td>Comprehensive monitoring strategy leveraging remote tools</td><td>Real-time EDC review flagged within 24hrs, weekly video calls, monthly on-site visits to 15% of sites</td></tr><tr><td>Constraint Acknowledgment</td><td>We will ensure data quality while minimizing burden</td><td>Weekly monitoring requires 12 FTE. Cost is 30% higher than quarterly. We recommend it for trials with safety risk.</td></tr><tr><td>Implementation Detail</td><td>Implementation will follow best practices and regulatory requirements</td><td>Month 1: System setup, site training. Month 2 onwards: Weekly calls, rotating on-site visits. Monitoring coordinator will be X (resume attached).</td></tr><tr><td>Risk Transparency</td><td>We manage risk through quality oversight</td><td>High-risk areas: sites with &lt;95% EDC compliance. Approach: daily alert review, escalation to site within 24hrs.</td></tr><tr><td>Staffing Model</td><td>Our experienced team will manage the program</td><td>Named monitoring coordinator (Jane Smith, 12 years CRO experience). QA reviewer (Bob Jones, 8 years). Weekly standup call with sponsor.</td></tr></tbody></table></figure>


<figure class="ogs-quote"><blockquote class="ogs-quote__text"><p>Tools are inputs, not authority. AI can make a proposal polished. Only people can make a proposal honest.</p></blockquote><figcaption class="ogs-quote__caption"><cite class="ogs-quote__attribution">Kieran Engels, CEO</cite></figcaption></figure>



<h2 class="wp-block-heading">Key Industry Data</h2>



<p>The FDA conducted 6,375 domestic researcher inspections between 2007 and 2015, with 360 resulting in significant violations and 194 triggering regulatory actions. (Source: FDA)</p>



<p>FDA warning letters increased 59% year over year, from 190 in FY2024 to 303 in FY2025. (Source: FDA)</p>



<p>The most common FDA inspection violations include failing to maintain accurate case histories (10.82%), enrolling ineligible subjects (8.85%), and failing to perform required tests (8.52%). (Source: FDA warning letter analysis)</p>



<p>Responses to FDA Form 483 observations are due within 15 business days. (Source: FDA)</p>



<p>Only 5% of FY2025 warning letters concerned clinical research or IRB oversight; violations in clinical settings are a small share of enforcement activity, but they draw formal regulatory action when they do occur. (Source: FDA)</p>



<h2 class="wp-block-heading">Frequently Asked Questions</h2>


<div class="ogs-faq-block"><div class="ogs-faq-item"><button class="ogs-faq-question" aria-expanded="false" aria-controls="ogs-faq-6">Is it unethical for vendors to use AI to write RFP responses?</button><div class="ogs-faq-answer" id="ogs-faq-6"><p>No. AI is a productivity tool. The question is not whether vendors use AI. The question is whether the proposal reflects operational truth. A vendor can use AI to draft a proposal and then edit it to add specificity, acknowledge constraints, and reflect how they will actually execute. That is appropriate. A vendor can also use AI to generate a polished proposal that masks operational reality. That is the problem.</p>
</div></div><div class="ogs-faq-item"><button class="ogs-faq-question" aria-expanded="false" aria-controls="ogs-faq-7">How can a sponsor detect AI-generated proposals?</button><div class="ogs-faq-answer" id="ogs-faq-7"><p>Look for lack of specificity. AI-generated proposals tend to be structurally complete but operationally empty. They use language that mirrors your RFP back to you. They avoid naming specific people, specific timelines, or specific constraints. Specific proposals are harder to AI-generate because specificity requires knowledge.</p>
</div></div><div class="ogs-faq-item"><button class="ogs-faq-question" aria-expanded="false" aria-controls="ogs-faq-8">Should sponsors ban AI-generated proposals?</button><div class="ogs-faq-answer" id="ogs-faq-8"><p>No. Banning is not practical and would eliminate efficiency gains. The approach is to change the selection process. Require specificity. Involve execution teams. Check references on operational capability. These requirements separate honest proposals from polish-driven ones, regardless of whether AI was used.</p>
</div></div><div class="ogs-faq-item"><button class="ogs-faq-question" aria-expanded="false" aria-controls="ogs-faq-9">What should a sponsor do if a vendor&#039;s execution differs from their proposal?</button><div class="ogs-faq-answer" id="ogs-faq-9"><p>Document the differences clearly. Clarify whether the proposal was aspirational or committed. If it was committed, escalate. If it was aspirational, adjust expectations. Then adjust the governance infrastructure to catch differences early through feedback mechanisms rather than discovering them mid-execution.</p>
</div></div><div class="ogs-faq-item"><button class="ogs-faq-question" aria-expanded="false" aria-controls="ogs-faq-10">How should sponsors evaluate vendor proposals for AI content?</button><div class="ogs-faq-answer" id="ogs-faq-10"><p>Do not focus on detecting AI. Focus on detecting specificity. Ask vendors to provide the names of the people who will execute the work. Ask them to map out their approach month by month. Ask them to identify constraints and costs. These requirements are hard to meet without specific knowledge, and they work regardless of how the proposal was drafted.</p>
</div></div></div><script data-no-optimize="1" data-no-defer="1" data-no-minify="1">(function(){function ogsFaqInit(){document.querySelectorAll(".ogs-faq-question").forEach(function(btn){if(btn.dataset.ogsBound)return;btn.dataset.ogsBound="1";btn.addEventListener("click",function(e){e.preventDefault();var item=this.closest(".ogs-faq-item");var isOpen=item.classList.contains("is-open");item.classList.toggle("is-open");this.setAttribute("aria-expanded",!isOpen);});});}ogsFaqInit();if(document.readyState==="loading"){document.addEventListener("DOMContentLoaded",ogsFaqInit);}document.addEventListener("rocket-allScriptsLoaded",ogsFaqInit);})();</script><script type="application/ld+json">{"@context":"https://schema.org","@type":"FAQPage","mainEntity":[{"@type":"Question","name":"Is it unethical for vendors to use AI to write RFP responses?","acceptedAnswer":{"@type":"Answer","text":"No. AI is a productivity tool. The question is not whether vendors use AI. The question is whether the proposal reflects operational truth. A vendor can use AI to draft a proposal and then edit it to add specificity, acknowledge constraints, and reflect how they will actually execute. That is appropriate. A vendor can also use AI to generate a polished proposal that masks operational reality. That is the problem."}},{"@type":"Question","name":"How can a sponsor detect AI-generated proposals?","acceptedAnswer":{"@type":"Answer","text":"Look for lack of specificity. AI-generated proposals tend to be structurally complete but operationally empty. They use language that mirrors your RFP back to you. They avoid naming specific people, specific timelines, or specific constraints. Specific proposals are harder to AI-generate because specificity requires knowledge."}},{"@type":"Question","name":"Should sponsors ban AI-generated proposals?","acceptedAnswer":{"@type":"Answer","text":"No. Banning is not practical and would eliminate efficiency gains. The approach is to change the selection process. Require specificity. Involve execution teams. Check references on operational capability. These requirements separate honest proposals from polish-driven ones, regardless of whether AI was used."}},{"@type":"Question","name":"What should a sponsor do if a vendor's execution differs from their proposal?","acceptedAnswer":{"@type":"Answer","text":"Document the differences clearly. Clarify whether the proposal was aspirational or committed. If it was committed, escalate. If it was aspirational, adjust expectations. Then adjust the governance infrastructure to catch differences early through feedback mechanisms rather than discovering them mid-execution."}},{"@type":"Question","name":"How should sponsors evaluate vendor proposals for AI content?","acceptedAnswer":{"@type":"Answer","text":"Do not focus on detecting AI. Focus on detecting specificity. Ask vendors to provide the names of the people who will execute the work. Ask them to map out their approach month by month. Ask them to identify constraints and costs. These requirements are hard to meet without specific knowledge, and they work regardless of how the proposal was drafted."}}]}</script>



<h2 class="wp-block-heading">About the Author</h2>



<p><a href="https://www.linkedin.com/in/kierancanisius/">Kieran Engels</a> is CEO and Co-Founder of <a href="https://www.seuss.plus/">Seuss+</a>, a strategy and execution partner helping <a href="https://www.seuss.plus/who-we-help/">biotech sponsors</a> optimize vendor relationships across clinical development. With more than a decade of experience in <a href="https://www.seuss.plus/clinical-trial-vendor-optimization-services/">vendor governance</a>, <a href="https://www.seuss.plus/risk-management-setup-for-biotech-clinical-trials/">risk management</a>, and <a href="https://www.seuss.plus/stage-4-optimization/">clinical trial execution</a>, Kieran works with biotech leadership teams to build the oversight systems that protect timelines, budgets, and data integrity. Learn more at <a href="https://www.seuss.plus/">seuss.plus</a>.</p>
<p>The post <a href="https://www.kieranengels.com/ai-rfp-rethinking-vendor-selection-clinical-development/">When AI Writes the RFP</a> appeared first on <a href="https://www.kieranengels.com">Kieran Engels</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.kieranengels.com/ai-rfp-rethinking-vendor-selection-clinical-development/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Why Best-Fit Vendor Is Dead, And What to Look for Instead</title>
		<link>https://www.kieranengels.com/vendor-selection-operational-proof-governance/</link>
					<comments>https://www.kieranengels.com/vendor-selection-operational-proof-governance/#respond</comments>
		
		<dc:creator><![CDATA[Kieran Engels]]></dc:creator>
		<pubDate>Tue, 25 Nov 2025 07:27:00 +0000</pubDate>
				<category><![CDATA[Vendor Strategy]]></category>
		<category><![CDATA[clinical trials]]></category>
		<category><![CDATA[CRO]]></category>
		<category><![CDATA[operational proof]]></category>
		<category><![CDATA[RFP]]></category>
		<category><![CDATA[vendor selection]]></category>
		<guid isPermaLink="false">https://www.kieranengels.com/?p=29</guid>

					<description><![CDATA[<p>The concept of &#8216;best-fit vendor&#8217; is broken. It relies on surface signals: how polished their pitch is, how available they seem, whether you&#8217;ve heard of them. The truth is, fit is not a feeling. Fit is evidence. Sponsors should evaluate vendors on operational proof: how they handle constraints, what their staffing model actually looks like, [&#8230;]</p>
<p>The post <a href="https://www.kieranengels.com/vendor-selection-operational-proof-governance/">Why Best-Fit Vendor Is Dead, And What to Look for Instead</a> appeared first on <a href="https://www.kieranengels.com">Kieran Engels</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>The concept of &#8216;best-fit vendor&#8217; is broken. It relies on surface signals: how polished their pitch is, how available they seem, whether you&#8217;ve heard of them. The truth is, fit is not a feeling. Fit is evidence. Sponsors should evaluate vendors on operational proof: how they handle constraints, what their staffing model actually looks like, whether their proposal is internally consistent, how they respond to contradictions. The vendors that succeed are not the ones that feel like the best fit. They&#8217;re the ones that demonstrate the discipline to deliver exactly what you&#8217;ve defined, on time, with the governance you need.</p>



<h2 class="wp-block-heading">Key Takeaways</h2>


<div class="ogs-takeaways"><h3 class="ogs-takeaways__title">Key Takeaways</h3><ul class="ogs-takeaways__list"><li>Best-fit vendor selection relies on surface signals (reputation, polish, availability) that don&#8217;t predict execution.</li><li>Operational proof (how a vendor handles constraints, staff continuity, and contradictions) is a more reliable indicator of success.</li><li>Governance-led vendor evaluation shifts focus from cultural fit to execution alignment and decision-making clarity.</li><li>Vendors that can articulate their constraints and staffing model are more trustworthy than vendors that promise everything.</li><li>The best vendor for your project is the one whose operational model aligns with how you govern and make decisions.</li></ul></div>



<p>Here&#8217;s what most vendors know: Clinical sponsors are looking for vendors that feel right. Polished. Available. Responsive. So vendors optimize for that. They build great decks. They make themselves available for endless discovery calls. They promise flexibility and adaptability.</p>



<h2 class="wp-block-heading">Understanding the Fundamentals</h2>



<p>And then execution starts. The resource that was going to lead your project gets pulled onto something else. The timeline they approved starts shifting because no one was clear on what &#8216;done&#8217; looked like. The governance structure you agreed to in kickoff never quite takes shape because no one defined what the sponsor&#8217;s decision rights actually are.</p>



<p>The problem wasn&#8217;t the vendor. The problem was that you were evaluating fit based on signals that don&#8217;t matter.</p>



<h2 class="wp-block-heading">The Real Cost of Misalignment</h2>



<p>Operational proof is different. It&#8217;s asking: How does this vendor actually work? What are their real constraints? How do they staff projects? What happens when we ask them something that contradicts something else in their proposal?</p>



<p>Let me be clear. The vendors that fail in clinical development aren&#8217;t failing because they lack capability. They&#8217;re failing because there&#8217;s misalignment between what the sponsor needs and what the vendor can realistically deliver. That misalignment is visible in the proposal and in how they respond to constraints. Most sponsors just don&#8217;t know how to read it.</p>



<h2 class="wp-block-heading">Building Governance Infrastructure</h2>



<p>At Seuss+, our work is helping sponsors see the governance signals in vendor proposals. A vendor that can clearly articulate their staffing model is more trustworthy than a vendor that promises unlimited flexibility. A vendor that tells you their constraints is more reliable than a vendor that says they can do anything. A vendor whose proposal is internally consistent is more aligned with your needs than a vendor whose proposal shifts depending on the conversation.</p>



<p>This is governance-led vendor evaluation. You&#8217;re not asking, &#8220;Do we like working with them?&#8221; You&#8217;re asking, &#8220;Is their operational model aligned with how we govern? Can we clearly define what they&#8217;re supposed to deliver? Can we hold them accountable for it?&#8221;</p>



<h2 class="wp-block-heading">The Speed Advantage</h2>



<p>The vendors that succeed in clinical development are the ones where there are no surprises in the governance structure. You ask them about their decision cadence, and it matches what you need. You ask them how they handle scope change, and their answer aligns with your change management process. You ask them about continuity, and their staffing model supports it.</p>



<p>I&#8217;ve worked with dozens of sponsors who hired vendors based on a best-fit feeling and then spent months in rework and negotiation. Every single time, the signs were in the proposal. The vendor was promising something that contradicted something else. Or they were being vague about staffing. Or their timeline assumed a decision-making pace that didn&#8217;t match how the sponsor actually operates.</p>



<p>The other vendors, the ones that deliver on time and on scope, are usually not the most impressive in the pitch room. They&#8217;re the ones who asked the most questions about your governance. They&#8217;re the ones who said no to things that didn&#8217;t fit their model. They&#8217;re the ones who were explicit about their constraints.</p>



<p>This is also why vendor failure isn&#8217;t really vendor failure. It&#8217;s governance failure. If you hired a vendor based on a feeling rather than operational alignment, and they fail to deliver, that&#8217;s not because they&#8217;re incapable. It&#8217;s because you didn&#8217;t diagnose what they could actually deliver before you signed.</p>



<p>Governance-led vendor evaluation prevents that. It forces both sponsor and vendor to get clarity on expectations before execution. It creates accountability because the expectations are explicit. And it prevents the months of misalignment and rework that come from discovering too late that the vendor&#8217;s operational model doesn&#8217;t match your governance needs.</p>



<h2 class="wp-block-heading">Traditional Vendor Selection Criteria vs. Governance-Led Criteria</h2>



<figure class="wp-block-table"><table class="has-fixed-layout"><thead><tr><th>Selection Dimension</th><th>Traditional Best-Fit Approach</th><th>Governance-Led Approach</th></tr></thead><tbody><tr><td>Decision-making</td><td>They seem responsive and flexible</td><td>Can they articulate your approval cadence? Do they have the same decision rights you do?</td></tr><tr><td>Staffing and continuity</td><td>They said they have good people</td><td>What&#8217;s their actual staffing model? How do they backfill? What&#8217;s the continuity guarantee?</td></tr><tr><td>Timeline management</td><td>Their proposal timeline feels reasonable</td><td>Does their timeline assume a decision-making pace that matches your governance? Are assumptions explicit?</td></tr><tr><td>Change management</td><td>They said they&#8217;re adaptable to change</td><td>How do they handle scope change? Does their change process align with yours? Who approves changes?</td></tr><tr><td>Accountability</td><td>Trust that they&#8217;ll deliver if we have a good relationship</td><td>Clear KPIs, reporting cadence, escalation path, and performance expectations defined upfront.</td></tr><tr><td>Internal consistency</td><td>Not really evaluated</td><td>Does their proposal contradict itself? Do different team members say different things? That&#8217;s your signal.</td></tr><tr><td>Constraint articulation</td><td>We assume they can do anything</td><td>What can&#8217;t they do? When would they say no? Vendors that articulate constraints are more trustworthy.</td></tr></tbody></table></figure>


<figure class="ogs-quote"><blockquote class="ogs-quote__text"><p>Vendors that succeed aren&#039;t the ones that feel like the best fit. They&#039;re the ones that can prove their operational model aligns with how you govern, make decisions, and hold people accountable.</p></blockquote><figcaption class="ogs-quote__caption"><cite class="ogs-quote__attribution">Kieran Engels, CEO</cite></figcaption></figure>



<h2 class="wp-block-heading">Key Industry Data</h2>



<p>Trial delays cost sponsors between $600,000 and $8 million per day in lost revenue opportunity. (Source: Tufts CSDD)</p>



<p>63% of all new trial starts now come from emerging biotech companies, up from 56% in 2019. (Source: IQVIA)</p>



<p>The top five CRO companies hold more than 35% of total market share. (Source: IQVIA)</p>



<p>Between 50% and 60% of all clinical trial activities are now outsourced to CROs globally. (Source: IQVIA)</p>



<p>The global CRO market reached $79.5 billion in 2023 and is projected to exceed $125 billion by 2030. (Source: Grand View Research)</p>



<h2 class="wp-block-heading">Frequently Asked Questions</h2>


<div class="ogs-faq-block"><div class="ogs-faq-item"><button class="ogs-faq-question" aria-expanded="false" aria-controls="ogs-faq-11">How do we evaluate operational proof if vendors are naturally going to present themselves well?</button><div class="ogs-faq-answer" id="ogs-faq-11"><p>Ask them the same question twice, from different angles. Ask them about their staffing model, then ask them about continuity. See if the answers align. Ask them about their decision-making process, then ask them how they handle change. Inconsistency is a signal. The vendors that succeed are the ones whose answers are the same no matter how you frame the question.</p>
</div></div><div class="ogs-faq-item"><button class="ogs-faq-question" aria-expanded="false" aria-controls="ogs-faq-12">What should we look for in the proposal that signals governance misalignment?</button><div class="ogs-faq-answer" id="ogs-faq-12"><p>Look for vagueness about staffing, timeline, or decision-making. Look for contradictions between different sections. Look for assumptions about your process that you haven&#8217;t confirmed. Look for promises of unlimited flexibility. These aren&#8217;t vendor problems. They&#8217;re signals that you haven&#8217;t gotten clarity on governance yet.</p>
</div></div><div class="ogs-faq-item"><button class="ogs-faq-question" aria-expanded="false" aria-controls="ogs-faq-13">Does governance-led vendor selection take longer?</button><div class="ogs-faq-answer" id="ogs-faq-13"><p>No. It actually shortens the entire timeline. Yes, the evaluation requires more rigor upfront. But you avoid months of misalignment and rework downstream. The sponsors who move fastest are the ones who got clear on governance before they signed the contract.</p>
</div></div><div class="ogs-faq-item"><button class="ogs-faq-question" aria-expanded="false" aria-controls="ogs-faq-14">How do we handle vendor pushback if we ask for too much operational clarity?</button><div class="ogs-faq-answer" id="ogs-faq-14"><p>If a vendor pushes back on your governance questions, that&#8217;s your signal. The vendors that succeed are the ones that welcome clarity and constraints. They know that explicit expectations are easier to meet than vague ones. Pushback usually means the vendor is uncomfortable with the level of accountability you&#8217;re asking for.</p>
</div></div><div class="ogs-faq-item"><button class="ogs-faq-question" aria-expanded="false" aria-controls="ogs-faq-15">Can we use governance-led vendor evaluation for new vendors or only established ones?</button><div class="ogs-faq-answer" id="ogs-faq-15"><p>This approach works better for new vendors. Established vendors often rely on reputation and historical relationships. New vendors have to earn trust through operational clarity and performance. Governance-led evaluation levels the playing field and forces all vendors to prove alignment, not just reputation.</p>
</div></div></div><script data-no-optimize="1" data-no-defer="1" data-no-minify="1">(function(){function ogsFaqInit(){document.querySelectorAll(".ogs-faq-question").forEach(function(btn){if(btn.dataset.ogsBound)return;btn.dataset.ogsBound="1";btn.addEventListener("click",function(e){e.preventDefault();var item=this.closest(".ogs-faq-item");var isOpen=item.classList.contains("is-open");item.classList.toggle("is-open");this.setAttribute("aria-expanded",!isOpen);});});}ogsFaqInit();if(document.readyState==="loading"){document.addEventListener("DOMContentLoaded",ogsFaqInit);}document.addEventListener("rocket-allScriptsLoaded",ogsFaqInit);})();</script><script type="application/ld+json">{"@context":"https://schema.org","@type":"FAQPage","mainEntity":[{"@type":"Question","name":"How do we evaluate operational proof if vendors are naturally going to present themselves well?","acceptedAnswer":{"@type":"Answer","text":"Ask them the same question twice, from different angles. Ask them about their staffing model, then ask them about continuity. See if the answers align. Ask them about their decision-making process, then ask them how they handle change. Inconsistency is a signal. The vendors that succeed are the ones whose answers are the same no matter how you frame the question."}},{"@type":"Question","name":"What should we look for in the proposal that signals governance misalignment?","acceptedAnswer":{"@type":"Answer","text":"Look for vagueness about staffing, timeline, or decision-making. Look for contradictions between different sections. Look for assumptions about your process that you haven&#8217;t confirmed. Look for promises of unlimited flexibility. These aren&#8217;t vendor problems. They&#8217;re signals that you haven&#8217;t gotten clarity on governance yet."}},{"@type":"Question","name":"Does governance-led vendor selection take longer?","acceptedAnswer":{"@type":"Answer","text":"No. It actually shortens the entire timeline. Yes, the evaluation requires more rigor upfront. But you avoid months of misalignment and rework downstream. The sponsors who move fastest are the ones who got clear on governance before they signed the contract."}},{"@type":"Question","name":"How do we handle vendor pushback if we ask for too much operational clarity?","acceptedAnswer":{"@type":"Answer","text":"If a vendor pushes back on your governance questions, that&#8217;s your signal. The vendors that succeed are the ones that welcome clarity and constraints. They know that explicit expectations are easier to meet than vague ones. Pushback usually means the vendor is uncomfortable with the level of accountability you&#8217;re asking for."}},{"@type":"Question","name":"Can we use governance-led vendor evaluation for new vendors or only established ones?","acceptedAnswer":{"@type":"Answer","text":"This approach works better for new vendors. Established vendors often rely on reputation and historical relationships. New vendors have to earn trust through operational clarity and performance. Governance-led evaluation levels the playing field and forces all vendors to prove alignment, not just reputation."}}]}</script>



<h2 class="wp-block-heading">About the Author</h2>



<p><a href="https://www.linkedin.com/in/kierancanisius/">Kieran Engels</a> is CEO and Co-Founder of <a href="https://www.seuss.plus/">Seuss+</a>, a strategy and execution partner helping <a href="https://www.seuss.plus/who-we-help/">biotech sponsors</a> optimize vendor relationships across clinical development. With more than a decade of experience in <a href="https://www.seuss.plus/clinical-trial-vendor-optimization-services/">vendor governance</a>, <a href="https://www.seuss.plus/risk-management-setup-for-biotech-clinical-trials/">risk management</a>, and <a href="https://www.seuss.plus/stage-4-optimization/">clinical trial execution</a>, Kieran works with biotech leadership teams to build the oversight systems that protect timelines, budgets, and data integrity. Learn more at <a href="https://www.seuss.plus/">seuss.plus</a>.</p>



<p>The post <a href="https://www.kieranengels.com/vendor-selection-operational-proof-governance/">Why Best-Fit Vendor Is Dead, And What to Look for Instead</a> appeared first on <a href="https://www.kieranengels.com">Kieran Engels</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.kieranengels.com/vendor-selection-operational-proof-governance/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
