RFP Assistant

The solution integrates AI at multiple steps of the RFP drafting workflow.

Requirements Matrix Extraction

Our automation leverages two key features of modern Large Language Models (LLMs). The first is the "thinking" capability, which enables a form of System 2 thinking as described in Daniel Kahneman's book "Thinking, Fast and Slow": a deliberate "slow, effortful, and logical" mode of processing that supports thorough and accurate analysis of RFP documents.

The second feature is structured output generation, which allows us to extract information in a consistent, well-defined format. Our schema includes:

  • Requirement Number: A sequential identifier
  • Requirement Text: The verbatim requirement from the RFP document
  • Reference Page: Source page number for easy traceability
  • Category: Classified into predefined types (Scope of Work, Project Timeline, Cost, Resources, Proposal Submission, Proposal Content, Proposal Timeline)
  • Sub-category: Automatically generated based on requirement context
  • Risk Assessment: Identified risks or concerns related to the requirement
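A schema like the one above can be expressed in code and handed to the LLM's structured-output mode. The sketch below uses plain Python dataclasses; field names are illustrative, and a production system might instead define Pydantic models or a JSON schema, depending on the LLM provider's API.

```python
from dataclasses import dataclass
from enum import Enum

class Category(str, Enum):
    """The predefined requirement categories from our schema."""
    SCOPE_OF_WORK = "Scope of Work"
    PROJECT_TIMELINE = "Project Timeline"
    COST = "Cost"
    RESOURCES = "Resources"
    PROPOSAL_SUBMISSION = "Proposal Submission"
    PROPOSAL_CONTENT = "Proposal Content"
    PROPOSAL_TIMELINE = "Proposal Timeline"

@dataclass
class Requirement:
    requirement_number: int   # sequential identifier
    requirement_text: str     # verbatim text from the RFP document
    reference_page: int       # source page number for traceability
    category: Category        # one of the predefined types
    sub_category: str         # generated from requirement context
    risk_assessment: str      # identified risks or concerns
```

Because every extracted requirement conforms to this shape, downstream steps (streaming, validation, search) can consume the data without ad hoc parsing.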

For enhanced user experience, we implement real-time streaming of requirements to the frontend as they are processed.
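One common way to implement this streaming, sketched below under the assumption that extraction yields one requirement dict at a time, is to frame each requirement as a server-sent-events message the frontend can render immediately (the function name and SSE transport are illustrative, not a description of our exact wire format).

```python
import json
from typing import Iterable, Iterator

def stream_requirements(extracted: Iterable[dict]) -> Iterator[str]:
    """Yield each extracted requirement as an SSE 'data:' event
    as soon as it is produced, rather than waiting for the full list."""
    for req in extracted:
        yield f"data: {json.dumps(req)}\n\n"
```

The frontend appends each event to the requirements matrix as it arrives, so users see results while the rest of the document is still being processed.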

Writing Assistance

We developed a custom text editor that provides writing assistance inline, so writers never have to leave their writing flow.

Writing Tools:
  • Change the tone of the text to one of the following options:
    • Professional and Formal
    • Confident and Assertive
    • Client-Centric and Empathetic
    • Clear and Concise
    • Persuasive and Convincing
  • Fix spelling & grammar
  • Fix formatting
  • Convert to bullet points
  • Convert to numbered list
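Each tool ultimately maps a user action to an instruction for the LLM. A minimal sketch of how the tone options might be wired up is below; the prompt wording and function names are assumptions for illustration, not our production prompts.

```python
# Maps each tone option from the editor menu to an LLM instruction.
TONE_PROMPTS = {
    "Professional and Formal": "Rewrite the text in a professional, formal tone.",
    "Confident and Assertive": "Rewrite the text in a confident, assertive tone.",
    "Client-Centric and Empathetic": "Rewrite the text in a client-centric, empathetic tone.",
    "Clear and Concise": "Rewrite the text so it is clear and concise.",
    "Persuasive and Convincing": "Rewrite the text in a persuasive, convincing tone.",
}

def build_tone_prompt(tone: str, text: str) -> str:
    """Assemble the instruction sent to the LLM for a tone change."""
    if tone not in TONE_PROMPTS:
        raise ValueError(f"Unsupported tone: {tone}")
    return f"{TONE_PROMPTS[tone]}\n\nText:\n{text}"
```

The other tools (grammar, formatting, list conversion) follow the same pattern with different instructions.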

This lets the writer focus on content rather than formatting and reduces unforced writing errors.

Proposal Content Validator

The system validates responses against the RFP's proposal content requirements for completeness. It highlights the requirements that have been addressed and flags any that have been missed.

Here as well, we use the same two LLM features - "thinking" and "structured output" - to build this capability.
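The core of the completeness check can be sketched as a comparison between the extracted requirement numbers and the responses drafted so far. In the sketch below a requirement counts as addressed if any non-empty response exists for it; in the real system the LLM additionally judges whether the response actually satisfies the requirement (names and structures here are illustrative).

```python
from dataclasses import dataclass

@dataclass
class CoverageReport:
    addressed: list  # requirement numbers with a drafted response
    missed: list     # requirement numbers with no response yet

def check_coverage(requirements: dict, responses: dict) -> CoverageReport:
    """Compare requirements against drafted responses.

    `requirements` maps requirement number -> requirement text;
    `responses` maps requirement number -> drafted response text.
    """
    addressed = [n for n in requirements
                 if n in responses and responses[n].strip()]
    missed = [n for n in requirements if n not in addressed]
    return CoverageReport(addressed=addressed, missed=missed)
```

The report drives the UI highlighting: addressed requirements are marked done, and missed ones are surfaced for the writer to act on.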

Suggest Relevant Past Projects

Our system maintains a comprehensive knowledge base of past project case studies, leveraging AWS Bedrock Knowledge Base for powerful semantic search capabilities.

The knowledge base setup process:

  1. Case studies from our past projects are stored securely in AWS S3
  2. AWS Bedrock Knowledge Base processes these documents, creating vector embeddings stored in PostgreSQL (pgvector)

When a new RFP is uploaded:

  1. The system identifies and extracts key technical requirements
  2. These requirements are transformed into optimized search queries
  3. The system then retrieves the most relevant past projects for each query
  4. The results for each query are ranked and then merged into an aggregate list of recommended past projects
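The merge in step 4 can be done with reciprocal rank fusion, a standard way to combine ranked result lists from multiple queries; the sketch below assumes each query returns project IDs ordered best-first, and the constant k=60 is the conventional default rather than a value from our system.

```python
from collections import defaultdict

def reciprocal_rank_fusion(ranked_lists, k: int = 60):
    """Merge per-query ranked project lists into one aggregate ranking.

    A project earns 1 / (k + rank) for each list it appears in, and the
    per-list scores are summed, so projects that rank well across many
    requirement queries rise to the top of the aggregate list.
    """
    scores = defaultdict(float)
    for ranked in ranked_lists:
        for rank, project_id in enumerate(ranked, start=1):
            scores[project_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

The top entries of the fused list are the past projects surfaced to the proposal writer.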