This article provides researchers, scientists, and drug development professionals with a comprehensive framework for assessing and optimizing comprehension within the informed consent process. Drawing on the latest regulatory guidance and empirical research, we explore foundational challenges in comprehension, detail innovative methodological approaches for digital and co-created content, offer strategies for troubleshooting common pitfalls in risk communication and readability, and validate these techniques through comparative analysis of traditional versus modern consent models. The goal is to equip professionals with actionable insights to enhance participant understanding, meet ethical obligations, and improve the quality of clinical research.
This technical support center provides researchers, scientists, and drug development professionals with practical guides and solutions for a common and critical flaw in clinical trials: informed consent documents that are too complex for participants to understand. Use the resources below to diagnose, troubleshoot, and resolve issues related to document complexity and participant comprehension.
Problem: Participants are not fully understanding the informed consent forms (ICFs), potentially compromising the ethical integrity of the trial and leading to poor recruitment or retention.
Investigation & Diagnosis: Follow this logical workflow to systematically identify the root cause of poor participant comprehension.
Diagnostic Steps:
Measure Document Length:
Calculate Readability Scores: Use established metrics to quantitatively assess text difficulty. The table below summarizes key metrics and their implications.
| Readability Metric | Target Range | Problematic Range (as found in COVID-19 trials) | Interpretation |
|---|---|---|---|
| Flesch-Kincaid Grade Level (FKGL) [1] | 8th grade or lower [2] | Median: 9.8 (9.1-10.8) [1] | Corresponds to a 14-15 year old's reading level [1] |
| Flesch Reading Ease (FRES) [1] | 60-100 ("Standard" to "Easy") | Median: 54.6 (47.0-58.3) [1] | Scores below 60 are classified as "difficult" for comprehension [1] |
| Gunning-Fog Index (GFOG) [1] | 8 or lower | Median: 11.8 (10.4-13.0) [1] | Indicates complex sentence structure and word choice [1] |
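For quick internal checks, all three metrics can be computed directly from their published formulas. The sketch below uses a rough vowel-group heuristic for syllable counting, so its scores will differ slightly from dedicated readability software; treat it as a screening tool, not a validated instrument.

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count vowel groups, discount a trailing silent "e".
    # Real readability tools use pronunciation dictionaries instead.
    n = len(re.findall(r"[aeiouy]+", word.lower()))
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def readability(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    complex_words = sum(1 for w in words if count_syllables(w) >= 3)
    wps = len(words) / len(sentences)   # average words per sentence
    spw = syllables / len(words)        # average syllables per word
    return {
        "FKGL": 0.39 * wps + 11.8 * spw - 15.59,
        "FRES": 206.835 - 1.015 * wps - 84.6 * spw,
        "GFOG": 0.4 * (wps + 100 * complex_words / len(words)),
    }

icf = ("You will receive the study drug or a placebo. "
       "Tell the study team about any side effects.")
scores = readability(icf)
print({k: round(v, 1) for k, v in scores.items()})
```

A plain-language passage like the example above lands at or below the 8th-grade FKGL target and above 60 FRES; a dense ICF paragraph typically will not.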
Assess Language Complexity:
Problem: Manually rewriting complex ICFs to be simpler is time-consuming and may risk omitting critical information.
Solution: Implement a structured, LLM-assisted process to refine and improve consent documents. The following workflow outlines a proven methodology.
Resolution Steps:
Q1: What is the single most important improvement I can make to my informed consent form? A: Focus on reducing the Flesch-Kincaid Grade Level to 8th grade or lower. This directly addresses the gap between document complexity and the average reading ability of the population, a flaw identified as critical in recent research [1] [2].
Q2: Our ICFs are long because of legal and regulatory requirements. How can we shorten them? A: While total length is a challenge, the focus should be on the comprehensibility of the key information. The 2018 Common Rule mandates a "concise and focused presentation of key information" at the beginning of the ICF [2]. Use structured content methodologies to reuse approved boilerplate text and automate formatting, ensuring consistency and saving time without compromising legal requirements [3].
Q3: Are there proven technological solutions to this problem? A: Yes. Recent mixed-methods research demonstrates that Large Language Models (LLMs) can successfully generate ICFs with significantly improved readability, understandability, and actionability. One study showed LLM-generated ICFs achieved a Flesch-Kincaid grade level of 7.95 versus 8.38 for human-generated versions, and a 100% score in actionability [2].
Q4: Who is responsible for fixing this flaw in a clinical trial? A: Optimizing informed consent is a shared ethical commitment. Sponsors, investigators, and institutional review boards (IRBs) all have a role. Regulatory bodies like the FDA are also promoting a collaborative, global approach to improve the clarity and brevity of consent forms [4] [5].
Q5: How can I measure the "actionability" of an ICF? A: Actionability refers to how well the document enables a person to know what to do based on the information. This can be measured using tools like the RUAKI indicator, which contains specific items to assess whether the ICF clearly states the actions a participant needs to take [2].
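As a concrete illustration, scoring a binary-item checklist of this kind reduces to the percentage of items met. The item labels below are placeholders, not the actual RUAKI wording:

```python
def score_checklist(responses: dict[str, bool]) -> float:
    # Percentage of binary items met; the RUAKI indicator scores 18 such items.
    return 100 * sum(responses.values()) / len(responses)

# Placeholder responses for 18 items (labels are NOT the actual RUAKI items).
responses = {f"item_{i:02d}": (i % 3 != 0) for i in range(1, 19)}
print(f"{score_checklist(responses):.1f}% of items met")
```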
The following table details key methodological tools and frameworks for conducting research on informed consent comprehension.
| Tool / Material | Function / Explanation | Application in Comprehension Research |
|---|---|---|
| Flesch-Kincaid Grade Level | A validated software algorithm that calculates U.S. grade-level readability based on sentence length and syllables per word [1]. | Provides a quantitative, objective measure of text complexity to benchmark against population literacy levels [1]. |
| Readability, Understandability, and Actionability of Key Information (RUAKI) Indicator | An assessment framework comprising 18 binary-scored items that evaluate the accessibility, comprehensibility, and actionability of information [2]. | Serves as a structured evaluation tool for researchers to empirically test and improve the effectiveness of ICF key information sections [2]. |
| Large Language Model (e.g., Mistral 8x22B) | An artificial intelligence model trained on vast amounts of text data, capable of summarizing, paraphrasing, and simplifying complex language [2]. | Functions as an experimental intervention in research studies to automate and optimize the generation of simplified, participant-friendly consent forms [2]. |
| Prompt Engineering (Least-to-Most) | A technique for interacting with LLMs that breaks down complex tasks into a sequence of simpler, manageable prompts [2]. | A critical methodological step in research protocols to ensure LLMs produce accurate, complete, and appropriately-formatted ICF content [2]. |
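The least-to-most technique in the last row can be sketched as a chain of prompts in which each subproblem's answer feeds the next. The decomposition and the ask_llm callable below are illustrative placeholders, not the published study protocol or any real model API:

```python
def simplify_icf(icf_text: str, ask_llm) -> str:
    # Least-to-most prompting: solve simpler subproblems first and feed
    # each answer forward into the next, harder prompt.
    terms = ask_llm("List the technical terms in this consent text:\n" + icf_text)
    glossary = ask_llm("Define each listed term at an 8th-grade level:\n" + terms)
    return ask_llm(
        "Rewrite the consent text at an 8th-grade reading level, using these "
        "plain-language definitions:\n" + glossary + "\n\nText:\n" + icf_text
    )

# Stub client that records the prompt chain, for demonstration only.
log = []
def stub(prompt: str) -> str:
    log.append(prompt)
    return f"<response {len(log)}>"

result = simplify_icf("The subject will undergo randomization.", stub)
print(len(log), "prompts issued")
```

In practice, ask_llm would wrap your model client (e.g., a Mistral deployment), and each intermediate output should be reviewed for accuracy before the final rewrite is accepted.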
Q1: What are the core requirements of the FDA's 2024 draft guidance on informed consent? The draft guidance introduces two core requirements for informed consent forms (ICFs). First, consent must begin with a concise "Key Information" section designed to help prospective subjects understand the main reasons for or against participating. Second, the entire consent document must be presented in a manner that facilitates understanding [6] [7]. This aims to ensure that individuals can make a truly informed decision.
Q2: What specific content should be included in the "Key Information" section? The Key Information section should provide a focused overview of the most important details [8]. The FDA recommends including:
Q3: What formatting and presentation strategies does the FDA recommend to improve comprehension? The guidance encourages the use of plain language and clear organizational tools. A sample approach endorsed by the FDA is a "bubble format" that uses rounded boxes to present discrete units of information, which research has shown can improve understanding [9] [7]. Other effective strategies include using bulleted lists to break down complex information, combining text with visual aids, and employing electronic consent processes where appropriate [7] [8].
Q4: Is this guidance binding, and what is its current status? As of its issuance in March 2024, this document is a draft guidance and contains non-binding recommendations [6]. It was issued to align with provisions in the revised Common Rule and a corresponding FDA proposed rule. The public comment period for this draft guidance was open until April 30, 2024 [9] [7].
Q5: How can I provide feedback on this draft guidance? You may submit comments at any time, even after the initial comment period. You can submit:
Electronic comments to www.regulations.gov (Docket FDA-2022-D-2997).

| Challenge | Symptom | Solution & Best Practices |
|---|---|---|
| Overlengthy Key Information | Key section becomes a dense "mini-consent," defeating its purpose. | Strictly summarize only the most critical "why/why not" points. Use cross-references to the main document for comprehensive details and avoid repeating all risk information [8]. |
| Poor Participant Comprehension | Low enrollment, high participant questions, or poor performance on comprehension assessments. | Adopt the "bubble format" or similar visual grouping for discrete information chunks. Use simple language (avoid technical jargon) and integrate visual aids or illustrations, especially for complex concepts or low-literacy populations [9] [7] [8]. |
| Inconsistent Application Across Studies | Wide variability in ICF structure and quality between different study sites or protocols. | Develop a standardized ICF template with a predefined Key Information section structure for your organization. Provide training for investigators and IRBs on the guidance's principles to ensure consistent interpretation and review [8]. |
| Difficulty Explaining Complex Trial Design | Participants struggle with concepts like biomarker-driven enrollment or crossover arms. | In the Key Information section, focus on the practical implications for the participant (e.g., "Your tumor will be tested for a specific marker to see if you are eligible"). Use a visual diagram or flowchart in the full ICF to explain the study design. |
Objective: To quantitatively measure the effectiveness of a new ICF format in improving participant understanding. Methodology:
Objective: To gather in-depth user feedback on the clarity, organization, and usability of the ICF. Methodology:
ICF Comprehension Assessment Workflow
| Research 'Reagent' | Function in Optimizing Informed Consent |
|---|---|
| Plain Language Guidelines | Provides standardized rules for simplifying complex medical and technical jargon, serving as a foundational tool for drafting understandable consent forms. |
| Readability Assessment Tools (e.g., Flesch-Kincaid) | Quantifies the reading grade level of an ICF, allowing researchers to objectively measure and adjust text complexity to match the target population. |
| Validated Comprehension Questionnaires | Acts as a calibrated instrument to quantitatively measure participant understanding of core ICF elements, providing data for evidence-based ICF improvements. |
| Electronic Consent (eConsent) Platforms | Enables the integration of interactive elements (e.g., hyperlinks, embedded videos, knowledge checks) to present key information in a more engaging and accessible manner. |
| Visual Aid Libraries | A repository of pre-designed, culturally appropriate illustrations and icons that can be used to explain complex procedures or concepts within the Key Information section. |
Q1: Why is the color contrast of text in my experimental workflow diagrams critical for research on informed consent? Poor color contrast can obscure critical information, reducing the comprehension of research protocols for participants. Adhering to WCAG 2.1 Level AAA standards ensures that visual aids are accessible to individuals with low vision or color deficiencies, which is a fundamental requirement for valid informed consent comprehension assessment [10] [11]. Diagrams with insufficient contrast can introduce bias into your research results.
Q2: How can I quickly check if the colors in my chart or diagram have sufficient contrast? You can use free online tools like the WebAIM Contrast Checker [12]. These tools allow you to input foreground and background color values (in HEX or RGB) and will immediately calculate the contrast ratio and indicate if it passes WCAG AA and AAA standards for both normal and large text [12].
Q3: What are the minimum contrast ratios I should aim for in my research materials? The required contrast ratio depends on the text size and the specific WCAG conformance level. For the enhanced (Level AAA) standard, which is recommended for critical research materials, the requirements are stricter [10] [11].
| Text Type | Size and Weight Definition | Minimum Contrast Ratio (Level AAA) |
|---|---|---|
| Large Text | 18 point (24px) or larger, or 14 point (18.66px) and bold [12] [11] | 4.5:1 [10] [11] |
| Normal Text | Anything smaller than large text [12] | 7:1 [10] [11] |
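These ratios follow directly from the WCAG relative-luminance formula, which can be implemented in a few lines for automated checks. The sketch below reproduces the WCAG 2.x computation; for compliance decisions, prefer an audited tool such as the WebAIM checker:

```python
def relative_luminance(hex_color: str) -> float:
    # sRGB relative luminance as defined by WCAG 2.x.
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4))
    def linearize(c: float) -> float:
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b)

def contrast_ratio(fg: str, bg: str) -> float:
    # (L_lighter + 0.05) / (L_darker + 0.05); ranges from 1:1 to 21:1.
    hi, lo = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (hi + 0.05) / (lo + 0.05)

print(f"{contrast_ratio('#000000', '#FFFFFF'):.1f}:1")  # black on white
```

Black on white yields the maximum 21:1 ratio, while a mid-gray such as #777777 on white fails the 7:1 Level AAA threshold for normal text.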
Q4: In Graphviz, how do I explicitly set a high-contrast text color for a node?
You must define both the fillcolor (background of the node) and the fontcolor (color of the text) for each node or subgraph in your DOT script. Relying on default settings often leads to poor contrast. The following example demonstrates the correct syntax:
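A minimal DOT sketch follows; the node names and hex values are illustrative choices that exceed the 7:1 Level AAA ratio, not values taken from any cited study:

```dot
digraph icf_workflow {
    node [shape=box, style=filled];

    // Define fillcolor and fontcolor together; never rely on defaults.
    recruit [label="Recruit participants", fillcolor="#1F3864", fontcolor="white"];
    assess  [label="Assess comprehension", fillcolor="#FFF2CC", fontcolor="black"];

    recruit -> assess;
}
```

Pairing a dark fill with white text (and a light fill with black text) is the simplest way to stay above the AAA thresholds; verify each pair with a contrast checker before finalizing the diagram.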
Problem: Participants incorrectly estimate the likelihood of research risks and benefits when frequency data is presented only in dense textual formats [10].
Solution: Supplement text with well-designed visual aids.
Problem: The text inside nodes is difficult to read because the color does not sufficiently contrast with the node's background. This often happens when colors are chosen manually without verification [10].
Solution: Systematically apply and verify color choices in your diagramming tools.
Use the node_aes() function to explicitly set the fontcolor based on the fillcolor [13].

Essential digital tools and their functions for creating accessible visual communication materials.
| Tool or Resource Name | Function in Research |
|---|---|
| WebAIM Contrast Checker [12] | Validates the contrast ratio between foreground and background colors to ensure compliance with WCAG guidelines. |
| Graphviz (DOT language) | Generates consistent, reproducible diagrams for illustrating experimental workflows and conceptual models. |
| color-contrast-checker (npm package) [15] | Allows programmatic validation of color contrast within automated data visualization pipelines. |
| prismatic::best_contrast() (R function) [16] | Automatically selects the best contrasting text color (e.g., white or black) for a given background color in R graphics. |
| W3C Contrast (Enhanced) Rule [10] | Provides the formal technical standard and testing framework for achieving 7:1 (normal text) and 4.5:1 (large text) contrast ratios. |
1. Objective: To quantitatively evaluate whether supplementing text-based risk frequency information with icon arrays improves comprehension accuracy in informed consent forms.
2. Methodology:
3. Data Analysis:
The diagram below visualizes the experimental protocol for assessing informed consent comprehension, from participant recruitment to data analysis. The node colors and text are formatted to ensure high accessibility.
This flowchart outlines the decision-making process for ensuring text is readable against its background in scientific diagrams, a critical step for creating accessible visual aids.
For researchers aiming to optimize the assessment of informed consent comprehension, the Quality of Informed Consent (QuIC) and the Decision-Making Control Instrument (DMCI) serve as essential, validated tools. These instruments allow for the standardized and empirical evaluation of two core components of a valid consent process: the participant's understanding of the information presented (QuIC) and the voluntariness of their decision (DMCI).
The QuIC is a reliable questionnaire that measures both the actual (objective) and perceived (subjective) understanding that research subjects have of the clinical trial they are joining [17]. It was developed to incorporate the basic elements of informed consent specified in federal regulations and can be adapted for different study populations [18] [17].
The DMCI was developed to fill a gap in the empirical assessment of the voluntariness of consent. It measures a participant's perception of control over the decision-making process, assessing three dimensions: Self-Control, Absence of Control, and Others’ Control [19] [20]. Using these tools in tandem provides a more holistic assessment of the informed consent process, moving beyond theoretical assumptions to data-driven insights.
The table below summarizes the core characteristics of the QuIC and DMCI to help researchers select the appropriate tool.
Table 1: Key Characteristics of the QuIC and DMCI
| Feature | Quality of Informed Consent (QuIC) | Decision-Making Control Instrument (DMCI) |
|---|---|---|
| Primary Construct Measured | Understanding and Comprehension | Voluntariness and Perceived Control |
| Key Domains | Objective understanding (factual knowledge), Subjective understanding (perceived knowledge) [21] [17]. | Self-Control, Absence of Control, Others' Control [19] [20]. |
| Typical Format | Two-part questionnaire (Part A: objective knowledge; Part B: perceived understanding) [18] [21]. | 9-item questionnaire with three subscales [20]. |
| Scoring & Interpretation | Higher scores indicate better understanding [21]. | Higher total scores (max 30) indicate greater perceived autonomy [21]. |
| Validation Populations | Cancer clinical trial patients [17], adapted for minors, pregnant women, and adults in vaccine trials [18]. | Parents making decisions for seriously ill children [19] [20]. |
| Administration Time | Brief; average of ~7 minutes [17]. | Not explicitly stated, but designed for use soon after decision is made [19]. |
A 2025 randomized controlled trial provides a robust methodology for using the QuIC and DMCI to compare two consent delivery modalities [22] [21].
The following workflow diagram illustrates the experimental design of this comparative study:
A 2025 cross-sectional study evaluated digitally implemented informed consent (eIC) based on i-CONSENT guidelines, showcasing the QuIC's adaptability [18].
Table 2: Comprehension Scores and Preferences from the Multinational eIC Study
| Population Group | Sample Size (n) | Mean Objective QuIC Score (SD) | Comprehension Category | Preferred Format |
|---|---|---|---|---|
| Minors | 620 | 83.3 (13.5) | Adequate | Video (61.6%) |
| Pregnant Women | 312 | 82.2 (11.0) | Adequate | Video (48.7%) |
| Adults | 825 | 84.8 (10.8) | Adequate | Text (54.8%) |
The table below lists the key "research reagents" — the essential tools and materials — required to implement the methodologies described in this guide effectively.
Table 3: Essential Research Reagents and Materials for Informed Consent Assessment
| Item Name | Specifications / Recommended Type | Primary Function in Research |
|---|---|---|
| Validated QuIC Questionnaire | Original (20 objective, 14 subjective items) [17] or culturally/population-adapted versions [18]. | Measures objective and subjective understanding of the informed consent. |
| Validated DMCI Questionnaire | 9-item form measuring Self-Control, Absence of Control, and Others' Control [20]. | Assesses the perceived voluntariness of the consent decision. |
| Health Literacy Assessment Tool | Short Assessment of Health Literacy-English (SAHL-E) [21] or equivalent in target language. | Controls for a key confounding variable that impacts comprehension scores. |
| Teleconsent Platform | HIPAA-compliant video conferencing software with screen sharing and e-signature capabilities (e.g., Doxy.me) [21] [24]. | Facilitates remote consent administration while maintaining interaction and documentation. |
| Digital Consent (eIC) Materials | Multi-format materials: Layered web pages, narrative videos, infographics [18]. | Enhances participant engagement and comprehension by catering to diverse preferences. |
The following diagram outlines the logical relationship and workflow for integrating these tools into a comprehensive consent assessment strategy:
This guide provides solutions to common technical and user-experience issues encountered when implementing or using electronic informed consent (eConsent) platforms in a research setting.
Q: The participant did not receive the expected eConsent email. What should I do? A: First, check the eConsent status in your system. If the status is "Delivered," verify the participant's email address in their record is correct. If it is incorrect, correct the address, cancel the original eConsent form, and resend it. If the email is correct, ask the participant to check their spam or trash folders. If the status is not "Delivered" after an hour, there may be a system error, and you should contact your platform's technical support [25].
Q: A participant has lost their eConsent email or deleted it before completing the form. How can they proceed? A: Ask the participant to check their spam or trash folders. If they have not yet registered an account, you can cancel the original eConsent form and send a new one. If they have already registered but not signed, they can log in directly to the patient portal, where their pending tasks will be displayed [25].
Q: A participant is having trouble logging in because their credentials are not recognized. A: Confirm the participant is using the same email address that is on file with your site. Verify that the Caps Lock key is not on and assist them with the password reset function if needed. Also, ensure the participant is trying to log in to the correct regional website for their account (e.g., the EU site vs. the US site) [25].
Q: A participant reports the text is too small to read comfortably.
A: In the web browser, participants can use the zoom function (Ctrl and the + key on Windows, Cmd and the + key on Mac) to enlarge the text. Furthermore, ensure your eConsent platform supports accessibility features like screen readers and keyboard navigation [25].
Q: Why is a consent form grayed out or unavailable for a participant to review? A: This typically occurs when forms are required to be completed in a specific signing order. The participant must first complete the consent forms that are higher on the task list before they can access subsequent ones [25].
Q: A participant cannot sign the form because the system states they haven't read all sections. A: Guide the participant to the table of contents, which should indicate which sections are incomplete (often marked with an orange highlight or lack a green checkmark). The participant needs to view each section and answer all required questions before the signature field becomes active [25].
The following data and methodologies are derived from recent multinational studies evaluating the effectiveness of eConsent materials designed according to the i-CONSENT guidelines.
A 2025 cross-sectional study evaluated the comprehension and satisfaction of 1,757 participants across Spain, the UK, and Romania using tailored eConsent materials. Comprehension was assessed using an adapted Quality of the Informed Consent (QuIC) questionnaire, with objective comprehension scores categorized as low (<70%), moderate (70%-80%), adequate (80%-90%), or high (≥90%). Satisfaction was measured via Likert scales, with scores ≥80% deemed acceptable [26] [18] [27].
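The study's scoring bands translate directly into a small helper for reproducing the category labels used in the tables that follow. Note that assigning exact boundary values (e.g., 80%) to the higher band is our assumption, since the cited ranges overlap at their edges:

```python
def comprehension_category(score_pct: float) -> str:
    # Bands from the 2025 eIC study: low (<70%), moderate (70%-80%),
    # adequate (80%-90%), high (>=90%). Boundary values go to the
    # higher band here, which is an assumption.
    if score_pct >= 90:
        return "high"
    if score_pct >= 80:
        return "adequate"
    if score_pct >= 70:
        return "moderate"
    return "low"

for group, mean in {"Minors": 83.3, "Pregnant women": 82.2, "Adults": 84.8}.items():
    print(f"{group}: {comprehension_category(mean)}")
```

Applied to the mean scores reported for the three groups, all fall in the adequate band, matching the published categorization.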
Table 1: Objective Comprehension Scores by Participant Group
| Participant Group | Sample Size (n) | Mean Comprehension Score (SD) | Comprehension Category |
|---|---|---|---|
| Minors | 620 | 83.3 (13.5) | Adequate |
| Pregnant Women | 312 | 82.2 (11.0) | Adequate |
| Adults | 825 | 84.8 (10.8) | Adequate |
Source: Fons-Martinez et al., 2025 [26] [18] [27]
Table 2: Participant Satisfaction with eConsent Materials
| Participant Group | Satisfaction Rate | Key Feedback |
|---|---|---|
| Minors | 604/620 (97.4%) | - |
| Pregnant Women | 303/312 (97.1%) | - |
| Adults | 804/825 (97.5%) | 777/825 (94.2%) reported materials facilitated understanding |
Source: Fons-Martinez et al., 2025 [26] [18] [27]
Protocol 1: Co-creation of eConsent Materials The following methodology was used to develop the eConsent materials evaluated in the 2025 study [26] [18]:
Protocol 2: Assessing Comprehension and Satisfaction This protocol outlines the assessment phase used in the referenced study [26] [18]:
Table 3: Preferred eConsent Format by Participant Group
| Participant Group | Preferred Format | Proportion Preferring Format |
|---|---|---|
| Minors | Video | 382/620 (61.6%) |
| Pregnant Women | Video | 152/312 (48.7%) |
| Adults | Text | 452/825 (54.8%) |
Source: Fons-Martinez et al., 2025 [26] [18] [27]
Key findings from recent research include:
The following diagram illustrates the logical workflow for a multimodal eConsent system designed to optimize participant comprehension.
Multimodal eConsent Comprehension Workflow
Table 4: Key Research Reagents and Solutions for eConsent Implementation
| Item | Function in eConsent Research |
|---|---|
| Digital Platform with Multimodal Capabilities | A secure system that hosts and delivers eConsent materials in various formats (web, video, infographics, documents) and manages the consenting workflow [26] [29]. |
| Quality of Informed Consent (QuIC) Questionnaire | A validated instrument adapted to assess objective and subjective comprehension of the consent information among participants [26] [18] [27]. |
| Co-creation Framework (e.g., Design Thinking) | A participatory methodology for involving target populations (minors, pregnant women, etc.) in the design of eConsent materials to ensure relevance and clarity [26] [18]. |
| Professional Translation & Cultural Adaptation Protocol | A rigorous process for translating and culturally adapting consent materials, ensuring they are contextually appropriate for multinational trials [26] [18]. |
| Comprehension Check Modules | Integrated interactive quizzes or questions within the eConsent platform to verify participant understanding before signing [29] [30]. |
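The gating behavior in the last row can be sketched in a few lines: the signature step stays locked until every required item is answered correctly. The questions and pass rule here are placeholders, not taken from any cited platform:

```python
# Illustrative gating logic for an eConsent knowledge check: the signature
# step unlocks only after every required quiz item is answered correctly.
QUIZ = {
    "Can you withdraw from the study at any time?": "yes",
    "Will you definitely receive the active drug?": "no",
}

def signature_unlocked(answers: dict[str, str]) -> bool:
    return all(answers.get(q, "").strip().lower() == a for q, a in QUIZ.items())

print(signature_unlocked({"Can you withdraw from the study at any time?": "Yes",
                          "Will you definitely receive the active drug?": "no"}))
```

A production module would also log failed attempts and route the participant back to the relevant section rather than simply blocking the signature.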
Informed consent (IC) is a cornerstone of ethical clinical research, yet comprehensive studies consistently reveal significant gaps in participant comprehension [18]. The i-CONSENT project addresses these challenges by improving IC materials to make them more comprehensible, accessible, and tailored to the specific needs of diverse populations [18]. This approach recognizes that effective IC must meet five key criteria: voluntariness, capacity, disclosure, understanding, and decision-making [18]. Co-creation represents a paradigm shift in developing these materials, moving from a top-down, researcher-driven process to a collaborative approach that actively involves target populations as partners in material development [31]. This methodology acknowledges the value of participant voices and experiences, ultimately leading to more effective comprehension outcomes.
Co-creation is a collaborative approach that considers the interests and voices of all stakeholders [32]. In education and research contexts, this means not only contextualizing but also creating partnerships to serve the learners and their communities [32]. When applied to informed consent material development, co-creation involves inviting target populations to participate in constructing knowledge or designing materials, activities, and assessments [31]. This process acknowledges that participants possess valuable insights about their own cognitive needs, literacy levels, and communication preferences that researchers may lack.
Co-creation concepts describe the development of learning material by employees, for employees in organizational settings [33], and this same principle applies to research contexts where materials are developed by participants, for participants. The approach enables the design of situated, adapted learning materials that are easier for target audiences to understand and are just-in-time available, thereby counteracting cognitive overload [33].
The theoretical foundation of co-creation draws from situated learning theory, which emphasizes that learning is most effective when embedded in the context in which it will be applied [33]. This is particularly relevant for informed consent processes, where participants must understand complex information well enough to make meaningful decisions about their participation. Co-creation offers numerous demonstrated benefits:
Additionally, the creation of content has positive effects on the diverse participants involved in the co-creation process, including increasing autonomy, self-regulation, and responsibility; improving performance; and enhancing critical reflection and communication skills [33].
The degree of participant involvement in co-creation can vary significantly across a spectrum from consultation to full partnership. Bovill et al. (2017) present different levels of student involvement in the curriculum that can be adapted for research contexts [31]:
Table: Levels of Participation in Co-Creation
| Participation Level | Description | Application in IC Research |
|---|---|---|
| Dictated Curriculum | Participants have no control or input into the material design | Traditional researcher-developed consent forms |
| Pedagogical Consultation | Researcher incorporates participants' ideas and feedback with a certain group | Focus groups providing feedback on draft materials |
| Partnership Classroom | All participants contribute ideas and feedback throughout the process | Iterative design with entire participant cohorts |
| Curriculum Co-design | Working with participants to redesign materials or co-design new ones | Participant representatives join material development teams |
| Knowledge Co-creation | Engaging participants in research activities that contribute to new knowledge | Participants as co-researchers in developing and testing IC frameworks |
Successful implementation of co-creation in informed consent material development involves several practical approaches drawn from validated methodologies:
Design Thinking Sessions: The i-CONSENT project conducted design thinking sessions with children and parents, as well as sessions with children alone, to develop appropriate materials for minors [18]. Similarly, they held two design thinking sessions with pregnant women to develop tailored materials for this population [18].
Multidisciplinary Collaboration: A multidisciplinary team comprising clinical trial physicians, epidemiologists, a sociologist, a journalist, and a nurse collaborated on the design of materials, ensuring they were scientifically accurate while addressing the cognitive and cultural needs of participants [18].
Iterative Piloting: The development process includes piloting the contents of information sheets and surveys with target populations to refine materials before final implementation [18].
Layered Information Approaches: Implementing modular information designs that allow participants to access additional details or definitions by clicking on specific terms, accommodating varying levels of information needs [18].
The workflow for implementing co-creation in material development follows a systematic process:
The effectiveness of co-created materials has been rigorously evaluated in multiple studies. A cross-sectional study conducted with 1,757 participants across Spain, the United Kingdom, and Romania demonstrated significant success [18]. The study involved 620 minors, 312 pregnant women, and 825 adults who reviewed electronically delivered informed consent (eIC) materials developed through co-creation methodologies [18].
Table: Comprehension Scores by Population Group
| Participant Group | Sample Size | Mean Comprehension Score | Standard Deviation | Adequate Comprehension (80-90%) |
|---|---|---|---|---|
| Minors | 620 | 83.3% | 13.5 | Yes |
| Pregnant Women | 312 | 82.2% | 11.0 | Yes |
| Adults | 825 | 84.8% | 10.8 | Yes |
These results demonstrate that co-created materials consistently achieved adequate comprehension levels (above 80%) across all demographic groups [18]. The study also revealed important demographic variations in comprehension. Women and girls outperformed men and boys (β=+.16 to +.36), and Generation X adults scored higher than millennials (β=+.26, P<.001) [18]. Interestingly, prior trial participation was associated with lower comprehension scores (β=−.47 to −1.77), suggesting that overconfidence from previous experience might negatively impact engagement with new consent materials [18].
Co-creation methodologies also revealed significant differences in format preferences across population groups, highlighting the importance of offering multiple modalities:
Table: Format Preferences by Participant Group
| Participant Group | Video Preference | Text Preference | Other Formats | Satisfaction Rate |
|---|---|---|---|---|
| Minors (n=620) | 61.6% (382) | 22.4% (139) | 16.0% (99) | 97.4% (604) |
| Pregnant Women (n=312) | 48.7% (152) | 34.9% (109) | 16.4% (51) | 97.1% (303) |
| Adults (n=825) | 28.2% (233) | 54.8% (452) | 17.0% (140) | 97.5% (804) |
These findings demonstrate that co-created materials achieved remarkably high satisfaction rates (exceeding 90%) across all groups [18]. The variation in format preferences underscores the importance of tailoring delivery methods to specific populations rather than taking a one-size-fits-all approach.
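The group differences in format preference can be verified from the counts in the table above with a standard chi-square test of independence, computed here from scratch with the standard library. This is an illustrative sketch, not the study's own analysis.

```python
# Chi-square test of independence on the format-preference counts reported
# above [18]: rows are minors, pregnant women, adults; columns are video,
# text, other.
counts = [
    [382, 139, 99],   # minors
    [152, 109, 51],   # pregnant women
    [233, 452, 140],  # adults
]

grand = sum(sum(row) for row in counts)
row_totals = [sum(row) for row in counts]
col_totals = [sum(col) for col in zip(*counts)]

chi2 = sum(
    (counts[i][j] - row_totals[i] * col_totals[j] / grand) ** 2
    / (row_totals[i] * col_totals[j] / grand)
    for i in range(3) for j in range(3)
)
# df = (3-1)*(3-1) = 4; the 5% critical value is 9.49, so a statistic this
# large confirms that preferences differ markedly across groups.
```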
Q: What are the most significant challenges in implementing co-creation for material development? A: The primary challenges include: (1) Power dynamics - co-creation "requires the teacher to relinquish some inherent power and, similarly, requires students to take responsibility in their empowered status as partners in the classroom" [31]; (2) Time management - the process requires additional time to acclimate participants to the process and expectations [31]; and (3) Cognitive load - participants may experience increased cognitive demands during the creation process [33].
Q: How can researchers address power imbalances in co-creation processes? A: Successful approaches include building trust through transparent communication, establishing clear guidelines for the co-creation process, acknowledging the value of participant expertise, and creating structured opportunities for meaningful input rather than token consultation [31].
Q: What methodological considerations are crucial for cross-cultural implementation of co-created materials? A: The i-CONSENT project demonstrated that while translated materials maintained high efficacy across countries, comprehension scores in Romania were lower among participants with lower educational levels (β=−1.05, P=.001) [18]. This highlights the need for cultural adaptation beyond mere translation, considering local customs, linguistic conventions, and educational contexts.
Q: How can researchers manage the increased cognitive load associated with co-creation? A: Strategies include breaking complex tasks into manageable steps, providing clear templates and guidelines, offering adequate technical support, and distributing development activities across multiple sessions to prevent participant fatigue [33].
The following diagram outlines a systematic approach to addressing common challenges in co-creation implementation:
Successful implementation of co-creation methodologies requires specific tools and approaches that function as "research reagents" in this context. The following table details essential components for effective co-creation in informed consent material development:
Table: Essential Research Reagents for Co-Creation Implementation
| Tool Category | Specific Implementation | Function | Example Applications |
|---|---|---|---|
| Participatory Design Frameworks | Design Thinking Sessions | Structured approach to collaborative problem-solving that emphasizes empathy and iteration | Sessions with minors and parents to develop age-appropriate consent materials [18] |
| Multimodal Content Delivery Systems | Layered Web Content | Modular information architecture allowing users to access details at their preferred depth | Website allowing participants to click terms for definitions [18] |
| Narrative Development Tools | Tailored Video Formats | Storytelling approaches designed for specific demographic groups | Question-and-answer style videos for pregnant women; narrative storytelling for minors [18] |
| Assessment Instruments | Adapted Quality of Informed Consent Questionnaire (QuIC) | Validated tools modified for specific populations to measure comprehension outcomes | Tailored adaptations for minors, pregnant women, and adults with appropriate reading levels [18] |
| Cross-Cultural Adaptation Protocols | Professional Translation Rubrics | Guidelines ensuring fidelity to meaning while adapting to local customs and linguistic conventions | Translation process prioritizing contextual appropriateness for multinational trials [18] |
The co-creation model represents a significant advancement in the development of informed consent materials that genuinely promote participant comprehension. By actively involving target populations in the design process, researchers can create materials that are more accessible, engaging, and effective across diverse demographic groups. The experimental evidence demonstrates that co-created materials consistently achieve adequate comprehension levels (above 80%) and high satisfaction rates (exceeding 90%) across populations including minors, pregnant women, and adults [18]. Future research should continue to explore regional disparities, evaluate interventions for overconfident returning participants, and validate these tools across broader cultural contexts to further optimize informed consent processes in clinical research.
Q1: What are the key format preferences for minors in informed consent materials? Research indicates that minors (ages 12-13) show a strong preference for video content. In a multinational study, 61.6% of minors preferred videos presented in a narrative storytelling format, which significantly outperformed text-based materials for this demographic [26] [18].
Q2: How do content preferences of pregnant women differ from other adult populations? Pregnant women participating in clinical trials demonstrated divided preferences between videos (48.7%) and other digital formats. They responded particularly well to question-and-answer style videos and infographics explaining study procedures, suggesting a need for both visual engagement and specific informational clarity [26].
Q3: What content formats do adults prefer for complex clinical trial information? Unlike younger demographics, most adults (54.8%) prefer traditional text-based materials, though enhanced with layered web content and supporting infographics. Generation X adults consistently outperformed millennials in comprehension scores when using these text-dominant formats [26] [18].
Q4: How effective are co-creation methods in developing tailored content? Participatory design methods significantly improve comprehension across all demographics. Design thinking sessions with minors and pregnant women, plus surveys with adults, resulted in comprehension scores exceeding 80% across all groups, with satisfaction rates over 97% [26].
Q5: Do these preferences translate across different cultural contexts? While core preferences remain consistent, cultural adaptation is crucial. Materials co-created in Spain maintained high efficacy when translated to English and Romanian, though comprehension scores in Romania were lower among participants with lower educational levels, indicating need for localized adjustment [26].
| Demographic | Sample Size | Mean Comprehension Score (%) | Preferred Format | Percentage Preferring Format |
|---|---|---|---|---|
| Minors (12-13 years) | 620 | 83.3 (SD 13.5) | Narrative Video | 61.6% |
| Pregnant Women | 312 | 82.2 (SD 11.0) | Q&A Video | 48.7% |
| Adults (overall) | 825 | 84.8 (SD 10.8) | Layered Text | 54.8% |
| Adults (Generation X subset) | Not reported separately | Higher than millennials (β=+.26) | Layered Text with Infographics | Not specified |
The experimental protocol followed a rigorous cross-sectional design across Spain, the United Kingdom, and Romania [26] [18]:
Participant Recruitment:
Material Development Process:
Assessment Methodology:
Problem: Low comprehension scores among prior trial participants
Problem: Cultural adaptation gaps in translated materials
Problem: Generational comprehension differences in adult populations
| Reagent/Material | Function | Application Notes |
|---|---|---|
| Adapted QuIC Questionnaire | Measures objective and subjective comprehension | Requires demographic-specific customization; validated translations needed |
| Layered Web Content Platform | Digital delivery with modular information access | Supports progressive disclosure of complex information |
| Narrative Video Production | Engages visually-oriented demographics | Storytelling format for minors; Q&A for pregnant women |
| Co-creation Session Protocols | Facilitates participant-led design | Design thinking methods for minors; surveys for adults |
| Multilingual Translation Rubric | Ensures cross-cultural applicability | Native speaker translation with contextual adaptation review |
Research Methodology Workflow
Format Preference Mapping
Using verbal descriptors like "common" or "rare" without numerical frequencies leads to highly variable interpretations among research participants and patients. This variability can compromise the informed consent process in clinical research.
Table 1: Numerical Interpretations of Common Verbal Descriptors
| Verbal Descriptor | EC Guideline Definition | Lay Interpretation Range | Reported Participant Preference for Numerical Data |
|---|---|---|---|
| Very Common | ≥ 10% (≥1/10) | Data Not Available | Most participants prefer numerical information, alone or combined with verbal labels [34]. |
| Common | ≥ 1% to < 10% (≥1/100 to <1/10) | Data Not Available | Most participants prefer numerical information, alone or combined with verbal labels [34]. |
| Uncommon | ≥ 0.1% to < 1% (≥1/1,000 to <1/100) | Data Not Available | Most participants prefer numerical information, alone or combined with verbal labels [34]. |
| Rare | ≥ 0.01% to < 0.1% (≥1/10,000 to <1/1,000) | 7% to 21% [34] | Most participants prefer numerical information, alone or combined with verbal labels [34]. |
| Very Rare | < 0.01% (<1/10,000) | Data Not Available | Most participants prefer numerical information, alone or combined with verbal labels [34]. |
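The EC frequency bands in the table above can be paired with lay phrasing programmatically. The band boundaries below follow the table; the "may affect up to 1 in X people" wording follows the standard package-leaflet convention recommended for pairing words with numbers.

```python
# Sketch of pairing EC verbal descriptors with numerical frequencies.
# Band boundaries follow the EC guideline definitions in the table above.
BANDS = [  # (lower bound inclusive, label, lay phrasing)
    (0.10,   "Very common", "may affect more than 1 in 10 people"),
    (0.01,   "Common",      "may affect up to 1 in 10 people"),
    (0.001,  "Uncommon",    "may affect up to 1 in 100 people"),
    (0.0001, "Rare",        "may affect up to 1 in 1,000 people"),
    (0.0,    "Very rare",   "may affect up to 1 in 10,000 people"),
]

def describe_risk(frequency: float) -> str:
    """Return 'Label (lay phrasing)' for a side-effect frequency in [0, 1]."""
    for lower, label, phrasing in BANDS:
        if frequency >= lower:
            return f"{label} ({phrasing})"
    return "Very rare (may affect up to 1 in 10,000 people)"
```

For example, a side effect observed in 5% of participants would be rendered as "Common (may affect up to 1 in 10 people)", the paired format shown to reduce risk overestimation.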
Table 2: Impact of Presentation Format on Comprehension and Perception
| Outcome Measure | Verbal Descriptors Alone | Numerical Presentation | Key Evidence |
|---|---|---|---|
| Risk Perception Accuracy | Large overestimation of risk (e.g., 3% - 54%) [35]. | Smaller overestimation (e.g., 2% - 20%) [35]. | Numerical data leads to more accurate risk estimates [35]. |
| Information Satisfaction | Lower satisfaction scores [35]. | Higher satisfaction (MD: 0.48 on Likert scale, p<0.00001) [35]. | Numbers increase satisfaction with the information [35]. |
| Likelihood of Medication Use | Reduced intention for common side effects [35]. | Increased likelihood (e.g., MD: 0.90 for common effects, p<0.00001) [35]. | Numerical presentation supports better decision-making [35]. |
Q1: Why is it problematic to use only verbal descriptors like "common" or "rare" for side effects in our consent forms? A1: Verbal descriptors alone are interpreted with extreme variability. For example, the term "rare" can be interpreted as a risk ranging from 7% to 21%, whereas regulatory guidelines define it as 0.01%-0.1% [34]. This leads to participants overestimating their risk, which can affect trial participation and adherence [35].
Q2: What is the most effective way to present risk frequencies? A2: The most effective method is to pair a standard verbal descriptor with its corresponding numerical frequency (e.g., "Common (may affect up to 1 in 10 people)"). This approach improves comprehension accuracy, reduces risk overestimation, and increases participant satisfaction compared to words or numbers alone [35] [36].
Q3: Our ICFs are already long. How can we add numbers without overwhelming participants? A3: Use clear, concise formatting. Present side effects in a bulleted list or table, grouping them by frequency bands (e.g., Very Common, Common) and including the numerical equivalent for each band. This enhances scannability and understanding without significantly increasing length [36].
Q4: What is the current state of risk communication in practice? A4: An evaluation of ICFs from ClinicalTrials.gov found widespread issues. Only 3.6% used European Commission-recommended verbal descriptors with their correct numerical probability, over 20% provided no frequency information at all, and none utilized risk visualizations like icon arrays [36].
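Since none of the evaluated ICFs used risk visualizations, it is worth noting how little is needed to produce one. The sketch below renders a text-only 100-person icon array, the format icon arrays generalize; symbols and layout are arbitrary choices for illustration.

```python
# Minimal text-based icon array: a 100-person grid in which filled symbols
# represent those affected. For a "common" risk of up to 1 in 10, ten of the
# hundred icons are filled.

def icon_array(affected_per_100: int, per_row: int = 10) -> str:
    """Render a 100-person icon array; '#' = affected, '.' = unaffected."""
    cells = "#" * affected_per_100 + "." * (100 - affected_per_100)
    return "\n".join(cells[i:i + per_row] for i in range(0, 100, per_row))

grid = icon_array(10)  # "may affect up to 1 in 10 people"
```

Production ICFs would use person-shaped icons with accessible colors, but the underlying count-per-hundred logic is identical.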
Q5: Does improving risk communication really impact participant comprehension? A5: Yes. A systematic review found that while no single strategy is a silver bullet, successful consent processes include various communication modes and one-to-one interaction with someone knowledgeable about the study. Clear risk presentation is a foundational element of this multi-faceted approach [37].
Objective: To compare participant comprehension, risk perception, and satisfaction between different risk presentation formats in an Informed Consent Form (ICF).
Methodology:
Diagram 1: Risk Format Validation Workflow
Objective: To develop and validate a standardized, accessible template for presenting risk information in ICFs.
Methodology:
Diagram 2: Template Validation Process
Table 3: Essential Resources for Risk Communication Research
| Tool / Resource | Function / Purpose | Example / Application |
|---|---|---|
| Systematic Review Databases | To gather and synthesize existing evidence on risk communication formats and their efficacy. | MEDLINE, Embase, PsycINFO, Cochrane Library [35] [34]. |
| Regulatory Guidelines | Provide standardized definitions for verbal risk descriptors and recommendations for patient information. | European Commission (EC) Guideline on readability, EMA and MHRA guidance documents [35] [36]. |
| Clinical Trial Repositories | Source real-world informed consent forms for evaluation and analysis of current practices. | ClinicalTrials.gov [36]. |
| Accessibility Checkers | Ensure that designed templates (colors, contrasts) are accessible to individuals with visual impairments. | WebAIM Contrast Checker, Venngage Accessible Color Palette Generator [12] [38]. |
| Statistical Analysis Software | To analyze comprehension data, compare intervention groups, and calculate effect sizes. | R, RevMan (for meta-analysis), standard statistical packages (SPSS, SAS) [35]. |
| Survey & Data Collection Platforms | To administer comprehension questionnaires and collect outcome measures from study participants. | Qualtrics, REDCap, Covidence (for systematic review management) [34]. |
Q1: What are the core ethical principles that should guide the design of informed consent forms?
A1: The design of informed consent forms should be guided by three core ethical principles derived from the Belmont Report: respect for persons (protecting participant autonomy and ensuring voluntary participation), beneficence (minimizing potential harm and maximizing benefits), and justice (ensuring fair distribution of the burdens and benefits of research) [39] [40]. The consent process must be more than a signature; it should ensure genuine comprehension and voluntary decision-making [39].
Q2: How can we effectively assess if a participant has truly understood the consent information?
A2: True comprehension is a cornerstone of ethical consent. Best practices to assess understanding include:
Q3: What specific strategies can reduce stress and cognitive load for participants during the consent process?
A3: To create a low-stress consent experience, research teams should:
Q4: What are the common pitfalls in consent form design that can negatively impact health literacy?
A4: Common pitfalls include:
Q5: How can we ensure our digital consent forms are accessible to individuals with visual impairments or color vision deficiencies?
A5: Ensuring digital accessibility is a legal and ethical requirement. Key actions include:
The following tables summarize key quantitative findings from recent research into the completeness of informed consent forms (ICFs) for digital health studies.
Summary of a review of 25 real-world Informed Consent Forms (ICFs), assessing adherence to ethical elements and highlighting significant gaps in participant protection [40].
| Metric | Value | Context / Implication |
|---|---|---|
| Highest Completeness Score | 73.5% | Even the best-performing ICF was missing over a quarter of the required/recommended ethical elements [40]. |
| Full Adherence to Framework | 0% | None of the 25 reviewed ICFs fully adhered to all required ethical elements, revealing systemic gaps [40]. |
| Major Gap Area | Technology-specific risks | Consent forms were particularly poor at conveying risks related to data privacy, reuse, and third-party technology involvement [40]. |
Essential domains and attributes identified for a robust ethical framework, extending beyond traditional consent to address digital-specific challenges [40].
| Framework Domain | Description | Example Attributes |
|---|---|---|
| Consent | Fundamental aspects of research participation. | Study purpose, benefits, compensation, voluntary participation, right to withdraw [40]. |
| Grantee (Researcher) Permissions | What researchers are allowed to do with participant data and biospecimens. | Types of analyses (e.g., genomic), future use permissions, data sharing with collaborators [40]. |
| Grantee (Researcher) Obligations | Responsibilities researchers must fulfill to protect participants. | Data storage and security, information confidentiality, result sharing, managing Incidental Findings [40]. |
| Technology | Specific details and risks associated with the digital tools used. | Technology purpose, regulatory status (e.g., FDA approval), data frequency/volume, and technology-specific risks [40]. |
Title: Iterative Development and Testing of a Comprehensive Consent Framework for Digital Health Research.
Objective: To create and refine a practical, ethically-grounded framework for informed consent that addresses the unique challenges posed by digital health technologies (DHTs) like wearable devices and mobile apps.
Methodology:
Workflow Diagram: The following diagram illustrates the multi-stage, iterative methodology used to develop the final consent framework.
This table details key tools and resources essential for conducting rigorous research into informed consent comprehension and material design.
| Item / Solution | Function / Description | Application in Research |
|---|---|---|
| Readability Assessment Tools (e.g., Flesch-Kincaid) | Software or formulas that calculate the approximate U.S. grade level required to understand a text. | Objectively measuring the reading difficulty of draft consent forms to ensure they meet the 8th-grade level target [41] [39]. |
| WCAG Contrast Checkers (e.g., WebAIM Contrast Checker) | Online tools or built-in browser developer tools that calculate the color contrast ratio between foreground and background elements. | Ensuring that digital consent forms and visual aids meet minimum contrast standards (4.5:1) for accessibility, crucial for participants with low vision [44]. |
| Structured Comprehension Assessment | A customized questionnaire or interview script designed to test a participant's understanding of key study concepts after the consent process. | Quantifying comprehension levels and identifying specific areas of misunderstanding in a standardized way [39]. |
| Digital Consent Platform | Software solutions that support interactive, multimedia consent presentation (e.g., with embedded videos, quizzes). | Implementing and testing dynamic consent models and evaluating the impact of multi-format presentation on participant understanding and engagement [39] [40]. |
| Qualitative Data Analysis Software (e.g., NVivo) | Applications that facilitate the organization and thematic analysis of open-ended feedback from participants. | Analyzing transcripts from "teach-back" sessions or focus groups to identify recurring themes, concerns, and points of confusion [40]. |
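The contrast checks referenced in the table above follow the WCAG 2.x definitions of relative luminance and contrast ratio, which are simple enough to compute directly. The sketch below implements those published formulas; it is a minimal version of what tools like the WebAIM Contrast Checker do.

```python
# WCAG 2.x relative-luminance and contrast-ratio formulas.
# sRGB channels are 0-255 integers.

def _linear(channel: int) -> float:
    """Convert an sRGB channel to its linearized value per WCAG 2.x."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb) -> float:
    r, g, b = (_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg) -> float:
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Normal body text passes WCAG AA at a ratio of at least 4.5:1;
# black on white yields the maximum possible ratio of 21:1.
ratio = contrast_ratio((0, 0, 0), (255, 255, 255))
```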
This guide addresses frequent challenges researchers face when ensuring the cultural and contextual appropriateness of clinical trials across global sites.
Problem: Potential participants in a specific region demonstrate poor understanding of key consent concepts like randomization, risks, or voluntary participation during comprehension assessments.
Solution:
Problem: A direct translation of the Informed Consent Form (ICF) has led to linguistic inaccuracies or conceptual misunderstandings, jeopardizing participant comprehension and regulatory approval.
Solution:
Problem: Even with a translated and locally approved protocol, enrollment from underserved or diverse communities remains low.
Solution:
Problem: Implementing a Decentralized Clinical Trial (DCT) across multiple countries is hindered by differing regulatory requirements for consent, data, and technology.
Solution:
Q1: What is the difference between objective and subjective understanding in informed consent? A1: Objective Understanding refers to a participant's demonstrable knowledge of the consent information, typically measured through standardized questionnaires or tests. Subjective Understanding is the participant's own perception of how well they understood the information. Both are critical for evaluating the effectiveness of the consent process [46].
Q2: How can I validate an informed consent comprehension tool for a new cultural setting? A2: The process involves cross-cultural adaptation and validation. A study in Kenya successfully adapted a tool by:
Q3: What are common cultural barriers in patient-reported outcomes? A3: Cultural norms can significantly influence how patients report symptoms. For instance, a question about preferring to stay at home had no value in diagnosing depression among Malay patients, who place a high cultural value on family living. This highlights the need for cultural adaptation of study questionnaires, not just linguistic translation [47].
Q4: What technological solutions can improve data integrity in global DCTs? A4: To ensure data quality and security in remote settings, you can:
Purpose: To systematically test and improve the cultural appropriateness and comprehensibility of informed consent forms for a specific population.
Methodology:
Purpose: To adapt and validate an existing informed consent comprehension questionnaire for a new linguistic and cultural context.
Methodology:
Table: Essential Materials for Cultural Appropriateness Research
| Item | Function in Research |
|---|---|
| Digitized Informed Consent Comprehension Questionnaire (DICCQ) | A reliable and validated audio computerized tool to assess understanding of consent information in low-literacy populations [45]. |
| Informed Consent Comprehension Assessment (ICCA) | An adapted questionnaire used to measure participant comprehension across key domains like voluntary participation, randomization, and risks [45]. |
| Translation Memory (TM) / Glossary | An archive of preferred and approved clinical trial terminology for a specific language, ensuring translation consistency and accuracy across all documents [47]. |
| Audio Computer-Assisted Self-Interview (ACASI) | A technology that delivers questions and consent information via audio in the participant's native language, bypassing literacy barriers [45]. |
| Centralized Regulatory Guidance Database | A living database that consolidates and updates DCT and consent regulations across different global regions, crucial for maintaining compliance [49]. |
| Culturally Adapted Patient-Reported Outcome (PRO) Measures | Study questionnaires that have been linguistically and culturally validated to ensure they accurately capture data from diverse populations [47]. |
Global Trial Cultural Adaptation Workflow
Informed Consent Understanding Framework
This technical support center provides troubleshooting guides and FAQs for researchers assessing informed consent comprehension in special populations, framed within the broader goal of optimizing comprehension assessment research.
Q1: What is the evidence that teleconsent is as effective as in-person consent? A1: A recent randomized comparative study found no significant differences in comprehension scores (measured by QuIC) or decision-making control (measured by DMCI) between teleconsent and in-person groups. This supports teleconsent as a viable alternative that maintains understanding while improving accessibility [51].
Q2: How do I verify the identity of a participant during a remote teleconsent session? A2: Best practices include requiring participants to enable their cameras for the entire session. When signing the consent form electronically, use software features that capture a timestamped screenshot alongside the live signature as documentation [51].
Q3: What are the key stages for supporting a vulnerable young person through the consent and research process? A3: Health care providers recommend a patient navigator service encompassing four stages: 1) Identifying individuals needing support, 2) Preparing for the transfer to adult-focused studies, 3) Navigating the health and research system, and 4) Providing post-transfer support [50].
Q4: Are there validated instruments to quantitatively measure informed consent comprehension? A4: Yes, commonly used instruments include:
Table 1: Comparison of Teleconsent vs. In-Person Consent on Key Metrics [51]
| Metric | Teleconsent Group (n=32) | In-Person Group (n=32) | P-value |
|---|---|---|---|
| Health Literacy (SAHL-E score, mean) | 16.72 (SD 1.88) | 17.38 (SD 0.95) | 0.03 |
| Comprehension - QuIC Part A (mean) | No significant difference | No significant difference | 0.29 |
| Comprehension - QuIC Part B (mean) | No significant difference | No significant difference | 0.25 |
| Decision-Making - DMCI (mean) | No significant difference | No significant difference | 0.38 |
Table 2: HCAT Performance in Different Patient Settings [52]
| Participant Group | Comprehension Score | Time & Effort Required | Key Challenges |
|---|---|---|---|
| Forensic Psychiatric Inpatients | Significantly lower | Required more time and simpler language | Increased errors, greater reading effort |
| Non-forensic Psychiatric Inpatients | Higher than forensic patients | Moderate | Clinical symptoms impacting capacity |
| Healthy Controls | Highest | Standard | N/A |
Protocol 1: Randomized Study of Telehealth vs. In-Person Informed Consent [51]
Protocol 2: Assessing Informed Consent Capacity with the HCAT [52]
Diagram Title: Teleconsent Study Workflow
Diagram Title: Patient Navigator Support Stages
Table 3: Essential Materials for Informed Consent Comprehension Research
| Item | Function |
|---|---|
| Quality of Informed Consent (QuIC) Survey | A validated instrument to measure both objective and perceived understanding of the consent form [51]. |
| Decision-Making Control Instrument (DMCI) | A 15-item validated tool to assess participants' perceived voluntariness, trust, and decision self-efficacy [51]. |
| Hopkins Competency Assessment Test (HCAT) | A tool designed to evaluate the decision-making capacity of patients, including those with severe mental disorders [52]. |
| Short Assessment of Health Literacy-English (SAHL-E) | A tool to measure participants' health literacy levels, which is critical for tailoring communication [51]. |
| Secure Telehealth Platform (e.g., Doxy.me) | Software that enables real-time video interaction, screen sharing, and electronic signature capture for remote consent processes [51]. |
The following table summarizes the core quantitative findings from key studies that directly compare digital and in-person informed consent processes.
Table 1: Summary of Key Randomized Studies on Digital vs. In-Person Consent
| Study & Design | Population & Setting | Primary Comprehension Metric | Key Findings on Comprehension | Key Findings on Satisfaction & Other Outcomes |
|---|---|---|---|---|
| Khairat et al. (2025) [22] [21]; Randomized Controlled Trial | 64 participants (USA); adults recruited for a study on patient portals. | Quality of Informed Consent (QuIC) questionnaire. | No significant differences in QuIC scores between teleconsent and in-person groups (Part A, P=.29; Part B, P=.25) [22] [21]. | No significant differences in Decision-Making Control Instrument (DMCI) scores (P=.38), indicating similar perceived voluntariness and trust [22] [21]. |
| Fons-Martinez et al. (2025) [18]; Cross-Sectional Evaluation | 1,757 participants across Spain, UK, and Romania; included minors, pregnant women, and adults. | Adapted QuIC; objective comprehension categorized as low, moderate, adequate, or high. | Mean objective comprehension scores exceeded 80% across all digital consent groups (minors: 83.3%; pregnant women: 82.2%; adults: 84.8%) [18]. | Satisfaction rates surpassed 90% in all groups. Format preferences varied, with minors preferring videos and adults favoring text [18]. |
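The categorization of objective comprehension into low, moderate, adequate, and high bands can be sketched as a simple threshold function. The 80-90% band for "adequate" follows the comprehension table earlier in this article; the 60% boundary for "moderate" is an assumption for illustration, not the study's published definition.

```python
# Illustrative mapping of a comprehension score (%) to the categories used in
# the eIC evaluation. The 80% and 90% cut-offs follow the "adequate (80-90%)"
# band reported earlier; the 60% boundary is an assumed placeholder.

def comprehension_category(score_pct: float) -> str:
    if score_pct > 90:
        return "high"
    if score_pct >= 80:
        return "adequate"
    if score_pct >= 60:   # assumed boundary, not from the source study
        return "moderate"
    return "low"
```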
To ensure the reproducibility of your research, this section outlines the methodologies of the cited key experiments in detail.
This protocol is based on the study by Khairat et al. (2025) [22] [21].
This protocol is based on the large-scale study by Fons-Martinez et al. (2025) [18].
Table 2: Key Tools and Instruments for Assessing Informed Consent
| Tool Name | Primary Function | Application in Research |
|---|---|---|
| Quality of Informed Consent (QuIC) Questionnaire [18] [22] [21] | Measures comprehension of the informed consent form. | Widely used to objectively assess a participant's knowledge of study details (QuIC Part A) and their perceived understanding (QuIC Part B). |
| Decision-Making Control Instrument (DMCI) [22] [21] | Assesses perceived voluntariness, trust, and decision self-efficacy. | Evaluates whether participants feel their decision to participate was free from coercion and based on trust in the research team. |
| Health Literacy Assessment (e.g., SAHL-E, REALM-SF) [22] [55] [21] | Measures a participant's ability to obtain, process, and understand basic health information. | Used as a covariate in analysis to control for the influence of health literacy on comprehension scores. |
| Digital Consent (eIC) Platform [18] [56] | Hosts and delivers consent materials in multiple formats (video, text, infographics). | The intervention being tested; allows for a tailored participant experience and collection of usage data (e.g., format preference). |
| Statistical Software (e.g., R, SPSS) [18] | Performs statistical analysis on collected data. | Used to run tests like t-tests, chi-square, and multivariable regression to compare groups and identify predictors of comprehension. |
Q: Is digital consent truly non-inferior to in-person consent for participant comprehension?
Q: What are the main advantages of using digital consent tools?
Q: How can I ensure participants from diverse cultural backgrounds understand the digital consent materials?
Q: My research involves vulnerable populations like minors. Is digital consent appropriate?
Q: What is the biggest challenge when implementing digital consent?
This section provides targeted support for researchers encountering challenges in multinational electronic Informed Consent (eIC) comprehension assessment studies.
Guide Scope: This guide addresses frequent operational problems in eIC studies, from participant recruitment to data analysis, helping researchers identify and implement corrective actions.
Preparation: Before troubleshooting, ensure you have access to the raw dataset, the original study protocol, and all statistical analysis plans.
| Problem Area | Specific Problem | Possible Causes | Recommended Actions & Fixes |
|---|---|---|---|
| Participant Recruitment | Slow enrollment rate [57] | Complex protocol; overly restrictive eligibility criteria; ineffective outreach. | Simplify recruitment materials to an 8th-grade level [58] [59]; broaden eligibility criteria if scientifically justified; use diverse recruitment channels [57]. |
| Participant Retention | High dropout rate (>30%) [57] | Low comprehension leading to disengagement; complex or burdensome study designs [58]. | Implement simplified eIC forms; use AI tools to lower the reading level to a Flesch-Kincaid Grade Level of ≤8 [58] [60]; increase participant touchpoints. |
| Data Quality | Low comprehension scores | Informed consent forms written at a high reading level (e.g., Grade 12.0) [58]; lack of health literacy assessment. | Adopt simplified consent forms; pre-screen participants using health literacy tools (e.g., REALM, TOFHLA) [59]; use multimedia aids to explain concepts [60]. |
| Data Quality | High data query rates [57] | Unclear data entry guidelines; complex case report forms; site personnel training gaps. | Provide enhanced training for site staff; simplify data collection forms; implement real-time data validation checks in electronic systems. |
| Operational Efficiency | Long site activation time [57] | Delays in ethics committee approvals; slow regulatory document completion [57]. | Streamline document submission processes; use centralized IRB reviews; maintain a checklist for essential startup documents. |
Q1: What is the benchmark for an acceptable comprehension score in eIC studies? A: While targets can vary by study, comprehension rates for standard consent forms are often low, averaging around 58% [59]. Studies using simplified forms have shown comprehension rates of 56-72%, with higher scores strongly correlated with higher participant health literacy [59]. Aiming for comprehension scores above 80% is a robust goal, often achievable through iterative design and testing.
Q2: The readability of our consent form is too high. How can we fix this without compromising legal content? A: Using Large Language Models (LLMs) like GPT-4 is a promising method. Prompting the AI to "convert this text to the average American reading level by using simpler words and limiting sentence length to 10 or fewer words" has proven effective. This method can significantly lower the Flesch-Kincaid Grade Level while preserving essential medicolegal meaning, as confirmed by expert medicolegal review [58] [60].
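Whichever simplification method is used, the result should be verified quantitatively. The sketch below computes the Flesch-Kincaid Grade Level (0.39 × words-per-sentence + 11.8 × syllables-per-word − 15.59) with a crude vowel-group syllable counter; the example sentences are illustrative, and production work should use a dictionary-backed tool such as the NCI-recommended calculator mentioned in this article.

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count contiguous vowel groups; real tools use
    # pronunciation dictionaries and handle silent 'e' properly.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fkgl(text: str) -> float:
    """Flesch-Kincaid Grade Level of a text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

# Illustrative before/after pair (not from an actual consent form)
original = ("Participation in this investigational study necessitates "
            "comprehensive evaluation of pharmacokinetic parameters.")
simplified = ("This study tests how your body handles the drug. "
              "We will check this often.")
print(f"original FKGL: {fkgl(original):.1f}")
print(f"simplified FKGL: {fkgl(simplified):.1f}")
```

Running the check before and after each AI-assisted rewrite gives an objective record that the ≤8 grade-level target was met.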
Q3: We have high screen failure rates. How can we improve this metric? A: A high screen failure rate often indicates that eligibility criteria are too strict or unclear [57]. Review and refine your criteria for necessity. Furthermore, pre-screen potential participants with a brief, easy-to-understand summary of the key inclusion and exclusion criteria before the formal consent process to better manage expectations and reduce resource waste.
Q4: What is the most significant predictor of participant comprehension we should track? A: Health literacy is a critical predictor. Research consistently shows that lower health literacy levels are significantly associated with poorer comprehension of consent information, even when simplified forms are used [59]. Integrating a validated health literacy assessment (e.g., REALM or TOFHLA) into your screening process can help stratify risk and tailor the consent approach [59].
Q5: How can we effectively measure participant satisfaction with the eIC process? A: Use structured surveys with Likert scale questions to quantitatively measure satisfaction. In studies where AI-generated summaries were used, over 80% of surveyed participants reported enhanced understanding of the clinical trial [60]. This suggests that satisfaction is closely linked to perceived comprehension, making comprehension scores a strong proxy metric.
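When reporting a satisfaction proportion such as the >80% figure above, an interval estimate is more informative than a bare percentage, especially in smaller samples. Below is a minimal Wilson score interval in pure Python; the 82-of-100 counts are illustrative, not from the cited survey.

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - margin, center + margin

# Illustrative: 82 of 100 participants report enhanced understanding
low, high = wilson_ci(82, 100)
print(f"82% satisfied, 95% CI: {low:.1%} to {high:.1%}")
```

The Wilson interval behaves better than the naive normal approximation when proportions are near 0% or 100%, which is common for satisfaction items.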
This section consolidates key quantitative findings from recent literature to provide benchmarks for eIC study outcomes.
Table 1: Quantitative Data on Consent Form Readability and Comprehension
| Metric | Value | Context / Source |
|---|---|---|
| Average Readability of Consent Forms | Flesch-Kincaid Grade Level 12.0 ± 1.3 | Based on analysis of 798 federally funded trials; equivalent to a high school graduate level [58]. |
| Average Comprehension of Standard Forms | 58% | Comprehension score for a Phase III breast cancer clinical trial consent form [59]. |
| Comprehension with Simplified Forms | 56% - 72% | Range depends on participant health literacy; higher literacy correlated with better comprehension [59]. |
| Impact of Readability on Dropout | 16% higher dropout rate per 1-grade level increase | Incidence Rate Ratio (IRR) of 1.16 (95% CI: 1.12-1.22) for trial dropout rates [58]. |
| Participant Satisfaction with AI-Improved Materials | >80% reported enhanced understanding | Survey results from participants who reviewed clinical trial summaries generated by GPT-4 [60]. |
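Because the dropout IRR in Table 1 applies per one-grade-level increase, its effect compounds multiplicatively across several grade levels. The sketch below projects the implied change in expected dropout when simplifying a form from grade 12 to grade 8, assuming the log-linear effect extrapolates across that range (an approximation the cited CI supports only roughly).

```python
# IRR of 1.16 per one-grade-level increase in readability [58]
IRR_PER_GRADE = 1.16

def dropout_ratio(grade_from: float, grade_to: float) -> float:
    """Multiplicative change in expected dropout rate between two grade levels."""
    return IRR_PER_GRADE ** (grade_to - grade_from)

# Simplifying a grade-12 form to grade 8 implies roughly 45% lower
# expected dropout under this model.
ratio = dropout_ratio(12, 8)
print(f"expected dropout at grade 8 vs grade 12: "
      f"{ratio:.2f}x ({1 - ratio:.0%} lower)")
```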
This section outlines detailed methodologies for key experiments cited in this article, providing a replicable framework for researchers.
Objective: To quantitatively assess the readability of clinical trial consent forms and evaluate the efficacy of an AI-driven tool in simplifying them while preserving medicolegal content.
Background: The average readability of consent forms significantly exceeds the average reading ability of most adults, creating a barrier to true informed consent and potentially impacting participant retention [58] [59].
Materials:
Methodology:
Objective: To evaluate the relationship between participant health literacy levels and their comprehension of electronic informed consent materials.
Materials:
Methodology:
eIC Study Optimization Workflow
Table 2: Essential Materials and Tools for eIC Comprehension Research
| Item Name | Function / Purpose | Example / Specification |
|---|---|---|
| Validated Health Literacy Tools | Quantifies a participant's ability to obtain, process, and understand health information. | REALM (Rapid Estimate of Adult Literacy in Medicine): Measures ability to read medical words. TOFHLA (Test of Functional Health Literacy in Adults): Uses Cloze procedure to assess comprehension [59]. |
| Readability Analysis Software | Objectively calculates the reading grade level required to understand a text. | Online-Utility.org Readability Calculator: Recommended by the National Cancer Institute; calculates Flesch-Kincaid and other metrics [58]. |
| Large Language Models (LLMs) | AI tools used to simplify complex medical text into more accessible language. | GPT-4 (OpenAI): Can be prompted to rewrite text to a lower grade level while preserving meaning [58] [60]. |
| Clinical Trial Databases | Provides source materials (informed consent forms) for analysis and benchmarking. | ClinicalTrials.gov: Public repository containing consent forms from completed federally funded trials, as per the Revised Common Rule [58]. |
| Electronic Data Capture (EDC) System | Digitally captures participant responses, comprehension scores, and survey data in a structured format. | Platforms like Veeva Vault or Medidata CTMS often include built-in analytics for tracking KPIs like recruitment and retention [57]. |
Q1: What does the current evidence say about participant comprehension in teleconsent versus traditional in-person consent? Recent high-quality evidence from a 2025 randomized controlled trial indicates that teleconsent is a viable alternative to in-person consent, performing comparably on measures of participant comprehension and decision-making control [22]. The study found no significant differences in scores on the Quality of Informed Consent (QuIC) instrument, which measures understanding, or the Decision-Making Control Instrument (DMCI), which assesses perceived voluntariness and self-efficacy [22]. This suggests that the telehealth modality does not compromise the core objective of the informed consent process.
Q2: What specific tools can I use to assess comprehension and decision-making in a teleconsent study? The following validated instruments are recommended for a robust assessment of the consent process [22]:
Q3: How can I design a teleconsent protocol that ensures regulatory compliance and participant understanding? A compliant and effective teleconsent protocol should integrate the following steps [61]:
Q4: What are the primary logistical benefits of implementing a teleconsent model in clinical research? The key logistical advantage of teleconsent is that it overcomes the geographic and accessibility barriers that traditionally hinder participant enrollment [22]. Because participants can complete the consent process from home, transportation costs and time burdens fall, which may improve recruitment rates, accelerate study enrollment, and expand the potential recruitment pool to a wider geographic area [22].
Q5: My research involves populations with lower health literacy. What strategies can improve comprehension in a teleconsent setting? Applying health literacy principles is crucial. Strategies include [62] [63]:
This protocol summarizes the methodology from the 2025 randomized controlled trial by Khairat et al. [22].
This protocol is based on a study comparing a multimedia tool with paper-based methods [64].
| Assessment Tool | Teleconsent Group (n=32) | In-Person Group (n=32) | P-value |
|---|---|---|---|
| QuIC Part A (Mean Score) | Not reported | Not reported | 0.29 |
| QuIC Part B (Mean Score) | Not reported | Not reported | 0.25 |
| DMCI (Mean Score) | Not reported | Not reported | 0.38 |
| SAHL-E (Mean Score) | 16.72 (SD 1.88) | 17.38 (SD 0.95) | 0.03 |
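Note that the SAHL-E row above indicates a baseline health-literacy difference between the randomized groups, which is worth considering when interpreting the comprehension results. Summary statistics like these can be sanity-checked with a Welch's t-test computed directly from means and SDs, as sketched below. This need not reproduce the published p-value exactly, since the original analysis may have used a different test or the full item-level data.

```python
from scipy.stats import ttest_ind_from_stats

# SAHL-E summary statistics from the table above
t_stat, p_val = ttest_ind_from_stats(
    mean1=16.72, std1=1.88, nobs1=32,  # teleconsent group
    mean2=17.38, std2=0.95, nobs2=32,  # in-person group
    equal_var=False,                   # Welch's t-test (unequal variances)
)
print(f"t = {t_stat:.2f}, p = {p_val:.3f}")
```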
| Outcome Measure | Multimedia Digital Tool (n=25) | Traditional Paper (n=25) | Notes |
|---|---|---|---|
| Comprehension | High | High | Both groups demonstrated high comprehension. |
| Satisfaction | Higher | Lower | Digital tool participants reported higher satisfaction. |
| Perceived Ease of Use | Higher | Lower | The digital tool was perceived as easier to use. |
| Perceived Time | Shorter | Longer | Participants felt the digital process was faster. |
Teleconsent vs In-Person Study Workflow
| Item Name | Function/Brief Description | Example Use in Research |
|---|---|---|
| HIPAA-Compliant Videoconferencing Platform | Software that enables secure, real-time audio-video communication and document sharing for the consent process. | Platforms like Doxy.me are used to conduct the teleconsent session and facilitate e-signing [22] [61]. |
| Quality of Informed Consent (QuIC) | A validated survey instrument designed to quantitatively measure a research participant's comprehension of the informed consent material. | Used as a primary outcome measure to objectively compare understanding between teleconsent and in-person groups [22]. |
| Decision-Making Control Instrument (DMCI) | A validated tool that assesses a participant's perception of voluntariness, trust, and self-efficacy regarding their decision to enroll in a study. | Used to ensure the teleconsent process does not exert undue pressure and supports autonomous decision-making [22]. |
| Health Literacy Assessment Tool | A brief test, such as the Short Assessment of Health Literacy-English (SAHL-E), to evaluate a participant's baseline ability to understand health information. | Administered to control for health literacy as a potential confounding variable in the analysis of comprehension scores [22]. |
| Electronic Signature System | A secure, digital method for capturing a participant's signature on the consent document within the telehealth platform. | Provides documentary evidence of consent and integrates with electronic health records for audit trails [61]. |
This section provides practical solutions to common challenges researchers face when translating, adapting, and evaluating informed consent materials for cross-cultural research.
Problem 1: Participants demonstrate low comprehension of key research concepts after translation.
Problem 2: Translated consent forms are too long and complex, leading to poor participant engagement.
Problem 3: Low recruitment and consent rates from specific cultural groups.
Problem 4: Uncertainty about how to validate translated materials for comprehension.
Problem 5: Regulatory bodies question the quality of the translation and cultural adaptation.
Q1: When is it necessary to translate an informed consent form? A: Translation is required whenever language barriers could prevent a potential participant from fully understanding the study. Key scenarios include international research, when participants are not proficient in the study's primary language, in high-risk studies, and when mandated by local regulatory or ethical requirements [68].
Q2: Is back-translation alone sufficient for ensuring a high-quality translation? A: No. While back-translation is a valuable quality control step for identifying gross errors, it is not sufficient on its own. It must be part of a larger process that includes review by a bilingual committee, testing for comprehension with the target audience, and cultural adaptation to ensure the translation is not only accurate but also contextually appropriate and easily understood [66] [65].
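As a crude automated complement to committee review (never a substitute for it), sentence pairs whose back-translation diverges heavily from the source can be flagged for closer inspection. The sketch below uses Python's stdlib difflib; the threshold and example sentences are illustrative, and a low score only marks a candidate for human review, since legitimate paraphrase also lowers surface similarity.

```python
import difflib

def flag_divergent(source: list[str], back_translated: list[str],
                   threshold: float = 0.6) -> list[tuple[int, float]]:
    """Flag sentence pairs whose surface similarity falls below a threshold.

    A low ratio does not prove mistranslation; it only queues the pair
    for the bilingual committee to examine.
    """
    flagged = []
    for i, (src, back) in enumerate(zip(source, back_translated)):
        ratio = difflib.SequenceMatcher(None, src.lower(), back.lower()).ratio()
        if ratio < threshold:
            flagged.append((i, round(ratio, 2)))
    return flagged

source = ["You may leave the study at any time.",
          "Your medical records will be kept confidential."]
back = ["You can withdraw from the study whenever you want.",
        "Your medical records will remain confidential."]
flags = flag_divergent(source, back)
print(flags)
```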
Q3: How can we improve comprehension for participants with lower literacy levels? A: Move beyond text-heavy documents. Use multimedia tools such as short narrative videos, infographics, and pictograms [18]. Offer information in audio formats and facilitate group discussions where questions can be asked freely. The key is to provide information in multiple, accessible formats that cater to different learning preferences [67].
Q4: What are the biggest pitfalls in cross-cultural consent processes? A: The most significant pitfalls include:
Q5: How can we assess comprehension effectively without making participants uncomfortable? A: Frame the comprehension check as a tool to improve your communication, not a test of the participant's intelligence. Use open-ended questions (e.g., "Can you tell me in your own words what this study is about?") and the "Teach Back Method." Utilize a friendly, supportive tone and conduct the assessment in a private setting [67].
The table below summarizes key quantitative findings from recent studies on consent comprehension, highlighting the effectiveness of various material formats and adaptation processes.
Table 1: Comprehension and Satisfaction Metrics from Recent Consent Studies
| Study Population & Intervention | Comprehension Score (Mean or %) | Satisfaction Rate | Key Findings |
|---|---|---|---|
| Abortion Research (N=1557) [71] [72] | High comprehension for healthcare rights (99.2%), confidentiality (98.5%), voluntariness (99.8%). Lower for HIPAA (88.7%) and privacy (87.1%). | Not Specified | Self-administered information can be effectively understood. No significant comprehension difference between adolescents and adults. |
| eIC for Minors (N=620) [18] | Mean objective comprehension: 83.3% (SD 13.5) | 97.4% (604/620) | 61.6% of minors preferred video format for receiving information. |
| eIC for Pregnant Women (N=312) [18] | Mean objective comprehension: 82.2% (SD 11.0) | 97.1% (303/312) | 48.7% preferred videos. Comprehension was high across all countries in the study. |
| eIC for Adults (N=825) [18] | Mean objective comprehension: 84.8% (SD 10.8) | 97.5% (804/825) | 54.8% preferred text-based information. Prior trial participation was associated with lower comprehension scores. |
This section details the methodologies of pivotal experiments cited in this article, providing a replicable framework for researchers.
This protocol, adapted from a biobanking study, outlines a multi-step process for translating and validating consent documents [66].
This protocol describes the method for a multinational cross-sectional study evaluating electronically delivered informed consent [18].
The table below lists key "research reagents" – tools and methodologies – essential for conducting robust evaluation of translated consent materials.
Table 2: Essential Methodologies and Tools for Consent Comprehension Research
| Item | Function/Description | Application Example |
|---|---|---|
| Bilingual Committee Review | A panel of native speakers with relevant cultural and research knowledge reviews translations for accuracy, conceptual equivalence, and cultural appropriateness [66]. | Resolving discrepancies found during back-translation and ensuring medical terms are appropriately described in the target language [66]. |
| Comprehension Assessment Tool (e.g., QuIC) | A validated questionnaire, often adapted for the specific study, to objectively measure participants' understanding of key consent concepts like risks, benefits, and rights [18]. | Providing a quantitative score to compare comprehension across different consent material formats (e.g., text vs. video) or participant groups [18]. |
| Readability Formulas (e.g., Fernández-Huerta, Flesch-Kincaid) | Algorithms that estimate the education grade level required to understand a text. | Objectively evaluating the complexity of a consent document before and after simplification to ensure it matches the target population's literacy level [66]. |
| Community Advisory Board (CAB) | A group of individuals from the target community who provide input on study design, recruitment strategies, and the cultural relevance of materials [65] [69]. | Identifying potentially stigmatizing language or concepts in consent forms and advising on trusted communication channels within the community [69]. |
| Back-Translation | A quality control process where a translated document is independently translated back into the source language by a second translator [66] [68]. | Flagging potential errors or conceptual shifts in the initial translation for further investigation by the bilingual committee [66]. |
| Multimedia Consent Tools | Digital formats such as interactive websites, narrative videos, and infographics used to present consent information [18]. | Catering to different learning styles and literacy levels; videos were particularly preferred by minors and pregnant women for understanding trial information [18]. |
| "Teach Back" Method | A qualitative technique where participants are asked to explain the study information in their own words [67]. | Assessing deep understanding beyond rote memorization and identifying specific concepts that are commonly misunderstood [67]. |
The diagram below visualizes the multi-stage workflow for the rigorous translation and validation of informed consent materials, synthesizing protocols from the cited research.
Translation and Validation Workflow: This diagram illustrates the multi-stage process for developing validated translated consent materials, from source document preparation to final approval.
Optimizing informed consent comprehension is both an ethical imperative and a practical necessity for robust clinical research. The synthesis of evidence confirms that a shift towards participant-centric approaches—characterized by simplified language, digital and multi-format materials, co-creation, and precise risk communication—significantly enhances understanding and satisfaction. Future efforts must focus on standardizing these best practices, developing adaptive tools for diverse global populations, and integrating comprehension assessment as a core, continuous component of the trial lifecycle. By embracing these strategies, the research community can foster greater trust, improve recruitment and retention, and uphold the fundamental principle of respect for persons.