# Plan: CSAT Feedback Cycle — Search & AI Tools

## TL;DR
Build a unified Customer Satisfaction (CSAT) feedback system across Search and all AI tools. Users rate their experience at key moments (after search, after AI result, after OCR download). Data flows into a single entity, powers an admin dashboard, and enables data-driven improvements. One existing implementation (AI Detection) gets migrated into the unified system.

---

## Current State

### What's Already Tracked

| System | What We Know | What We DON'T Know |
|--------|-------------|---------------------|
| **Search CTR** | Query text, result count, which result clicked, position | Was the user satisfied? Did they find what they needed? |
| **AI Usage Logs** | Operation type, tokens, latency, success/fail, cost | Was the output useful? Did the user keep or discard it? |
| **OCR Jobs** | Pages processed, success/fail, download count | Quality perception? Did OCR meet expectations? |
| **AI Detection** | ✅ Has 1-5 star rating + NPS + comment (fully implemented) | Tightly coupled, raw SQL table, no Doctrine entity |

### Existing Feedback Code

| Component | Status | Location |
|-----------|--------|----------|
| AI Detection feedback widget | ✅ Live | `templates/ai_detection/index.html.twig` (line ~742) |
| AI Detection feedback API | ✅ Live | `AiDetectionController::submitFeedback()` — `POST /api/ai-detection/feedback` |
| AI Detection feedback table | ✅ Prod | `ai_detection_feedback` (raw SQL, no entity) |
| Admin CSAT tab | ✅ Live | `templates/admin/playground/dashboard.html.twig` (line ~1965) — shows avg rating, NPS, trend chart |
| Translation thumbs up/down | ❌ Commented out | `templates/translate/index.html.twig` (line ~148) — UI exists but non-functional |
| Playground feedback | ❌ None | No feedback mechanism on any writing tool |
| Search satisfaction | ❌ None | Zero insight into search quality perception |
| OCR feedback | ❌ None | No quality rating after OCR completion |
| ChatBot feedback | ❌ None | No per-message or per-session rating |

---

## Architecture

### Phase 1: Unified CSAT Entity & API

#### Entity: `CsatFeedback`

```php
// src/Entity/CsatFeedback.php
namespace App\Entity;

use App\Repository\CsatFeedbackRepository;
use Doctrine\ORM\Mapping as ORM;

#[ORM\Entity(repositoryClass: CsatFeedbackRepository::class)]
#[ORM\Table(name: 'csat_feedback')]
#[ORM\Index(columns: ['context', 'created_at'], name: 'idx_csat_context_date')]
#[ORM\Index(columns: ['user_id', 'context'], name: 'idx_csat_user_context')]
class CsatFeedback
{
    #[ORM\Id]
    #[ORM\GeneratedValue]
    #[ORM\Column]
    private int $id;

    #[ORM\ManyToOne(targetEntity: User::class)]
    #[ORM\JoinColumn(nullable: true)]
    private ?User $user;              // nullable for anonymous search feedback

    #[ORM\Column(length: 50)]
    private string $context;          // search, playground_rephrase, playground_proofread,
                                      // playground_write, playground_chat, playground_translate,
                                      // playground_summarize, playground_citation, playground_tashkeel,
                                      // playground_keywords, ocr, ai_detection, translation, chatbot

    #[ORM\Column(type: 'smallint')]
    private int $rating;              // 1-5 stars (CSAT)

    #[ORM\Column(type: 'smallint', nullable: true)]
    private ?int $recommend;          // 1-10 NPS (optional, shown periodically)

    #[ORM\Column(type: 'text', nullable: true)]
    private ?string $comment;         // free-text (optional)

    #[ORM\Column(type: 'json', nullable: true)]
    private ?array $metadata;         // context-specific data (see below)

    #[ORM\Column(length: 45, nullable: true)]
    private ?string $ipAddress;

    #[ORM\Column(length: 255, nullable: true)]
    private ?string $sessionId;

    #[ORM\Column]
    private \DateTimeImmutable $createdAt;

    public function __construct()
    {
        $this->createdAt = new \DateTimeImmutable();
    }
}
```

#### Context-Specific Metadata

| Context | Metadata JSON |
|---------|--------------|
| `search` | `{"query": "machine learning", "results_count": 42, "search_type": "english", "sqid": 123, "clicked_position": 3}` |
| `playground_*` | `{"operation": "rephrase", "usage_log_id": 456, "tokens_used": 320, "action": "keep\|copy\|ignore"}` |
| `ocr` | `{"job_id": 789, "pages": 5, "output_format": "docx"}` |
| `ai_detection` | `{"text_length": 1200, "verdict": "likely_ai"}` |
| `translation` | `{"source_lang": "ar", "target_lang": "en", "char_count": 500}` |
| `chatbot` | `{"research_slug": "abc123", "messages_count": 4}` |

#### Single API Endpoint

```
POST /api/csat/feedback
{
  "context": "search",
  "rating": 4,
  "recommend": 8,        // optional
  "comment": "...",       // optional
  "metadata": { ... }    // optional, context-specific
}

GET /api/csat/status?context=search&session_key=xyz
→ { "already_submitted": true }  // prevent duplicate prompts
```
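
A minimal client-side submission sketch against this endpoint (the `makeFeedbackBody` and `submitCsat` names are illustrative, not part of the plan):

```javascript
// Build the request body for POST /api/csat/feedback.
// context and rating are required; other fields are sent only when present.
function makeFeedbackBody(context, rating, extras = {}) {
    const body = { context, rating };
    for (const key of ['recommend', 'comment', 'metadata']) {
        if (extras[key] !== undefined && extras[key] !== null) {
            body[key] = extras[key];
        }
    }
    return body;
}

// Submit feedback; keepalive lets the request finish even during page unload.
async function submitCsat(context, rating, extras = {}) {
    const res = await fetch('/api/csat/feedback', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        keepalive: true,
        body: JSON.stringify(makeFeedbackBody(context, rating, extras)),
    });
    return res.ok;
}
```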

### Phase 2: Reusable JS Feedback Widget

A single JS component that can be dropped into any template:

```javascript
// assets/js/csat-widget.js
class CsatWidget {
    constructor(options) {
        this.context = options.context;       // e.g. 'search', 'playground_rephrase'
        this.metadata = options.metadata;     // context-specific data
        this.trigger = options.trigger;       // 'auto' | 'button' | 'exit'
        this.position = options.position;     // 'bottom-right' | 'inline' | 'modal'
        this.delay = options.delay || 0;      // ms before showing (for auto trigger)
        this.showNps = options.showNps;       // show NPS question too?
        this.onSubmit = options.onSubmit;     // callback after submission
    }

    // Methods show(targetEl) and submitQuick(rating), used at the touchpoints
    // that follow, are omitted from this constructor sketch.
}
```

**Widget variants:**

| Variant | UI | Use Case |
|---------|-----|----------|
| **Inline stars** | ★★★★☆ row below content | After AI result, after translation |
| **Slide-up panel** | Bottom sheet with stars + optional comment | After search session, after OCR download |
| **Thumbs** | 👍 👎 quick binary | Per chat message, per AI preview |
| **Modal NPS** | Full modal with 1-10 scale + comment | Periodic (every 10th session), shown once per context per week |

---

## Touchpoints & Triggers

### Search CSAT

| Trigger | When | Widget Type | Rationale |
|---------|------|-------------|-----------|
| **Zero-result query** | Immediately after 0 results shown | Inline banner | "Sorry we couldn't find anything. Help us improve — what were you looking for?" (free-text only, no stars) |
| **Search refinement** | User searches 3+ different queries in 60s | Slide-up after 3rd search | Rapid refinement = struggling to find content |
| **Pagination depth** | User reaches page 3+ | Subtle bottom prompt | Deep pagination = poor ranking / irrelevant results |
| **Successful click** | User clicks a result AND spends 30s+ on it | On return to search (via back button) | Positive signal → ask for confirmation |
| **Session end** | User is about to leave search (tab close / navigate away) | Beacon-based passive log | No UI prompt; log the implicit signal (searches, clicks, dwell) |

**Search widget placement**: Bottom of results list in `all.html.twig`, below pagination.
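
The beacon-based passive log in the last row of the table could be sketched as below (field names are illustrative; note the unified API would need to accept rating-less passive entries, or this would target a separate logging endpoint):

```javascript
// Collect the implicit satisfaction signals gathered during the search session.
function buildSessionSignal(session) {
    return {
        context: 'search',
        rating: null,                 // passive log: no explicit rating given
        metadata: {
            query_count: session.queryCount,
            clicks: session.clicks,
            dwell_ms: session.dwellMs,
        },
    };
}

// Fire once when the page is being hidden; pagehide is more reliable
// than unload, especially on mobile browsers.
function installSessionEndBeacon(session) {
    window.addEventListener('pagehide', () => {
        const blob = new Blob([JSON.stringify(buildSessionSignal(session))],
            { type: 'application/json' });
        navigator.sendBeacon('/api/csat/feedback', blob);
    });
}
```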

**Key search metadata to capture:**
- Original query text
- Number of results
- Which result(s) clicked
- Position of clicked results
- Time spent on search page
- Number of query refinements in session
- Search type (Arabic/English)

### Playground AI Tools CSAT

| Tool | Trigger | Widget Type | Placement |
|------|---------|-------------|-----------|
| **Rephrase / Proofread / Criticize** | After user clicks "Keep" or "Copy" on AI preview overlay | Inline 5-star row below the accepted text | Inside `ai-preview-overlay` modal, below action buttons |
| **Rephrase / Proofread / Criticize** | After user clicks "Ignore" (discard) | Different prompt: "What went wrong?" + optional comment | Replace the overlay content briefly |
| **Write with AI** | After research plan / content is generated and displayed | Inline stars below generated content | Below the generated output area |
| **Chat** | After 5+ messages in a conversation | Subtle thumbs 👍👎 on each AI response | Next to each AI message bubble |
| **Chat** | On chat close / navigate away | Slide-up: "How was this chat session?" 1-5 stars | Bottom of chat panel |
| **Summarize** | After summary displayed | Inline stars | Below summary output |
| **Citation** | After citation generated | Inline stars | Below citation output |
| **Translation** (playground) | After translation result shown | Reactivate existing thumbs + add stars | Below translation output |
| **Tashkeel** | After result shown | Inline stars | Below tashkeel output |
| **Keywords** | After keywords extracted | Inline stars | Below keywords output |

**Key AI metadata to capture:**
- `PlaygroundUsageLog.id` — links CSAT to the specific API call
- Operation type
- User action (keep / copy / ignore / regenerate)
- Token count
- Latency
- Whether user edited the AI output after accepting

### OCR CSAT

| Trigger | When | Widget Type |
|---------|------|-------------|
| **Job completed** | Status changes to "completed" on the job card | Inline stars on the job card |
| **Download** | User clicks Download (DOCX/PDF/MD) | Slide-up after 3s delay: "How was the OCR quality?" |
| **Job failed** | Status changes to "failed" | Different prompt: "Sorry this failed. Was the file readable?" + file type info |

**Key OCR metadata:**
- Job ID, page count
- Output format downloaded
- File type (PDF scan, image, etc.)
- Processing time

### Translation (Standalone) CSAT

| Trigger | When | Widget Type |
|---------|------|-------------|
| **Translation complete** | Result appears in target textarea | Reactivate commented-out thumbs UI + add 5-star row |
| **Copy result** | User copies translated text | Quick thumbs confirmation |

---

## Implementation Plan

### Step 1: Entity + Migration + Repository + API Controller

| File | Purpose |
|------|---------|
| `src/Entity/CsatFeedback.php` | Doctrine entity |
| `src/Repository/CsatFeedbackRepository.php` | Queries for admin dashboard |
| `src/Controller/CsatController.php` | `POST /api/csat/feedback`, `GET /api/csat/status` |
| `migrations/VersionXXXX.php` | Create `csat_feedback` table (collation: `utf8mb4_unicode_ci`) |

**Routes:**
```yaml
# Attribute-based in CsatController
POST   /api/csat/feedback       # Submit feedback
GET    /api/csat/status          # Check if already submitted (by context + session)
```

**Validation rules:**
- `rating`: required, 1-5
- `context`: required, must be in allowed enum list
- `recommend`: optional, 1-10
- `comment`: optional, max 2000 chars
- Rate limit: max 20 feedback submissions per user per day
- Dedup: max 1 feedback per context + session_key per hour
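
A sketch of these rules as a single pure check; the context allowlist mirrors the values listed in the entity comment, while actual enforcement (including rate limiting and dedup, which need server-side state) would live in `CsatController`:

```javascript
const ALLOWED_CONTEXTS = new Set([
    'search', 'ocr', 'ai_detection', 'translation', 'chatbot',
    'playground_rephrase', 'playground_proofread', 'playground_write',
    'playground_chat', 'playground_translate', 'playground_summarize',
    'playground_citation', 'playground_tashkeel', 'playground_keywords',
]);

// Returns a list of violation messages; an empty list means the payload is valid.
function validateCsatPayload(p) {
    const errors = [];
    if (!ALLOWED_CONTEXTS.has(p.context)) errors.push('unknown context');
    if (!Number.isInteger(p.rating) || p.rating < 1 || p.rating > 5) {
        errors.push('rating must be 1-5');
    }
    if (p.recommend != null
        && (!Number.isInteger(p.recommend) || p.recommend < 1 || p.recommend > 10)) {
        errors.push('recommend must be 1-10');
    }
    if (p.comment != null && p.comment.length > 2000) {
        errors.push('comment too long (max 2000 chars)');
    }
    return errors;
}
```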

### Step 2: Reusable JS Widget (`assets/js/csat-widget.js`)

Single file, no framework dependency. Features:
- Renders star rating UI (1-5 clickable stars)
- Optional NPS slider (1-10)
- Optional free-text comment (expandable)
- Sends POST to `/api/csat/feedback` via fetch
- Stores submission state in `sessionStorage` to prevent re-prompting
- RTL-aware (Arabic layout)
- Bilingual labels via data attributes
- Configurable trigger: auto (timer), manual (button click), or programmatic
- Smooth slide-up animation
- Dismissible (X button), records dismissal to avoid re-showing
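
The `sessionStorage` re-prompt guard could look like this (storage is injected so the same logic also runs outside a browser; names are illustrative):

```javascript
// Tracks whether feedback was already submitted, or the widget dismissed,
// for a given context within the current browser session.
class CsatPromptGuard {
    constructor(storage = window.sessionStorage) {
        this.storage = storage;
    }
    key(context) {
        return `csat_done_${context}`;
    }
    shouldPrompt(context) {
        return this.storage.getItem(this.key(context)) === null;
    }
    markDone(context) {
        this.storage.setItem(this.key(context), '1');
    }
}
```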

### Step 3: Wire Into Search (`all.html.twig`)

**Integration points:**
1. After zero-result queries — show inline "help us improve" banner
2. After 3+ searches in session — show slide-up CSAT widget
3. On result click with dwell time — log implicit satisfaction signal

**JS hooks in `filters.js`:**
```javascript
// Track search count in session
let searchCount = parseInt(sessionStorage.getItem('search_count') || '0', 10);
searchCount++;
sessionStorage.setItem('search_count', String(searchCount));

// After 3rd search, show CSAT widget
if (searchCount === 3 && !sessionStorage.getItem('csat_search_shown')) {
    new CsatWidget({
        context: 'search',
        trigger: 'auto',
        delay: 2000,
        position: 'bottom-right',
        metadata: { query: currentQuery, results_count: totalResults, search_type: searchType }
    }).show();
    sessionStorage.setItem('csat_search_shown', '1');
}
```

### Step 4: Wire Into Playground AI Preview

**Integration point:** Inside the `ai-preview-overlay` in `playground/index.html.twig`

After user clicks "Keep" or "Copy":
```javascript
// In the existing Keep/Copy handler
onAiResultAction(action) {
    // ... existing logic ...
    
    // Show inline CSAT
    new CsatWidget({
        context: 'playground_' + currentOperation,
        trigger: 'auto',
        position: 'inline',
        showNps: false,
        metadata: {
            operation: currentOperation,
            usage_log_id: lastUsageLogId,
            action: action,  // 'keep' | 'copy' (the "Ignore" path is handled separately below)
            tokens_used: lastTokenCount
        }
    }).show(document.querySelector('.ai-preview-actions'));
}
```

After user clicks "Ignore":
```javascript
// Show different prompt
new CsatWidget({
    context: 'playground_' + currentOperation,
    position: 'inline',
    metadata: { operation: currentOperation, action: 'ignore' },
    presetRating: null,  // don't show stars
    showComment: true,   // show "What went wrong?" text box
    commentPlaceholder: 'ما الذي لم يعجبك في النتيجة؟'
}).show(document.querySelector('.ai-preview-overlay'));
```

### Step 5: Wire Into OCR

**Integration point:** `templates/ocr/index.html.twig`, on job card status change to "completed"

```javascript
// In the existing polling/SSE handler that updates job status
if (job.status === 'completed') {
    // Add inline stars to the job card
    new CsatWidget({
        context: 'ocr',
        trigger: 'auto',
        delay: 1000,
        position: 'inline',
        metadata: { job_id: job.id, pages: job.pages }
    }).show(jobCard.querySelector('.job-actions'));
}
```

### Step 6: Wire Into ChatBot

**Integration point:** `templates/ChatBot/frame.html.twig`

Per-message thumbs:
```javascript
// After each AI response renders, attach quick thumbs buttons
function appendMessageRating(messageEl, messageIndex) {
    const thumbs = document.createElement('div');
    thumbs.innerHTML =
        '<button type="button" data-value="up">👍</button> ' +
        '<button type="button" data-value="down">👎</button>';
    thumbs.addEventListener('click', (e) => {
        const value = e.target.dataset.value;
        if (!value) return; // click landed outside the buttons
        new CsatWidget({
            context: 'chatbot',
            metadata: { research_slug: currentSlug, message_index: messageIndex }
        }).submitQuick(value === 'up' ? 5 : 1);
    });
    messageEl.appendChild(thumbs);
}
```

### Step 7: Migrate Existing AI Detection Feedback

```sql
-- One-time migration: copy ai_detection_feedback → csat_feedback
INSERT INTO csat_feedback (user_id, context, rating, recommend, comment, metadata, created_at)
SELECT user_id, 'ai_detection', rating, recommend, comment, NULL, created_at
FROM ai_detection_feedback;
```

Then update `AiDetectionController` to use the new `CsatController` API, and replace the custom widget with the shared `CsatWidget`.

### Step 8: Admin Dashboard — Unified CSAT Tab

Extend existing CSAT tab in `templates/admin/playground/dashboard.html.twig`:

**Metrics per context:**

| Metric | Calculation |
|--------|-------------|
| **CSAT Score** | `(count of 4-5 ratings / total ratings) × 100` |
| **NPS** | `% promoters (9-10) − % detractors (1-6)` |
| **Response Rate** | `feedback submissions / eligible widget impressions` |
| **Avg Rating** | Mean of all ratings |
| **Volume** | Total feedback count |
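
The two score formulas written out as code (ratings are 1-5 stars, recommend values 1-10 as defined for the `recommend` field above):

```javascript
// CSAT: share of 4-5 star ratings, as a percentage.
function csatScore(ratings) {
    if (ratings.length === 0) return null;
    const satisfied = ratings.filter((r) => r >= 4).length;
    return (satisfied / ratings.length) * 100;
}

// NPS: % promoters (9-10) minus % detractors (1-6), on the 1-10 scale used here.
function npsScore(recommends) {
    if (recommends.length === 0) return null;
    const promoters = recommends.filter((r) => r >= 9).length;
    const detractors = recommends.filter((r) => r <= 6).length;
    return ((promoters - detractors) / recommends.length) * 100;
}
```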

**Dashboard panels:**
1. **Overview cards**: Overall CSAT %, overall NPS, total responses (last 30 days)
2. **CSAT by context**: Bar chart — CSAT % for search, each AI tool, OCR, etc.
3. **Trend chart**: Line chart — daily CSAT % over time, filterable by context
4. **Lowest-rated contexts**: Table sorted by worst CSAT, highlighting areas needing attention
5. **Recent comments**: Scrollable feed of latest free-text feedback, filterable by context and rating
6. **Search-specific panel**: Zero-result rate vs. CSAT, query refinement rate, correlation between CTR position and satisfaction
7. **AI tool-specific panel**: CSAT by operation type, keep/ignore ratio correlated with rating, average rating by token count buckets

**API routes:**
```
GET /jim19ud83/playground/api/csat-overview          # Overall CSAT + NPS
GET /jim19ud83/playground/api/csat-by-context        # Per-context breakdown
GET /jim19ud83/playground/api/csat-trend             # Daily trend, ?context=X
GET /jim19ud83/playground/api/csat-comments          # Recent comments, ?context=X&rating=1-2
GET /jim19ud83/playground/api/csat-search-correlation # Search metrics vs satisfaction
```

---

## Smart Prompting Strategy (Avoid Survey Fatigue)

| Rule | Implementation |
|------|----------------|
| **Max 1 prompt per session** | `sessionStorage` flag prevents re-showing |
| **Max 1 per context per week per user** | Server-side check in `GET /api/csat/status` |
| **Respect dismissals** | If user closes widget without responding, don't show again for 48h |
| **Progressive disclosure** | First show: stars only. If user rates, slide open NPS. If NPS given, show comment box. |
| **Sampling for NPS** | Only show the NPS question to every 5th user (reduces fatigue; NPS stays meaningful at lower sample volume) |
| **High-value moments** | Prioritize prompting after: first-time AI usage, after "Keep" action, after OCR download |
| **Skip negative contexts** | Don't show CSAT when: operation failed (show error feedback instead), user has 0 credits, page load > 5s |
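
The dismissal and per-week rules reduce to timestamp checks; a sketch with the clock injected for testability (constants taken from the 48-hour and one-week windows in the table):

```javascript
const DISMISS_COOLDOWN_MS = 48 * 3600 * 1000;       // 48h after a dismissal
const CONTEXT_COOLDOWN_MS = 7 * 24 * 3600 * 1000;   // 1 week after a submission

// Decide whether to show the widget, given last-seen timestamps
// (ms since epoch, or null if never). `now` is injectable for tests.
function canPrompt({ lastDismissedAt = null, lastSubmittedAt = null }, now = Date.now()) {
    if (lastSubmittedAt !== null && now - lastSubmittedAt < CONTEXT_COOLDOWN_MS) {
        return false;
    }
    if (lastDismissedAt !== null && now - lastDismissedAt < DISMISS_COOLDOWN_MS) {
        return false;
    }
    return true;
}
```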

---

## Translations

```yaml
# translations/Csat.ar.yml
csat:
  rate_experience: "كيف تقيّم تجربتك؟"
  rate_search: "هل وجدت ما تبحث عنه؟"
  rate_ai_result: "هل كانت النتيجة مفيدة؟"
  rate_ocr: "كيف تقيّم جودة التعرف على النص؟"
  what_went_wrong: "ما الذي لم يعجبك في النتيجة؟"
  thank_you: "شكراً لملاحظاتك!"
  comment_placeholder: "أخبرنا المزيد (اختياري)"
  nps_question: "ما مدى احتمال أن توصي بهذه الأداة لزميل؟"
  nps_unlikely: "غير محتمل"
  nps_likely: "محتمل جداً"
  zero_results_help: "لم نجد نتائج. ساعدنا بتحسين البحث — ماذا كنت تبحث عنه؟"
  dismiss: "لاحقاً"

# translations/Csat.en.yml
csat:
  rate_experience: "How would you rate your experience?"
  rate_search: "Did you find what you were looking for?"
  rate_ai_result: "Was this result helpful?"
  rate_ocr: "How would you rate the OCR quality?"
  what_went_wrong: "What didn't work well?"
  thank_you: "Thanks for your feedback!"
  comment_placeholder: "Tell us more (optional)"
  nps_question: "How likely are you to recommend this tool to a colleague?"
  nps_unlikely: "Not likely"
  nps_likely: "Very likely"
  zero_results_help: "No results found. Help us improve — what were you searching for?"
  dismiss: "Maybe later"
```

---

## Data-Driven Improvement Loop

The purpose of CSAT isn't just collecting scores — it's closing the feedback loop:

```
Collect → Analyze → Act → Measure
   ↑                          |
   └──────────────────────────┘
```

### Weekly Review Process

1. **Monday dashboard check**: Review CSAT by context, identify any drops
2. **Low-CSAT investigation**: For contexts scoring < 60% CSAT:
   - Read recent negative comments
   - Cross-reference with usage logs (latency? token count? specific operation?)
   - Check if correlated with specific query types or file types
3. **Action items**: Create tickets for top 2-3 issues found
4. **Track improvement**: After shipping fixes, compare CSAT before/after

### Search Quality Signals

| Signal | What It Means | Action |
|--------|--------------|--------|
| Low CSAT + high CTR | Users click but don't find value | Improve content quality / relevance ranking |
| Low CSAT + low CTR | Results don't look relevant | Improve title matching / snippet quality |
| Low CSAT + zero results | Missing content | Add content for these query topics |
| High CSAT + low CTR | Users find answer in snippets without clicking | Good — consider showing richer previews |
| High refinement rate + low CSAT | Users struggling to formulate query | Add search suggestions / "did you mean?" |

### AI Quality Signals

| Signal | What It Means | Action |
|--------|--------------|--------|
| Low CSAT + high "Ignore" rate | AI output not useful | Improve prompts / model selection |
| Low CSAT + high latency | Frustration from waiting | Optimize token usage / streaming |
| Low CSAT on specific operation | One tool underperforming | Focus prompt engineering on that tool |
| Low CSAT + edited after "Keep" | Output close but not perfect | Fine-tune prompts for that operation |
| High CSAT + low usage | Good tool but poor discoverability | Improve UI/UX, add tooltips, promote in onboarding |

---

## Priority & Effort Matrix

| Step | Priority | Effort | Impact |
|------|----------|--------|--------|
| 1. Entity + API | **HIGH** | Medium | Foundation for everything |
| 2. JS Widget | **HIGH** | Medium | Reusable across all touchpoints |
| 3. Search CSAT | **HIGH** | Low | Highest user volume, biggest insight gap |
| 4. Playground AI CSAT | **HIGH** | Medium | Direct revenue impact (credit-based tools) |
| 5. OCR CSAT | Medium | Low | Smaller user base, but high per-user value |
| 6. ChatBot CSAT | Medium | Low | Per-message thumbs are easy wins |
| 7. Migrate AI Detection | Low | Low | Unify data, but existing system works |
| 8. Admin Dashboard | **HIGH** | High | Makes all data actionable |

**Recommended implementation order:** 1 → 2 → 4 → 3 → 8 → 5 → 6 → 7

---

## Success Metrics

| Metric | Target | Measurement |
|--------|--------|-------------|
| **Response rate** | > 5% of eligible sessions | Feedback submissions / sessions with CSAT-eligible actions |
| **Overall CSAT** | > 70% (4-5 star) | Across all contexts |
| **Search CSAT** | > 60% initially, improve to 75% | After 3 months of iteration |
| **AI tools CSAT** | > 75% | Writing tools should have high satisfaction |
| **NPS** | > 30 | Positive NPS indicates growth potential |
| **Feedback-to-action time** | < 1 week | Time from negative feedback pattern → fix shipped |
| **CSAT improvement after fix** | +10% per quarter | Measurable lift after addressing low-CSAT areas |
