
Commit 8739b1c

Fix Markdown syntax issues
1 parent 9433f8e commit 8739b1c

2 files changed: 1 addition & 2 deletions


5_AI_Initial_Questions.md

Lines changed: 0 additions & 1 deletion
@@ -28,7 +28,6 @@ Last updated: 2025-11-03
| **Budget & Timeline** | What’s your budget and timeline for this initiative? | `We have a 3-month window and a $50K budget.` | Helps prioritize scope and feasibility.|
| **Success Metrics** | How will you measure the success of this AI solution? | `Reduction in ticket resolution time by 30%.` | Aligns technical goals with business KPIs.|

-
<!-- START BADGE -->
<div align="center">
<img src="https://img.shields.io/badge/Total%20views-1413-limegreen" alt="Total views">

README.md

Lines changed: 1 addition & 1 deletion
@@ -468,7 +468,7 @@ gpt-rag-resource-group: resource not found: 0 resource groups with prefix or suffix
> If `main.parameters.json` contains `"location": "westus2"`, make sure your environment has `AZURE_LOCATION=westus2`.

> [!NOTE]
- > A `golden dataset` for RAG is your trusted, `curated set of documents or files that the system retrieves from when answering questions`. It’s a clean, accurate, and `representative subset of all possible data free of noise and errors`, so the model always pulls reliable context. Is a `subset of files, for example, and known Q&A pairs chosen from the larger data source.` These are the “benchmark” `questions where the correct answers are already known`, so they can be `used later to measure system accuracy and performance`. Other `expert users are free to ask additional questions during testing, but those will still pull context from the same curated files in the golden dataset (subset datasource) `. In short, it’s the trusted evaluation set for your proof of concept for example.
+ > A `golden dataset` for RAG is your trusted, `curated set of documents or files that the system retrieves from when answering questions`. It’s a clean, accurate, and `representative subset of all possible data free of noise and errors`, so the model always pulls reliable context. Is a `subset of files, for example, and known Q&A pairs chosen from the larger data source.` These are the “benchmark” `questions where the correct answers are already known`, so they can be `used later to measure system accuracy and performance`. Other `expert users are free to ask additional questions during testing, but those will still pull context from the same curated files in the golden dataset (subset datasource)`. In short, it’s the trusted evaluation set for your proof of concept for example.

<img width="411" height="243" alt="Untitled Diagram drawio" src="https://github.com/user-attachments/assets/40682ec2-77e4-4413-88e5-d343f036f084" />
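The `AZURE_LOCATION` tip in the context above can be checked mechanically. Below is a minimal sketch, not part of this commit: the `infra/main.parameters.json` path and the two key layouts it probes are assumptions; adjust them to your checkout.

```python
# Sketch: warn when the azd environment's AZURE_LOCATION disagrees with the
# "location" value in main.parameters.json. PARAMS_PATH is an assumption;
# point it at wherever your checkout keeps the parameter file.
import json
import os

PARAMS_PATH = "infra/main.parameters.json"  # hypothetical path

with open(PARAMS_PATH) as f:
    params = json.load(f)

# ARM/azd parameter files usually nest values as parameters.location.value,
# but flat {"location": "westus2"} layouts exist too, so try both.
declared = (
    params.get("parameters", {}).get("location", {}).get("value")
    or params.get("location")
)
env_location = os.environ.get("AZURE_LOCATION")

if declared and env_location and declared != env_location:
    print(f"Mismatch: file says {declared!r} but AZURE_LOCATION={env_location!r}")
else:
    print("Locations agree, or one side is unset.")
```

If they disagree, `azd env set AZURE_LOCATION westus2` updates the environment value to match the parameter file.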

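The golden-dataset note also lends itself to a concrete shape. Here is a minimal evaluation sketch under stated assumptions: the `golden_qa.jsonl` file name, its JSONL layout, and the `ask_rag` callable are all hypothetical, and the naive containment check stands in for whatever scoring metric you actually use.

```python
# Sketch: score a RAG system against a golden dataset of known Q&A pairs.
# golden_qa.jsonl is a hypothetical file, one JSON object per line:
#   {"question": "...", "expected_answer": "..."}
import json

def load_golden_pairs(path="golden_qa.jsonl"):
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]

def evaluate(ask_rag, pairs):
    """Return the fraction of benchmark questions answered acceptably.

    `ask_rag` is a placeholder for whatever callable queries the deployed
    system and returns its answer as a string.
    """
    correct = 0
    for pair in pairs:
        answer = ask_rag(pair["question"])
        # Naive containment check; real evaluations typically use semantic
        # similarity or an LLM judge rather than string matching.
        if pair["expected_answer"].strip().lower() in answer.lower():
            correct += 1
    return correct / len(pairs) if pairs else 0.0

if __name__ == "__main__":
    pairs = load_golden_pairs()
    accuracy = evaluate(lambda q: "stub answer", pairs)  # swap in a real client
    print(f"Accuracy on {len(pairs)} golden questions: {accuracy:.0%}")
```

Because the expected answers are fixed up front, the same script can be rerun after every index or prompt change to track accuracy over the life of the proof of concept.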