
Commit 9973aec (parent: 72e6a3b)

Optimized enhanced context for LLM from large knowledge base

1 file changed: +15 −0 lines changed


content/post/6minutes-tech-rag.md

Lines changed: 15 additions & 0 deletions
@@ -87,6 +87,21 @@ Utilizing mentorship or buddy systems.
![RAG with a foundation model on AWS SageMaker JumpStart](https://docs.aws.amazon.com/images/sagemaker/latest/dg/images/jumpstart/jumpstart-fm-rag.jpg)
Common use cases:
1. Provide enhanced context for internal information
2. Optimize enhanced context for a large internal knowledge base

### Provide enhanced context for internal information

### Optimize enhanced context for a large internal knowledge base
Let's say you have 10,000 products in your database. Feeding the LLM the full details of all 10,000 products in every conversation is not a viable solution: the token count for each question would explode, and the cost would bankrupt you almost immediately.
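To put rough numbers on the problem, here is a back-of-envelope calculation. The tokens-per-product and price figures are hypothetical assumptions for illustration only, not real catalog or API pricing data:

```python
# Back-of-envelope cost of stuffing the whole catalog into every prompt.
# All figures are hypothetical assumptions for illustration.
products = 10_000
tokens_per_product = 150        # assumed: name, description, price, specs
price_per_1k_tokens = 0.01      # assumed input token price in USD

context_tokens = products * tokens_per_product
cost_per_question = context_tokens / 1000 * price_per_1k_tokens

print(f"{context_tokens:,} context tokens per question")  # 1,500,000
print(f"${cost_per_question:.2f} per question")           # $15.00
```

Even before cost, 1.5 million tokens of context would not fit in a typical model's context window at all, which is why retrieval is necessary rather than optional.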
The main idea for an assistant chatbot like this is to build an internal knowledge base, turn the user's question into an internal query that finds only the relevant products, and feed those to the LLM. You should also have a process for gathering the necessary information from the customer so you can build the appropriate internal queries.
![Optimize Enhanced Context for LLM from Large Knowledge Base](https://gist.github.com/user-attachments/assets/b4ca75ea-7223-461f-bee7-c59038ea5f01)
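The retrieve-then-generate flow described above can be sketched as below. The product catalog and the simple keyword scorer are hypothetical placeholders; a production system would use embeddings and a vector store for retrieval instead:

```python
# Minimal retrieve-then-generate sketch: only the retrieved products,
# not the whole catalog, are placed into the LLM prompt.
# CATALOG and the tag-overlap scorer are illustrative assumptions.
CATALOG = [
    {"name": "Trail Runner X", "tags": ["shoes", "running", "outdoor"]},
    {"name": "City Sneaker", "tags": ["shoes", "casual"]},
    {"name": "Camp Stove Pro", "tags": ["camping", "outdoor", "cooking"]},
]

def retrieve(question: str, top_k: int = 2):
    """Rank products by keyword overlap with the question (embedding stand-in)."""
    words = set(question.lower().split())
    scored = sorted(
        CATALOG,
        key=lambda p: len(words & set(p["tags"])),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question: str) -> str:
    """Build an LLM prompt containing only the relevant products."""
    context = "\n".join(f"- {p['name']}" for p in retrieve(question))
    return f"Relevant products:\n{context}\n\nCustomer question: {question}"

print(build_prompt("outdoor running shoes"))
```

For "outdoor running shoes", only the two best-matching products reach the prompt, so the context stays small and the per-question token cost stays flat no matter how large the catalog grows.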
## References
- https://aws.amazon.com/what-is/retrieval-augmented-generation/
