souravmighty committed
Commit 9b44489 • 1 Parent(s): 811a1e3

Documentation

app.py CHANGED
```diff
@@ -37,7 +37,7 @@ async def on_chat_start():
                 id="Model",
                 label="Choose your favorite LLM:",
                 values=["llama3-8b-8192", "llama3-70b-8192", "mixtral-8x7b-32768", "gemma-7b-it"],
-                initial_index=0,
+                initial_index=1,
             )
         ]
     ).send()
@@ -101,15 +101,11 @@ async def setup_agent(settings):
     user_env = cl.user_session.get("env")
     os.environ["GROQ_API_KEY"] = user_env.get("GROQ_API_KEY")
 
-    memory=get_memory()
-    docsearch = cl.user_session.get("docsearch")
-
     msg = cl.Message(content = f"You are using `{settings['Model']}` as LLM. You can change model in `Settings Panel` in the chat box.")
     await msg.send()
 
-
     memory=get_memory()
-
+    docsearch = cl.user_session.get("docsearch")
 
     # Create a chain that uses the Chroma vector stores
     chain = ConversationalRetrievalChain.from_llm(
```
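The first hunk changes the `Select` widget's `initial_index` from 0 to 1, which moves the pre-selected model from the first to the second entry of `values`. A minimal, dependency-free sketch of that `values`/`initial_index` pairing (it mirrors how Chainlit's `Select` widget picks its default, but `default_model` is an illustrative helper, not the app's actual code):

```python
# Model list taken from the diff above; the index picks the default entry.
MODELS = ["llama3-8b-8192", "llama3-70b-8192", "mixtral-8x7b-32768", "gemma-7b-it"]

def default_model(values, initial_index=0):
    """Return the entry a settings panel would pre-select."""
    if not 0 <= initial_index < len(values):
        raise IndexError("initial_index must point into values")
    return values[initial_index]

print(default_model(MODELS, initial_index=0))  # llama3-8b-8192 (before the commit)
print(default_model(MODELS, initial_index=1))  # llama3-70b-8192 (after the commit)
```

This is why the updated README below can state that `llama3-70b-8192` is the default.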
assets/conversational_rag_architecture.gif DELETED
Binary file (515 kB)
 
chainlit.md CHANGED
```diff
@@ -1,6 +1,64 @@
-# Welcome to GroqDoc!
-
-## Useful Links 🔗
-
-- **Groq API KEY:** Generate Groq API Key for free [Groq API Key](https://console.groq.com/keys) 📚
-
+# 📚 GroqDoc: Chat with your Documents
+
+Ever spent hours sifting through a lengthy document, searching for that one specific answer? GroqDoc eliminates this frustration by transforming your documents into interactive chat sessions.
+
+GroqDoc is a chatbot application that lets you chat with your uploaded documents! It leverages the power of open-source Large Language Models (LLMs) and the rapid inference speed of Groq.
+
+## Getting Started
+
+**1. Generate your Groq API Key:**
+To use GroqDoc, you'll need a Groq API key. Here's how to generate one for free:
+- Visit the Groq Cloud console: [https://console.groq.com/](https://console.groq.com/)
+- Sign up for a free account (if you don't have one already).
+- Navigate to the "API Keys" section (usually found in the left-side navigation panel).
+- Click "Create API Key" and give it a descriptive name.
+- Click "Submit" to generate your unique API key.
+- **Important:** Copy and securely store your API key. Don't share it publicly.
+
+**2. Upload your Document:**
+Once you have your Groq API key, launch GroqDoc and upload the `PDF` documents you want to chat with. GroqDoc supports up to 10 PDFs and up to 100 MB at a time. It may take a few seconds to process your documents.
+
+**3. Ask your Questions:**
+Once your document is uploaded, feel free to ask GroqDoc any questions you have about the content. GroqDoc will use its conversational RAG architecture to retrieve relevant information and provide you with answers directly from the document.
+
+**4. Choose your LLM (Optional):**
+GroqDoc offers a selection of open-source LLMs to choose from in the settings panel in the chat box. These include:
+
+- llama3-8b-8192
+- llama3-70b-8192
+- mixtral-8x7b-32768
+- gemma-7b-it
+
+`llama3-70b-8192` is set by default. Experimenting with different LLMs can give you varying perspectives and answer styles.
+
+-----
+
+## Project Overview
+
+GroqDoc is based on a Conversational RAG architecture, which allows it to retrieve relevant information from the uploaded documents and respond based on the user's query and chat history. The following tech stack is used to achieve this:
+
+**1. Framework:** [🦜️🔗 LangChain](https://www.langchain.com/)
+
+**2. Text Embedding Model:** [Snowflake's Arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m)
+
+**3. Vector Store:** [Chroma DB](https://www.trychroma.com/)
+
+**4. Open Source LLMs:**
+- [LLaMA3 8b](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
+  - Developer: Meta
+  - Context Window: 8,192 tokens
+- [LLaMA3 70b](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct)
+  - Developer: Meta
+  - Context Window: 8,192 tokens
+- [Mixtral 8x7b](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
+  - Developer: Mistral
+  - Context Window: 32,768 tokens
+- [Gemma 7b](https://huggingface.co/google/gemma-1.1-7b-it)
+  - Developer: Google
+  - Context Window: 8,192 tokens
+
+**5. Inference Engine:** [🚀 Groq](https://groq.com/)
+
+**6. UI:** [Chainlit](https://docs.chainlit.io/get-started/overview)
+
+**7. Deployment:** [🤗 Hugging Face Space](https://huggingface.co/spaces)
```
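The Project Overview describes retrieval over embedded document chunks: the query is embedded, the vector store ranks stored chunks by similarity, and the top matches feed the LLM. A dependency-free sketch of that retrieval step — the tiny 3-d vectors and sample chunks below are stand-ins for real Arctic-embed-m embeddings in Chroma, and `retrieve` is an illustrative helper, not GroqDoc's actual code:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, store, k=2):
    """store: list of (chunk_text, vector); return the k most similar chunks."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# Toy "vector store": document chunks with hand-made embeddings.
store = [
    ("Groq API keys are created in the console.", [0.9, 0.1, 0.0]),
    ("Uploads are limited to 10 PDFs / 100 MB.",  [0.1, 0.9, 0.1]),
    ("Chroma persists the document embeddings.",  [0.2, 0.2, 0.9]),
]

# A query vector close to the first chunk retrieves it first.
print(retrieve([0.8, 0.2, 0.1], store, k=1))
```

In the app itself this ranking is delegated to Chroma via LangChain's retriever interface; the sketch only shows the idea behind "retrieve relevant information."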