mirror of
https://github.com/rasbt/LLMs-from-scratch.git
synced 2026-04-10 12:33:42 +00:00
Update ollama address (#861)
committed by GitHub
parent 00c240ff87
commit 4d9f9dcb6c
@@ -2278,19 +2278,39 @@
},
{
"cell_type": "markdown",
"id": "747a2fc7-282d-47ec-a987-ed0a23ed6822",
"metadata": {
"id": "747a2fc7-282d-47ec-a987-ed0a23ed6822"
},
"id": "267cd444-3156-46ad-8243-f9e7a55e66e7",
"metadata": {},
"source": [
"- For macOS and Windows users, click on the ollama application you downloaded; if it prompts you to install the command line usage, say \"yes\"\n",
"- Linux users can use the installation command provided on the ollama website\n",
"\n",
"- In general, before we can use ollama from the command line, we have to either start the ollama application or run `ollama serve` in a separate terminal\n",
"\n",
"<img src=\"https://sebastianraschka.com/images/LLMs-from-scratch-images/ch07_compressed/ollama-run.webp?1\" width=700px>\n",
"<img src=\"https://sebastianraschka.com/images/LLMs-from-scratch-images/ch07_compressed/ollama-run.webp?1\" width=700px>"
]
},
{
"cell_type": "markdown",
"id": "30266e32-63c4-4f6c-8be3-c99e05ed05b7",
"metadata": {},
"source": [
"---\n",
"\n",
"**Note**:\n",
"\n",
"- When running `ollama serve` in the terminal, as described above, you may encounter an error message saying `Error: listen tcp 127.0.0.1:11434: bind: address already in use`\n",
"- If that's the case, try using the command `OLLAMA_HOST=127.0.0.1:11435 ollama serve` (and if this address is also in use, try incrementing the port number by one until you find an address that is not in use)\n",
"\n",
"---"
]
},
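The note above suggests incrementing the port number by hand until one is free; the same search can be sketched in Python. This is only an illustrative helper (the name `find_free_ollama_port` is introduced here and is not part of the chapter's code): it tries to bind candidate ports starting from ollama's default, 11434, and reports the first free one for use with `OLLAMA_HOST`.

```python
import socket

def find_free_ollama_port(host="127.0.0.1", start_port=11434, max_tries=10):
    # Mirror the tip above: start at ollama's default port (11434)
    # and increment by one until a port binds successfully
    for port in range(start_port, start_port + max_tries):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            try:
                sock.bind((host, port))
            except OSError:
                continue  # address already in use; try the next port
            return port
    raise RuntimeError("No free port found in the scanned range")

print(f"OLLAMA_HOST=127.0.0.1:{find_free_ollama_port()} ollama serve")
```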
{
"cell_type": "markdown",
"id": "747a2fc7-282d-47ec-a987-ed0a23ed6822",
"metadata": {
"id": "747a2fc7-282d-47ec-a987-ed0a23ed6822"
},
"source": [
"- With the ollama application or `ollama serve` running in a different terminal, on the command line, execute the following command to try out the 8-billion-parameter Llama 3 model (the model, which takes up 4.7 GB of storage space, will be automatically downloaded the first time you execute this command)\n",
"\n",
"```bash\n",
@@ -2475,6 +2495,8 @@
"def query_model(\n",
" prompt,\n",
" model=\"llama3\",\n",
" # If you used OLLAMA_HOST=127.0.0.1:11435 ollama serve\n",
" # update the address from 11434 to 11435\n",
" url=\"http://localhost:11434/api/chat\"\n",
"):\n",
" # Create the data payload as a dictionary\n",
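The hunk above shows only the signature of `query_model` and the start of its body. A minimal self-contained sketch of such a helper, assuming ollama's `/api/chat` endpoint and its response shape (`{"message": {"content": ...}}`), could look like the following; the `build_chat_payload` helper is a name introduced here for illustration, so the request body can be inspected without a running server:

```python
import json
import urllib.request

def build_chat_payload(prompt, model="llama3"):
    # Request body for ollama's /api/chat endpoint
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # return one complete JSON object, not chunks
        "options": {"seed": 123, "temperature": 0.0},  # deterministic replies
    }

def query_model(
    prompt,
    model="llama3",
    # If you used OLLAMA_HOST=127.0.0.1:11435 ollama serve,
    # update the address from 11434 to 11435
    url="http://localhost:11434/api/chat",
):
    payload = json.dumps(build_chat_payload(prompt, model)).encode("utf-8")
    request = urllib.request.Request(url, data=payload, method="POST")
    request.add_header("Content-Type", "application/json")
    with urllib.request.urlopen(request) as response:
        result = json.loads(response.read().decode("utf-8"))
    return result["message"]["content"]
```

Setting `"stream": False` makes the endpoint return a single JSON object instead of a stream of chunks, which keeps the response parsing to one `json.loads` call.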