To help new students moving through Unit 1, here is a heads-up on three common "non-errors" and UI bugs that can slow you down.
1. Ollama "Socket Address" Error
If you see this error: Error: listen tcp 127.0.0.1:11434: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted.
What it means: The Ollama server is already running in the background (it often starts automatically on installation).
The Fix (Windows): Check your System Tray (bottom-right of the taskbar, near the clock). Find the Ollama icon, right-click it, and select Quit Ollama.
Next Step: After quitting, the port is free, so you can run your command (like ollama serve) manually if you wish, or simply start pulling models directly.
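If you want to confirm the port really is clear before retrying, a quick stdlib-only Python check works on any OS. This is just a sketch (the helper name is mine; 127.0.0.1:11434 is Ollama's default bind address):

```python
import socket

def port_in_use(host="127.0.0.1", port=11434):
    """Return True if something is already listening on host:port.
    11434 is Ollama's default port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 when the connection succeeds,
        # i.e. a server is already bound to that port
        return s.connect_ex((host, port)) == 0

if port_in_use():
    print("Something is already listening on 11434 - quit Ollama first")
else:
    print("Port 11434 is clear - safe to run 'ollama serve'")
```

If it prints that the port is busy, the tray app (or a stray background process) is still holding it.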
2. Skip Gated Models & Tokens (For Unit 1)
You may see instructions to request access for models like Llama-4 or to configure an HF_TOKEN.
Pro-tip: Don't waste time waiting for gated model approval or complex token setups just to get through Unit 1. (You can set them up, but they aren't required for this specific unit).
The Goal: The "dummy library" is meant for fast experimentation. Jump straight into the code!
Important Note: Some models in the Llama-4 collection may show the error "is not a chat model." If you hit this, use meta-llama/Meta-Llama-3-8B-Instruct instead; it is highly stable and works perfectly with the course code.
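For context on that "is not a chat model" error: chat/instruct models ship with a chat template that turns a list of role-tagged messages into one prompt string, and base models don't have one. Here is a hand-rolled, simplified mimic of the Llama-3 format purely for illustration (the real template lives in the tokenizer and is applied with tokenizer.apply_chat_template):

```python
def llama3_style_prompt(messages):
    """Rough, simplified sketch of the Llama-3 chat format.
    Real chat models do this via tokenizer.apply_chat_template()."""
    prompt = "<|begin_of_text|>"
    for msg in messages:
        # Each message becomes a header block followed by its content
        prompt += (
            f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
            f"{msg['content']}<|eot_id|>"
        )
    # Open an assistant header so the model knows to answer next
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

print(llama3_style_prompt([{"role": "user", "content": "Hello!"}]))
```

A model without such a template can only do raw text completion, which is why the course code (which sends role-tagged messages) rejects it.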
3. Tokenizer Playground Visibility (Dark Mode Bug)
In the "What are LLMs" section, if you are using Dark Mode, the Token IDs might be invisible because the font color and the background are the same.
The Workaround: If you click on the token IDs and see nothing, temporarily switch your Hugging Face theme to Light Mode. This will make the text visible until the CSS is patched.
Good luck with the course! Feel free to add a comment below if you find anything else like this.