This is an SLM (Small Language Model) example that showcases using AI to help a website's users better understand the site's functionality. It aims to reduce the time and effort users spend reading documentation, and to reduce the number of calls to the customer service team.
- LLM: Phi-4 Mini
- Embedding model: mxbai-embed-large
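Once the data is embedded, answering a question starts with retrieving the most similar chunk from the vector database. As a rough sketch of that retrieval step (the function names and toy 2-D "embeddings" below are illustrative, not this repo's actual code — in practice the vectors come from mxbai-embed-large):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_embedding, store, top_k=1):
    """Return the top_k stored chunks most similar to the query.

    `store` is a list of (chunk_text, embedding) pairs, as an
    ingestion step like ingest_job.py might produce.
    """
    ranked = sorted(
        store,
        key=lambda pair: cosine_similarity(query_embedding, pair[1]),
        reverse=True,
    )
    return [text for text, _ in ranked[:top_k]]

# Toy 2-D vectors for illustration only
store = [("pricing page", [1.0, 0.0]), ("login help", [0.0, 1.0])]
print(retrieve([0.9, 0.1], store))  # → ['pricing page']
```

The retrieved chunk(s) are then passed to the SLM as context, so the model answers from your site's own content instead of guessing.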
Make sure you have Docker Desktop installed.

- Run `docker compose up -d --build` to set up the Docker containers and all the requirements.
- Pull the SLM: `docker exec -it ollama ollama pull phi4-mini:latest`
- Pull the embedding model: `docker exec -it ollama ollama pull mxbai-embed-large:latest`
- Run the ingestion script to embed the data into the vector database: `docker compose run --rm rag_web_app python /app/ingest_job.py`
- Run `docker compose up -d --build` again so the SLM knows it can access the vector database.
- It should all be set up now! Visit http://localhost:8501/ to view the chatbot.
Feel free to fork and reuse this repo (remember to adjust the prompt template)!
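If you do reuse this repo, the prompt template is the main thing to change. A minimal sketch of how such a template might stuff the retrieved chunks into the SLM's prompt (the template text, site name, and function name here are hypothetical, not the repo's actual template):

```python
# Hypothetical prompt template -- adjust the wording, site name, and
# rules to match your own website before reusing.
PROMPT_TEMPLATE = """You are a helpful assistant for example.com.
Answer the user's question using ONLY the context below.
If the answer is not in the context, say you don't know.

Context:
{context}

Question: {question}
Answer:"""

def build_prompt(question, retrieved_chunks):
    """Join the retrieved chunks and fill in the template before sending it to the SLM."""
    context = "\n---\n".join(retrieved_chunks)
    return PROMPT_TEMPLATE.format(context=context, question=question)

print(build_prompt(
    "How do I reset my password?",
    ["Go to Settings > Account > Reset password."],
))
```

Grounding the model this way ("answer ONLY from the context") is what keeps the chatbot on-topic for your site, so it is worth tuning that instruction carefully.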