Is it possible to use an LLM other than Gemini, specifically open-source LLMs?
Yes! Replace call_llm.py with your own implementation:
https://the-pocket.github.io/PocketFlow/utility_function/llm.html
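For example, here is a minimal sketch of what a replacement could look like when pointed at a locally hosted open-source model served by Ollama. It assumes the `call_llm(prompt) -> str` signature shown in the docs linked above, an Ollama server running on its default port, and a model tag such as `qwen2.5:7b` that you have already pulled; adjust these to your own setup.

```python
# call_llm.py -- a sketch of a drop-in replacement that calls a local
# Ollama server instead of Gemini. Assumes Ollama is running on its
# default port (11434) and the model was pulled beforehand, e.g.
# `ollama pull qwen2.5:7b`.
import requests

def call_llm(prompt: str) -> str:
    """Send a single prompt to a local open-source model and return its reply."""
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": "qwen2.5:7b",  # any locally pulled model tag works
            "messages": [{"role": "user", "content": prompt}],
            "stream": False,  # ask for one complete JSON response
        },
        timeout=600,  # local 7B models can be slow on limited VRAM
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]

if __name__ == "__main__":
    print(call_llm("In one sentence, what is PocketFlow?"))
```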
Thanks for your reply.
Before I got your response, I had already tried open-source models like DeepSeek-R1 7B and Qwen 2.5. I started the uvicorn server properly, gave it your project's GitHub URL, and asked "what is PocketFlow?". It took a very long time to get through 1/5 iterations, so I manually stopped the server. That may be because of my constrained NVIDIA GeForce RTX 4060 with 8GB VRAM, which isn't enough for this kind of task.