Ok. Thanks for the clarification. Still a good project, and many people like to use online services.
I prefer local models. Everything I use (and have used) on a local model could just as well be on an online one; no secrets here. The speed is more than acceptable for a low-end CPU+GPU.
I still use Perplexity sometimes for more complex questions.