nostr relay proxy


Very new to exploring this. I've used ChatGPT and that's about it. So far, running Ollama through Docker has been great for local use. I'm mostly testing small open models (≤8B parameters). We'll see how these small models do on my M1… I imagine they'll give me enough of a boost that I won't choose to pay for something. What's your experience been like?
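
For anyone curious, the Docker setup mentioned above can be sketched roughly like this. This is a minimal sketch based on the standard Ollama Docker image; the model tag (`llama3:8b`) is just an example of a small open model, not necessarily the one used here:

```shell
# Start the Ollama server in a container, persisting models in a named volume
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Pull and chat with a small (~8B-parameter) open model interactively
docker exec -it ollama ollama run llama3:8b
```

Note that on Apple Silicon, Docker containers run CPU-only (no Metal GPU access), so running the native macOS Ollama app instead can be noticeably faster on an M1.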
