nostr relay proxy

event page

Use Venice.ai and select the llama3.1 model. It's a great option for a big model that you can't run locally. Otherwise, a local llama3.1 20B is solid if you have the RAM.
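For context, a sketch of what talking to such a model looks like, assuming Venice.ai exposes an OpenAI-style chat-completions endpoint (the URL and model name below are assumptions for illustration, not taken from the note; check the Venice.ai docs for the real values):

```python
import json

# Assumed endpoint; verify against Venice.ai's API documentation.
API_URL = "https://api.venice.ai/api/v1/chat/completions"

def build_request(prompt: str, model: str = "llama3.1") -> dict:
    """Build an OpenAI-style chat-completion request body.

    The same body shape works against a local runtime (e.g. one
    serving llama3.1) by pointing at a different base URL.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

body = build_request("Summarize NIP-01 in one sentence.")
print(json.dumps(body, indent=2))
```

Because the request shape is the same either way, switching between the hosted and local model is mostly a matter of changing the base URL and model name.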
