nostr relay proxy

I somehow managed to kill the etcd instance and cause the cluster to permanently fail because I restarted masters too many times in a test deployment. I am honestly reconsidering my plan to use etcd for a few services
k3s does have etcd for HA, if you want to suffer
🪿
I miss my old workflow. deploying a new app was as simple as copying a dir in a repo, replacing the image and ingress.host, and commit/push.
I much prefer configuring some minimal yaml and letting the system deal with scheduling containers. throw an operator like fluxcd in there and point it at a repo of yaml and the workflow for ops is quite nice. I just hate debugging when k8s itself is unhappy. even with k3s, where there’s no etcd by default, I’ve still had mTLS certs expire (why), ultimately locking me out. at this point my personal ops have regressed to systemd units and shell scripts.
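For context, the kind of minimal yaml being described might look something like this; a sketch only, where the app name, image, and host are hypothetical placeholders for the per-app values that get swapped on each copy:

```yaml
# myapp/deploy.yaml -- copy the dir, change image and ingress host, commit/push
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:v1  # <- swap per app
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
    - port: 8080
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
spec:
  rules:
    - host: myapp.example.com  # <- the ingress.host to replace
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 8080
```

With an operator like fluxcd watching the repo, committing a copy of this dir with new values is the whole deploy.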
I agree. Also, you get everything that was built around k8s for deploying and managing applications. I’m not at that big a scale yet, but it’s crossing into the point where manual management isn’t feasible. And importantly, most of my workloads are already easily compatible with a containerized, distributed architecture. It took about an hour for me to configure Apache Pulsar and ClickHouse on a k3s cluster.
🪿
the secret is that kubernetes is popular because it pays massive dividends when you’re a large org running containers across thousands of nodes. most people don’t have these problems.
+
I may have drunk too much of the cloud juice:
- Proxmox is not ideal for large-scale deployments with cattle
- K8s is complex but the complexity pays off 100x
- Route 53 just works
- CF Workers is good, sometimes
- R2 and B2 are the best object storage
- Hetzner is great for a lot of stuff
- EC2 as usual is a scam
I’ve yet to try fine-tuning or LoRA myself either. looks dead simple with python and pytorch 👀
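For anyone curious, a minimal LoRA sketch with the Hugging Face transformers + peft stack (an assumed choice, not something anyone here said they use) looks roughly like this; the base model name, rank, and target modules are illustrative placeholders:

```python
# Minimal LoRA fine-tuning sketch (assumed stack: transformers + peft).
# Base model and hyperparameters below are placeholders, not recommendations.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "bigcode/starcoder2-3b"  # hypothetical code model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

config = LoraConfig(
    r=8,                                   # low-rank adapter dimension
    lora_alpha=16,                         # adapter scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the small adapter weights train

# From here a normal pytorch/transformers training loop (e.g. Trainer)
# over your own repositories tunes just the LoRA adapters.
```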
You only have to create the embeddings once, though (until you want to update with newer data), but I’m not sure how well RAG works with code predictions
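A sketch of the “create the embeddings once” part, assuming the sentence-transformers package and brute-force cosine search (any vector store would work equally well; the model name and documents are placeholders):

```python
# Sketch: embed a corpus once, persist it, then retrieve by cosine similarity.
# Assumes sentence-transformers; model and docs are illustrative placeholders.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# One-time step (re-run only when the data changes): embed and save.
docs = ["def parse_config(path): ...", "class RelayProxy: ..."]
doc_vecs = model.encode(docs, normalize_embeddings=True)
np.save("doc_vecs.npy", doc_vecs)

# Query time: embed the query, take the nearest documents as LLM context.
query_vec = model.encode(["how do I parse the config?"], normalize_embeddings=True)
scores = doc_vecs @ query_vec[0]      # cosine similarity (vectors are normalized)
top = np.argsort(scores)[::-1][:2]    # indices of the best matches
context = [docs[i] for i in top]      # prepended to the prompt
```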
yeah I mainly do RAG, haven't done much LoRA stuff yet. Curious how well it would work.
“For most developers, starting with a RAG-based system or a managed code customization service is the most practical way to get code in your own style from an LLM, especially with a large number of repositories. Fine-tuning is best if you need even deeper alignment and have the resources to support it”
i want all code it generates to have the same taste as me without needing to load examples into its context every time
rag?
