nostr relay proxy

nostr:nprofile1qqs8d3c64cayj8canmky0jap0c3fekjpzwsthdhx4cthd4my8c5u47spzamhxue69uhhyetvv9ujumn0wvh8xmmrd9skctcpr4mhxue69uhkummnw3ez6ur4vgh8wetvd3hhyer9wghxuet5wf4zwp nostr:nprofile1qqsgydql3q4ka27d9wnlrmus4tvkrnc8ftc4h8h5fgyln54gl0a7dgspzemhxue69uhhyetvv9ujuurjd9kkzmpwdejhgqg4waehxw309aex2mrp0yhx6mmnw3ezuur4vgm3l90s can we take a call tomorrow or sometime next week?
They say there are just too many delicious things in the world haha 🤣
🤔
Other things: if you remove the IPv6 external address of a node in a dual-stack config, that node goes into a perpetual crash loop. And if you use the Hetzner CCM and it is not set to dual-stack, it will take your entire cluster down.
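For reference, a minimal sketch of what a dual-stack K3s server config looks like; the addresses below are placeholders, not from the original post, and the Hetzner CCM's own dual-stack settings are omitted. The crash loop described above comes from dropping the IPv6 half of node-external-ip after the node has joined:

```yaml
# /etc/rancher/k3s/config.yaml -- illustrative dual-stack server config
cluster-cidr: "10.42.0.0/16,2001:cafe:42::/56"
service-cidr: "10.43.0.0/16,2001:cafe:43::/112"
# removing the IPv6 address from this list on a live dual-stack node is
# reportedly what triggers the perpetual crash loop
node-external-ip: "203.0.113.10,2001:db8::10"
```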
I hit an edge case running with only external IPs under the Hetzner CCM as well. Somehow the NodeHosts config in the integrated CoreDNS was not properly set by K3s, because it wanted only internal IPs; none of my nodes had internal IPs, so CoreDNS hung until I manually created the config entry.
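K3s's bundled CoreDNS reads node-name-to-IP mappings from a NodeHosts key in the coredns ConfigMap in kube-system, so the manual fix is roughly the following (node names and IPs here are hypothetical; K3s normally maintains this entry itself):

```sh
# write hosts-file-style entries into the NodeHosts key that K3s left empty
kubectl -n kube-system patch configmap coredns --type merge \
  -p '{"data":{"NodeHosts":"203.0.113.10 node-1\n203.0.113.11 node-2\n"}}'
```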
I somehow managed to kill the etcd instance and cause the cluster to permanently fail because I restarted masters too many times in a test deployment. I am honestly reconsidering my plan to use etcd for a few services
k3s does have etcd for HA, if you want to suffer
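For what it's worth, killing etcd this way is usually a quorum problem: with three servers you can only lose one, so bouncing two masters at once can wedge the cluster permanently. K3s's embedded etcd ships snapshot and reset commands that make a test cluster recoverable; a sketch, with the snapshot path being illustrative of the default location:

```sh
# take a snapshot before poking at the control plane
k3s etcd-snapshot save --name pre-restart

# if quorum is lost, stop k3s on all servers, then reset one server to a
# single-member cluster from the snapshot (k3s writes snapshots under
# /var/lib/rancher/k3s/server/db/snapshots/ by default; filename includes
# node name and timestamp)
k3s server --cluster-reset \
  --cluster-reset-restore-path=/var/lib/rancher/k3s/server/db/snapshots/pre-restart-<node>-<timestamp>
```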
🪿
🪿
I miss my old workflow: deploying a new app was as simple as copying a dir in a repo, replacing the image and ingress.host, and committing/pushing.
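For context, a hedged sketch of what such a per-app dir might contain; the app name, image, and host are placeholders, and only the two marked fields change between copies:

```yaml
# apps/myapp/manifests.yaml -- hypothetical per-app dir contents
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels: { app: myapp }
  template:
    metadata:
      labels: { app: myapp }
    spec:
      containers:
        - name: myapp
          image: ghcr.io/example/myapp:v1    # <- swap this per app
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector: { app: myapp }
  ports:
    - port: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
spec:
  rules:
    - host: myapp.example.com                # <- and this
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port: { number: 8080 }
```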
I much prefer configuring some minimal YAML and letting the system deal with scheduling containers. Throw an operator like FluxCD in there, point it at a repo of YAML, and the ops workflow is quite nice. I just hate debugging when k8s itself is unhappy: even with k3s, where there's no etcd, I've still had mTLS certs expire (why?), ultimately locking me out. At this point my personal ops have regressed to systemd units and shell scripts.
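Pointing Flux at a repo of YAML is a single bootstrap command; a sketch assuming a personal GitHub repo, with owner, repo, and path all hypothetical:

```sh
# installs the Flux controllers into the cluster and sets them to reconcile
# everything under clusters/home in the given repo
flux bootstrap github \
  --owner=my-github-user \
  --repository=fleet \
  --branch=main \
  --path=clusters/home \
  --personal
```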
I agree. Also, you get everything that was built around k8s for deploying and managing applications. I'm not at that big of a scale yet, but it's crossing into the point where manual management isn't feasible. And importantly, most of my workloads are already easily compatible with a containerized, distributed architecture. It took about an hour for me to get Apache Pulsar and ClickHouse configured on a k3s cluster.
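The post doesn't say which charts were used; assuming the official Apache Pulsar Helm chart and the Bitnami ClickHouse chart, that hour of setup looks roughly like this:

```sh
# Pulsar from the official chart repo
helm repo add apache https://pulsar.apache.org/charts
helm install pulsar apache/pulsar -n pulsar --create-namespace

# ClickHouse via the Bitnami OCI chart (one of several options)
helm install clickhouse oci://registry-1.docker.io/bitnamicharts/clickhouse \
  -n clickhouse --create-namespace
```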
🪿
the secret is that kubernetes is popular because it pays massive dividends when you're a large org running containers across thousands of nodes. most people don't have those problems.
+
I may have drunk too much of the cloud juice:
- Proxmox is not ideal for large-scale deployments with cattle
- K8s is complex, but the complexity pays off 100x
- Route 53 just works
- CF Workers is good, sometimes
- R2 and B2 are the best object storage
- Hetzner is great for a lot of stuff
- EC2, as usual, is a scam
I've yet to try fine-tuning or LoRA myself either. Looks dead simple with Python and PyTorch 👀
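It really is close to dead simple in plain PyTorch. A minimal LoRA sketch, with rank, scaling, and dimensions illustrative rather than from any particular recipe: freeze the pretrained weight W and learn a low-rank update, so the effective weight is W + (alpha/r) * B @ A.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen nn.Linear with a trainable low-rank update."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                  # pretrained weights stay frozen
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero-init: update starts as a no-op
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # base output plus the scaled low-rank correction
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# usage: wrap an existing layer; only A and B receive gradients
layer = LoRALinear(nn.Linear(512, 512))
out = layer(torch.randn(4, 512))
```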
