
Deployment - microservices for Kubernetes

By Trentend Tricky @tetricky
    2021-11-08 21:30:56.489Z

    My intention really was to introduce myself, and say some things about forum software, and nice things about ty. There is perhaps not an appropriate place to do that. So probably I shouldn't.

    I do think, for modern forward looking multi-use forum/integrated with services (which I have reason to believe ty is, and is amongst the nice things I might have said), that a scalable deployment method - by which I mean k8s, would be both useful, and lead to more widespread adoption. Which might in turn lead to a business model allowing for more resilience in the development team. Which I see as a potential weakness - though observe that almost a decade of development on this has led to a superior product that has outlasted many other promising projects, and seen off many commercial product offerings in the field.

    Some background. I have run a small community online since the late nineties. Ostensibly a specific special interest community, it has evolved into a wider support and collaboration community. Based entirely around a forum (various software, latterly SMF and now elkarte - purely to get timely maintenance updates).

    Originally I ruled out talkyard as our next step, in part due to complexity of deployment (and possibly maintenance). But having tested every other viable option, alongside integrating other tools to provide the collaboration we require (things like seafile, xmpp, hedgedoc, Hugo - with drone and gitea for gitops static blog updates), the conclusion I reached is that I can solve many problems by deploying talkyard.

    I have considered creating a collaborative environment around xmpp/mov.im and other tools... but having run a community for a long time appreciate the easily accessible multi-timezone static forum. I started back in the old text BBS days. Old habits die hard.

    I am aware of the previous creation of a helm chart... but I am going to try something a bit more root and branch - stripping out things like postgres and redis, and seeing if I can have multiple replicas scaling on load while working on an HA-backed RWX storage backend (longhorn).

    I probably don't have an up-to-date coding background, but I have done some mucking about with k8s from a deployment perspective. I use everything from single-node k3s up to a multi-master HA cluster (3 masters, five nodes), with a kilo CNI and >5TB of storage.

    Deploying to this wide range of server targets is really where my interest lies (this will be in spare time, so not quick, if at all).

    Happy to discuss aspects of this or my experience/opinions. Equally happy to just see how I get on. I'm not expecting any assistance.

    Great software. Talkyard gives compelling reasons to stay with the forum format that no other offering has the scope for. It matches the key objectives that I have. Although not best of breed in every aspect, it's substantially ahead of any competitor, overall.


    1. Hello Tetricky!

      Happy to discuss aspects of this

      Ok :- )   About microservices: Be prepared for Talkyard adding & needing more containers, "suddenly".

      For example, some time "soon" (maybe next year) there'll likely be a Docker container running Deno and compiling CommonMark to HTML. (Maybe you can think of this as a microservice, although the underlying motivations are different.)

      And then it's good for you if, somehow, the Kubernetes configuration you have can easily (automatically?) be updated to add these new containers.

      Also, sometimes there're security problems, and it'd be good (I think) if the Kubernetes stuff could somehow auto apply security fixes.

      With Docker-Compose, Talkyard will some time next year be able to not only upgrade itself, but also automatically download new images (that weren't present at all before, in any version), as needed. It'll do this by getting a new docker-compose.yml file with new image version tags (or hashes), and then any new Docker images and containers therein would automatically get downloaded and started, by Docker-Compose.

      I don't know if such a thing is possible with Kubernetes?

      latterly SMF and now elkarte - purely to get timely maintenance updates

      Hadn't heard about Elkarte. (SMF and Elkarte have automatic software upgrades?)

      k8s, would be both useful, and lead to more widespread adoption

      Not sure about more widespread adoption :- ) When reading at HackerNews, I'm getting the impression that K8s takes a while to learn, and for most organizations, just installing Docker-Compose, or maybe installing nothing, using systemd instead, is simpler.

      multi-master HA cluster (3 masters, five nodes)

      To me that sounds like many. You run these in VPS:es in AWS or Google Cloud or sth like that? Or on bare metal, ... maybe on-premise at a workplace?

      I have run a small community online

      I'm getting curious about what it is about. If you want, you could share a link?

      mov.im, is that https://github.com/movim/movim?

      this will be in spare time, so not quick, if at all

      Ok, maybe that's a bit good — there'll be a migration to Postgres 14, some time next year. And if you're slow enough, you won't need to do the migration :- )

      1. Trentend Tricky @tetricky
          2021-11-10 22:53:33.671Z

          For example, some time "soon" (maybe next year) there'll likely be a Docker container running Deno and compiling CommonMark to HTML. (Maybe you can think of this as a microservice, although the underlying motivations are different.)

          Deployment can be done through a number of methods, but ultimately it comes down to a deployment manifest in yaml (which can be packaged with a tool like helm, which provides values - again in yaml - that define the deployment). There is essentially a sliding scale of modularisation.

          1. At the macro end you might have an image that is packaged with all of the services to run an application within it. So this might have a postgresql service, a redis service, etc., providing a full installation. This image is somewhat inflexible - because all of the services required for the application are within one image. You can't independently scale, or easily re-use components elsewhere, because they are contained within one container. There is little gain to be had in this sort of scheme, beyond being able to deploy the application to this sort of environment, along with other workloads - though the separation this provides has some minor advantages (you may have an application that needs alpine, something that runs on ubuntu 20.04, and so on).

          2. You might have, on the other hand, an application image, a separate postgresql image, a redis image, etc., used to create containers, where these can communicate with each other using service ports, or indeed by writing to persistent storage with mount points that can be accessed by the different containers. You might only have one application container, postgresql installed as an HA cluster across multiple servers, a resilient HA storage backend. When the load on the application requires further replicas to meet demand this can then auto-scale, independently of the other components, and vice versa (a rough manifest sketch follows this list). I am envisaging that this sort of thing might be the sort of scheme that talkyard may fit into (notwithstanding that my level of understanding of the moving parts and inter-dependencies at this stage is almost non-existent).

          3. It is possible to write true low level microservices at the function level. Here you might use a framework like openfaas ( https://www.openfaas.com/ ). With this you can code scalable functions that can form the application and services. I am not proposing this level of abstraction, it's not practical or desirable (you would reasonably want to retain your existing deployment options, and it would be idiotic levels of work for every point update). In this regard my use of 'microservices' in the title is wrong. Services might be better, as some might be quite 'big'.
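
          To make scheme 2 concrete, a rough, minimal sketch of the kind of manifest involved might look like this - the image name, port and database variable are invented placeholders for illustration, not talkyard's actual ones:

          ```yaml
          # Hypothetical example: one app Deployment talking to a separate postgres Service.
          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: talkyard-app
            namespace: talkyard
          spec:
            replicas: 1                     # can be scaled independently of the database
            selector:
              matchLabels:
                app: talkyard-app
            template:
              metadata:
                labels:
                  app: talkyard-app
              spec:
                containers:
                  - name: app
                    image: talkyard/app:latest          # made-up image name
                    ports:
                      - containerPort: 9000             # made-up port
                    env:
                      - name: DATABASE_HOST             # made-up variable name
                        value: postgres.talkyard.svc.cluster.local
          ---
          # The database is a separate Service (backed by its own StatefulSet, or an external cluster).
          apiVersion: v1
          kind: Service
          metadata:
            name: postgres
            namespace: talkyard
          spec:
            selector:
              app: postgres
            ports:
              - port: 5432
          ```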

          And then it's good for you if, somehow, the Kubernetes configuration you have can easily (automatically?) be updated to add these new containers.

          Also, sometimes there're security problems, and it'd be good (I think) if the Kubernetes stuff could somehow auto apply security fixes.

          Depending on how you define the deployment, you can control various levels of updated images. This requires the images to be built first, pushed to a hub, and correctly tagged. There are then various ways of triggering a new image pull, depending on the deployment manifest and how upgrades are triggered - from fully automatic/periodic to manual. If we consider an image "talkyard", then each build (on a per release basis) would be tagged. An image can be tagged "latest", or a release series (e.g. something like "5"), or a point release ("5.4"). If you build and push a new image (say "5.5") then a deployment using images tagged "latest" would pull the new image, as would one tagged "5", but a deployment tagged "5.4" would not. If you then built and pushed an image for a new series "6.0", then only a deployment using the "latest" tag would upgrade the image.

          So in this way you can manage avoiding upgrades with breaking changes, but still achieve maintenance updates (point upgrades).
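
          As a sketch (the image name is made up), the choice of tag sits on the container spec in the manifest:

          ```yaml
          # Illustrative only: "talkyard/app" is an invented image name.
          apiVersion: v1
          kind: Pod
          metadata:
            name: tag-pinning-example
          spec:
            containers:
              - name: app
                # image: talkyard/app:latest  # follows every release, including a breaking "6.0"
                # image: talkyard/app:5       # follows the 5.x series: picks up "5.5", skips "6.0"
                image: talkyard/app:5.4       # fully pinned: never changes until the manifest is edited
                imagePullPolicy: Always       # re-check the registry whenever the pod is (re)created
          ```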

          When going from one release series to the next, you might have to do things like upgrade database schema, or some such. In a container environment you should achieve this by pushing a new image - which might include a test for (say) the current status and run an upgrade script if necessary. It may be necessary to scale the application containers down to one replica, then upgrade, pulling a new image (so that multiple containers are not trying to update the same data, which may go badly).
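
          One way of expressing the "only one container during the upgrade" idea, as a sketch (names invented), is a single replica with a Recreate strategy, so the old pod is gone before the new-image pod starts:

          ```yaml
          # Sketch: at most one app pod while a schema migration runs.
          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: talkyard-app            # hypothetical name
          spec:
            replicas: 1                   # single writer while migrating
            strategy:
              type: Recreate              # stop the old pod before starting the new-image pod
            selector:
              matchLabels:
                app: talkyard-app
            template:
              metadata:
                labels:
                  app: talkyard-app
              spec:
                containers:
                  - name: app
                    image: talkyard/app:6.0   # hypothetical new-series image
          ```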

          Where the structure of the required containers changes on a new release, there might be a new deployment manifest (or helm chart) that reflects this new structure, and tests for and runs an upgrade on the data to reflect the new required structure.

          It is possible to run commands/processes from an image, but this is bad practice. If the image itself changes, unless it is pushed as a new image, and the new image used, then the changes are not persistent and may cause problems.

          With Docker-Compose, Talkyard will some time next year be able to not only upgrade itself, but also automatically download new images (that weren't present at all before, in any version), as needed. It'll do this by getting a new docker-compose.yml file with new image version tags (or hashes), and then any new Docker images and containers therein would automatically get downloaded and started, by Docker-Compose.

          I don't know if such a thing is possible with Kubernetes?

          Yes. A deployment manifest in kubernetes can handle images in the same way as docker-compose. It can also do lots of other things like create namespaces, define services, ports and ingress (how a deployment is seen from outside the cluster), manage certificates, and things like auth access to services. Generally kubernetes is harder to set up as an infrastructure, but it handles HA, scaling of services, migrating workloads, and all the moving and changing of things, in much less problematic ways than docker, once you progress past single instances. I still use (for historical reasons, no new deployments), in places, docker (mostly with docker-compose), and I did some work with docker-swarm for a short while, but my broad observation was that k3s was not much harder than docker-swarm, and ultimately offered much better scale out options. I am currently looking at moving my docker stuff to podman, which is becoming increasingly competent.
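
          For example, exposing the application outside the cluster is just another piece of yaml - a rough sketch (the host, service name and issuer are placeholders, and it assumes an ingress controller and cert-manager are installed):

          ```yaml
          apiVersion: networking.k8s.io/v1
          kind: Ingress
          metadata:
            name: talkyard
            namespace: talkyard
            annotations:
              cert-manager.io/cluster-issuer: letsencrypt   # assumes cert-manager is present
          spec:
            tls:
              - hosts: [forum.example.org]                  # placeholder hostname
                secretName: forum-example-org-tls
            rules:
              - host: forum.example.org
                http:
                  paths:
                    - path: /
                      pathType: Prefix
                      backend:
                        service:
                          name: talkyard-web                # placeholder service name
                          port:
                            number: 80
          ```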

          Docker (podman) offers some ease of use, but against that you have (for example) things like longhorn ( https://longhorn.io/ ), which provides a replicated cross-cluster HA storage backend with backups to s3 - taking resilience and backup from an application consideration to the infrastructure level - and multi-master clusters, where infrastructure upgrades can be managed without downtime.

          Hadn't heard about Elkarte. (SMF and Elkarte have automatic software upgrades?)

          No, they don't. SMF appeared to be dying, and elkarte was a temporary measure to get some maintenance and functional updates not available in a timely fashion through SMF. It's a development of SMF and was easy to import my existing forum to. I was considering flarum for a period, when it was long in beta, but I don't much like its stack (PHP, MySQL). Talkyard, functionally and technologically, feels a much more attractive option.

          Not sure about more widespread adoption :- ) When reading at HackerNews, I'm getting the impression that K8s takes a while to learn, and for most organizations, just installing Docker-Compose, or maybe installing nothing, using systemd instead, is simpler.

          k8s definitely takes a lot more learning. k3s less so. Much new deployment is to cloud managed k8s platforms (all the hosting providers have an offering). There are offerings like civo ( https://www.civo.com/ ) starting to enter the space (a k3s based managed budget cloud hosting platform). I have managed applications and services on bare metal since the nineties, latterly VMs, and increasingly containerized workloads. Right now I am building an infrastructure; very soon most people won't. They will just run a deployment on a managed provider's service (by clicking an icon in a catalogue), which will run a helm-style install, and present them with an application or service ready to go (this already exists - look at things like the rancher catalogue). My view would be that things are increasingly moving towards containerized, easy-to-deploy, managed services. Managing, maintaining and upgrading bare metal servers less so. This is definitely 'harder' and with an overhead versus the simplicity of a single server... but such is the way of the world.

          To me that sounds like many. You run these in VPS:es in AWS or Google Cloud or sth like that? Or on bare metal, ... maybe on-premise at a workplace?

          That particular cluster is in a scaleway/dedibox datacentre in Paris (although I picked up the servers through a value reseller). It's individual servers (not in the same subnet) running with a wireguard CNI control plane to link them together ( https://github.com/squat/kilo ). I also have bare metal clusters in various places, and even have some single node k3s 'clusters' running services in customers' premises (losing almost all of the scale advantages of kubernetes - but critically allowing exactly the same deployment as larger clusters).

          I'm getting curious about what it is about. If you want, you could share a link?

          https://talkback.trentend.co.uk

          mov.im, is that https://github.com/movim/movim?

          Correct. I like xmpp. Not everyone feels the same.

          Ok, maybe that's a bit good — there'll be a migration to Postgres 14, some time next year. And if you're slow enough, you won't need to do the migration :- )

          Because of the separation offered by containerization, we can install different versions of postgres in different namespaces, and different versions of the application in different namespaces, and we can choose which service by referring the application to the database service that we require (a yaml svc declaration)...so this is not a problem.

          ..but yes, I'm very slow (other commitments) and it will be 14 by then.
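
          As a rough sketch of what I mean by a yaml svc declaration (namespace and service names invented), the application can keep using one service name while an alias decides which postgres version/namespace it actually reaches:

          ```yaml
          # The app keeps resolving "postgres"; this alias points it at the pg14 namespace.
          apiVersion: v1
          kind: Service
          metadata:
            name: postgres                # the name the application is configured with
            namespace: talkyard
          spec:
            type: ExternalName
            externalName: postgres.pg14.svc.cluster.local   # the "real" service, in a pg14 namespace
          ```

          Switching the application from one postgres version to another is then a one-line change to externalName, plus restarting the app pods.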

          Maybe a bit odd, but things like that can be good for me to know, so feel free to share your thoughts, if you want :- ) both positive and negative

          I very much like the level of facility for community development - forum, questions, blog comments. I like the use of markdown, and the resizeable editing window. I like the modern interface and the collapsible side windows.

          I have typed a reply, previewed it, and then clicked elsewhere (chat, I think), and not been able to obviously return to my draft (okay, I created a new reply, copy and pasted from the preview, re-formatted it as formatting options were lost, then posted the reply... then I refreshed the page and it offered me the chance to resume editing the previous draft. Now I know: refresh restores the editing option if I have done something else), which is there tantalisingly in preview, but not obviously available to edit. I might prefer an auto-scroll (particularly for mobile) when reaching the bottom of a forum topic/category list. I'm not entirely sure, at this point, whether the admin options and configuration options are as full as I might like (the naming of levels, and things like promotion of users, for example).

          On the whole, I find talkyard good looking, simple to navigate, pleasant to use, and very fully featured, and it fits my intended use case well.

          1. Hi! Sorry for the late reply. I'm adding a new feature that makes it simpler to reply to long replies :- )   o.O

            Namely an editor layout where the editor is to the left, in its own "column", from top to bottom. And the text I'm replying to, to the right. Then I can see almost all of the text I'm replying to (or almost all of the preview).

            1. In reply to tetricky:

              (Sorry for the late reply)

              deployment manifest in yaml

              Ok, to me they seem a bit like Docker-Compose config files, but each container gets its own manifest file.

              And I noticed there're image version numbers (of course), e.g. image: nginx:1.14.2.

              1. At the macro end ... little gain to be had in this sort of scheme

              Yes, I think so too. I don't remember anyone having ever asked about getting all containers bundled together into one "mega container". And it takes longer to download new versions as well I think (since, for a small change in one service, layers for other different services might have to be downloaded again, if they happen to be sorted after that service)

              1. ... an application image, a separate postgresql image ... writing to persistent storage ... postgresql installed as a HA cluster

              Yes, this is the road Talkyard is walking along

              1. ... low level microservices ... openfaas

              Didn't know about OpenFaaS — a few things in Talkyard might indeed be well suited for "serverless" functions, e.g. converting CommonMark to HTML, or maybe custom plugins, or sanitizing HTML. Then it can be a bit hard to guess upfront what might need to scale up a lot, compared to other things. And then just placing all such "stand-alone mini tasks" in their own functions, might let those mini-tasks that need to scale, do so, without much work. Whilst at the same time, if the serverless functions have max run time limits, say max 10 ms — in the above-mentioned cases, that might even be good. E.g. a plugin function that didn't finish within 10 ms maybe should be considered buggy.

              each build (on a per release basis) would be tagged. an image can be tagged "latest", or a release series (eg something like "5" ) ... "5.4" ... "5.5" ... "6.0" [...] can manage avoiding upgrades with breaking changes, but still achieve maintenance updates

              Yes, it seems such version numbers, i.e. semantic versioning, is required by Helm:

              https://helm.sh/docs/topics/charts/#charts-and-versioning

              Talkyard currently uses calendar versioning — which I'm thinking is better suited for something like Talkyard, in that it's a platform, with different components, each one could have its own version number. Just like in an operating system — e.g. Ubuntu 20.04, 20.10, 21.04 etc, and there're many smaller components therein, with their own different versions. — Currently Talkyard has version tags like v2021.123 where 123 is a release number counter for that year (2021).

              upgrade database schema

              Yes, currently Talkyard uses https://flywaydb.org for this — it's been working great. Also looking at Diesel (written in Rust), which is more fully open source (e.g. it includes down migrations, which in Flyway cost money).

              It works as follows: The application server container stops, it gets a new (upgraded) image. Then it starts — and this image includes the database migrations, and, when starting, the app server itself applies the migrations (and saves the state in a special database table).

              scale the application containers down to one replica, then upgrade, pulling a new image (so that multiple containers are not trying to update the same data, which may go badly).

              That's interesting — never thought about scaling down, when upgrading. Currently there's just 1 app server container anyway. Another alternative / thing-to-also-do, could be to put the app servers in read-only mode, whilst the database is being upgraded.

              a new deployment manifest (or helm chart) that reflects this new structure, and tests for and runs an upgrade on the data

              Is that a K8s pod "init container"? Which runs before the "real" containers in the upgraded pod start — whilst an old version's pod might still be up & running, handling traffic?

              automatically get downloaded and started
              Yes [...]

              Hmm, what I meant was, without any human intervention. Currently Talkyard runs a cron job each day, which looks for new images, downloads and starts them. But in k8s, where would such a cron job run? (or something comparable)

              k3s was not much harder than docker-swarm

              That's interesting, sounds good

              looking at moving my docker stuff to podman

              I've been thinking about Podman and Systemd (instead of Docker and Docker-Compose), as a default "light weight" installation for Ty.

              Longhorn

              Hadn't heard about it, sounds interesting. Seems to me it's for self hosted K8s (?).

              Civo

              I had a look, and, wow: "All instance types include a generous data transfer [...] additional data transfer it’s charged at $0.01/GB" — that's nice. In GCE or AWS, egress bandwidth can cost dangerously much. I've even added some Nginx Lua code, to try to prevent that from happening (!).

              I've been thinking that the Ty customers could choose themselves if they want low budget hosting, or more expensive & more stable hosting. — Maybe Civo some day could power Ty's low budget price plans, hmm. (Or some other similar thing.) Also, there's https://fly.io which looks nice too I think (more expensive bandwidth though).

              running using a wireguard CNI control plane to link them together ... Kilo

              Kilo https://github.com/squat/kilo sounds nice — I wonder, was it hard to configure all that? What about IP addresses — if a HTTP web server pod is in Amazon US East-1, and other pods in GCE EU West, then, ... let's say AWS US East-1 goes offline, then, I'm thinking the traffic would still get sent to the now offline pod? Hmm, maybe Kilo is more for backend services? Or also for frontends, with more fancy IP routing? (so traffic gets re-routed to EU West?)

              https://talkback.trentend.co.uk

              Oh, I had a look, makes me think about:

              How to run a small social network site for your friends (runyourown.social)
              https://news.ycombinator.com/item?id=29470565

              Seems in a way you're already doing this :- )   (using sth else than Mastodon)

              can install different versions of postgres in different namespaces, and different versions of the application in different namespaces,

              Hmm, yes, and the old namespace could be read-only, during the migration, I suppose. So, there wouldn't have to be any downtime, just some read-only time.

              1. Trentend Tricky @tetricky
                  2021-12-13 14:55:09.667Z

                  (no problem)

                  Ok, to me they seem a bit like Docker-Compose config files, but each container gets its own manifest file.

                  Directly comparable to docker-compose manifests. One manifest can define multiple containers (and which images they are based on), the namespaces that they reside in, the services that they access and how the ingress is defined. Or you could write a single manifest, for a single container, that runs in the same namespace as other containers (which interact through ports and/or shared storage). You could manage these individual manifests individually, or all as one deployment manifest (and potentially trigger updates automatically if linked to something like a git repository - see later). It's very flexible.

                  Services/applications within the same namespace can 'see' each other. You can write a service manifest to allow containers within that namespace to see other namespaces.

                  Helm is a way of using variables, in a values file, that define the deployment (so it automates the deployment, to a degree, through an auto-generated manifest - there are defaults that can be overridden in defined ways).

                  So, you might have the same helm file, that (depending on how the values are defined) could either run a postgresql server image within the namespace as part of the application deployment, or access an existing (say HA) postgresql server cluster located somewhere else.

                  You might have a default values file that causes helm to create a manifest that deploys a single application ('talkyard') container (based on a defined image), with both a postgresql and redis container within the deployment, and uses default storage with RWO and only scales to one replica (because you don't know how the default storage works for that environment - it might be nfs, it might be local storage on a single node that the application itself must reside on).

                  Altering the values file might cause the same helm chart to create a manifest that uses a clustered HA postgresql (within that k8s cluster, or maybe even somewhere else), a specified storageclass that supports distributed RWX access, and thus allows the application ('talkyard') to have up to 'n' replicas (scale up).
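
                  A hypothetical values.yaml for such a chart (all key names invented, just to illustrate the shape) might expose exactly those choices:

                  ```yaml
                  # Invented values, not a real talkyard chart.
                  app:
                    image: talkyard/app           # made-up image repository
                    tag: "5.4"
                    replicas: 1                   # raise only if the storage below supports RWX

                  postgresql:
                    internal: true                # false = use externalHost instead of a bundled container
                    externalHost: ""              # e.g. pg-ha.db.svc.cluster.local

                  redis:
                    internal: true

                  storage:
                    storageClass: ""              # "" = cluster default (often RWO only)
                    accessMode: ReadWriteOnce     # ReadWriteMany (e.g. longhorn RWX) allows replicas > 1
                    size: 10Gi
                  ```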

                  Updating the helm chart, when applied using the same values file, can manage container/image updates.

                  Exactly the same deployment can be achieved by writing a deployment manifest (which may be more complex from a user perspective).

                  You could define a container image ('talkyard') that was an image (defined by 'dockerfile') that bundled application, database, cache, etc all into the same image, but you would lose the scale out and platform specific flexibility.

                  Didn't know about OpenFaaS — a few things in Talkyard might indeed be well suited for "serverless" functions, e.g. converting CommonMark to HTML, or maybe custom plugins, or sanitizing HTML.

                  The problem with microservices is that the functions called have to be available in every deployment scenario (bare metal server, docker/podman, k8s), in a consistent manner. Unless you are writing an application from the ground up, I would caution against this. It seems to me that it would be much easier to call functions from the application container ('talkyard'), but allow that and the underlying services (database, cache, storage) to be scalable. A half way house.

                  The reason I say this is that the same code could then be deployed easily to every platform - it would just be the environment packaging that changes (an install.sh script for server, a docker-compose for docker/podman, a helm chart for k8s).

                  Yes, it seems such version numbers, i.e. semantic versioning, is required by Helm:

                  I'm not really qualified to have an intelligent informed opinion on how a project should be versioned. But there's no reason you can't have a separate image project "talkyard-container(docker?)" that manages tagged (auto-build?) builds, or releases, with conforming versioning. Also true for a helm chart based on the images. Images don't have to follow master branch releases, and might lag them for stability and packaging reasons.

                  It works as follows: The application server container stops, it gets a new (upgraded) image. Then it starts — and this image includes the database migrations, and, when starting, the app server itself applies the migrations (and saves the state in a special database table).

                  If I were you, I'd retain this scheme. When a container starts, it tests for a required migration, performs it (or not) as required (as part of init), then runs the app. Scaling up replicas adds containers of the same image, which perform the same tests, but the tests yield no database migration required (because an equivalent version level container has already performed the migration). I suggested the scaling down to 1 replica prior to upgrade (enter the board into maintenance mode?), so that multiple concurrent migrations don't clusterfuck the database. It depends on the app, and how the migration works. It might be possible.

                  Down version as well as up? Not impossible, but I would imagine complicated. I'm out of my depth here. (The previous helm chart/images would need advance notice of how to strip out features, and adjust the schema, of the next version upgrade. Which hasn't been written yet. Time machine required?) This might limit what you can do in future versions, to maintain up and downward compatibility, and may not be worth it. Alternatively: duplicate the database, spin up a new version in a new namespace/server. Test/deploy. Provide a script to add new version contributions to a previous version database. Revert adding new content to the old version database if you want to downversion? I'm just spitballing.

                  Is that a K8s pod "init container"? Which runs before the "real" containers in the upgraded pod, starts — whilst an old versions pod might still be up & running, handling traffic

                  Could be. I think I'd be tempted to just include the code in the new version image, and restrict image update to no further than the next major version (where the schema might change in understood ways). Should be a simple test that's quickly skipped past (once the initial migration is complete), and makes the image and deployment multi-scenario (upgrade, run, scale up additional replicas).

                  Hmm, what I meant was, without any human intervention. Currently Talkyard runs a cron job each day, which looks for new images, downloads and starts them. But in k8s, where would such a cron job run? (or something comparable)

                  You can run cron jobs in k8s (for example you might use them to trigger something like a borgbackup container to execute a user defined backup scheme, rclone a copy to s3, and push notify the result by xmpp - I use this example, because I do exactly this). You could also use something like argoCD (https://argoproj.github.io/cd/) to auto-run updates (could be helm, could be a manifest) from a git repository. So you increment the image version number in git, argo monitors the changes, and auto-updates.
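
                  A k8s CronJob for that kind of scheduled task looks roughly like this (the backup image and bucket are placeholders, not things that actually exist):

                  ```yaml
                  apiVersion: batch/v1
                  kind: CronJob
                  metadata:
                    name: nightly-backup
                    namespace: talkyard
                  spec:
                    schedule: "0 3 * * *"                # every night at 03:00
                    jobTemplate:
                      spec:
                        template:
                          spec:
                            restartPolicy: OnFailure
                            containers:
                              - name: backup
                                image: example/borg-rclone-backup:latest   # made-up image
                                env:
                                  - name: RCLONE_REMOTE
                                    value: s3:my-backup-bucket             # placeholder bucket
                  ```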

                  The advantage of the argoCD approach is that I could link to your repository, and so auto-update when you issue a release... or I could use my own repository and control when I do updates, after testing.
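
                  The argoCD side of that is roughly an Application object pointing at a git repository (the repo URL and path are placeholders):

                  ```yaml
                  apiVersion: argoproj.io/v1alpha1
                  kind: Application
                  metadata:
                    name: talkyard
                    namespace: argocd
                  spec:
                    project: default
                    source:
                      repoURL: https://github.com/example/talkyard-deploy   # placeholder repo
                      targetRevision: main
                      path: chart                                           # helm chart or plain manifests
                    destination:
                      server: https://kubernetes.default.svc
                      namespace: talkyard
                    syncPolicy:
                      automated:
                        prune: true
                        selfHeal: true          # bump a tag in git, argo rolls it out
                  ```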

                  I've been thinking about Podman and Systemd (instead of Docker and Docker-Compose), as a default "light weight" installation for Ty.

                  My personal confidence in docker is decreasing, my confidence in podman is increasing. It's reaching a level of maturity and feature completion. I believe it is less subject to licensing and support whims, and less exposed to commercial vulnerability. I've been running some test servers, with podman, and a cockpit portal for easy maintenance and visualisation. It's lean, efficient, and does what I want for a single standalone server. It builds containers okay. I think me and docker are finished, once I get rid of all my legacy cruft.

                  Podman also has a 'pod' construct, where multiple containers can exist in a 'pod' - broadly equivalent to a 'namespace'. It can also be used to automatically create a kubernetes deployment manifest - so test in podman, deploy to k8s.

                  Longhorn
                  Hadn't heard about it, sounds interesting. Seems to me it's for self hosted K8s (?).

                  Can be. Can also be used by a service provider. If I make a cluster I have to make provision for storage, loadbalancing, databases... whereas a service provider might offer these as services that your applications can access, and pay for in a different way to the compute for k8s. They might put these services in place using similar, or identical, tools to what you can use in your own cluster. It comes down to balancing cost and management.

                  I've been thinking that the Ty customers could choose themselves if they want low budget hosting, or more expensive & more stable hosting. — Maybe Civo some day could power Ty's low budget price plans, hmm. (Or some other similar thing.) Also, there's https://fly.io which looks nice too I think (more expensive bandwidth though).

                  I could see that you might offer talkyard customers managed k8s based hosting. The backend infrastructure might depend on plan level. Deploying a new instance would simply use helm with a template. Access would be through a simple service manifest linking to the customer's FQDN.

                  Kilo https://github.com/squat/kilo sounds nice — I wonder, was it hard to configure all that? What about IP addresses — if a HTTP web server pod, is in Amazon US East-1, and other pods in GCE EU West, then, ... let's say AWS US East-1 goes offline, then, I'm thinking the traffic would still get sent to the now offline pod? Hmm, maybe Kilo is more for backend services? Or also for frontends, with more fancy IP rounting? (so traffic gets re-routed to EU West?)

                  Kilo is just the CNI. So I have distributed servers, making up a k3s multi-master cluster. Intra-cluster traffic being encrypted over wireguard using kilo. All the normal cluster operations work (but slower, because the servers are not on the same subnet). I can use a loadbalancer (metallb) to expose a service on any of the nodes (the applications don't necessarily run on that exposed node). In addition I can, for example, spin up a vps (or indeed another cluster) somewhere, connect it with wireguard to the cluster, and access services on the cluster. So the cluster I have, for example, could do HA postgresql, and simple vps could run an application (or just a web-proxy) accessing the cluster.

                  It's not 'high end' performance, but it is low budget.
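
                  Exposing a service through metallb is then just a normal LoadBalancer service, roughly like this (names are placeholders, and it assumes a metallb address pool is already configured):

                  ```yaml
                  apiVersion: v1
                  kind: Service
                  metadata:
                    name: talkyard-web            # placeholder
                    namespace: talkyard
                  spec:
                    type: LoadBalancer            # metallb assigns an external IP from its pool
                    selector:
                      app: talkyard-web
                    ports:
                      - name: https
                        port: 443
                        targetPort: 443
                  ```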

                  Hmm, yes, and the old namespace could be read-only, during the migration, I suppose. So, there wouldn't have to be any downtime, just some read-only time.

                  This should be entirely possible.

                • In reply to tetricky:

                  I very much like the ... and the collapsible side windows.

                  Aha :- ) Then, if you open the editor, try the "Place left" button. Then you'll get even more "collapsible" side things. & simpler to reply to long replies. — "Soon" the edits preview, too, will be visible, side by side with the editor and the comment one is replying to. (But that'd be for somewhat wide screens, >= 1900 px maybe.)

                  not been able to obviously return to my draft

                  Oh, this has slightly annoyed me as well. Maybe the Show preview could change to Go back to the post you're replying to, if one navigates away.

                  copy and pasted from the preview ... posted the reply ... refreshed the page and it offered me the chance to resume editing the previous draft

                  I think there are some bugs / unintended behavior, related to this. — If one navigates away, or refreshes the page, Ty auto-saves a draft.

                  the admin options and configuration options are as full as I might like (the naming of levels, and things like promotion of users

                  Hmm yes, promotion of users hasn't been totally thought through yet. Probably some time there'll be an API, so the promotion of users can even be handled by an external service, which maybe analyzes people's past behavior and could be a lot "smarter" than just "now s/he has X likes, promote". But there'll be some make-sense built-in rules too. Probably they'd take into account how many Like votes and Unwanted votes someone has gotten, compared to how many people have read his/her posts, rather than looking at absolute numbers.

                  1. Trentend Tricky @tetricky
                      2021-12-13 15:09:02.440Z

                      Aha :- ) Then, if you open the editor, try the "Place left" button. Then you'll get even more "collapsible" side things. & simpler to reply to long replies. — "Soon" the edits preview, too, will be visible, side by side with the editor and the comment one is replying to. (But that'd be for somewhat wide screens, >= 1900 px maybe.)

                      I like what has happened here. Following the instruction I have an editing pane, and a discussion pane, that I can easily work with.

                      Pressing "show preview" renders a live preview in position in the thread.

                      This works for me. I don't think I would like another preview pane. I don't need it, and it would reduce the clarity of what I am working on.

                      If I navigate away from the thread (for example to check something, or copy and paste something from elsewhere), I have to manually navigate back to the thread I am replying to, in order to get back to the thread/reply.

                      I would prefer a button on the editor that navigated me back to the thread and my reply, more than an additional form of preview. (The "show preview" button taking me back there would be ideal, but currently I do not believe that it does.)

                      If that were possible.

                • In reply to tetricky:

                  say [...] nice things about ty [ ... ] not best of breed in every aspect

                  Maybe a bit odd, but things like that can be good for me to know, so feel free to share your thoughts, if you want :- ) both positive and negative

                  Then I'll get some feedback about what to do more of, and what to do less of / things-to-fix/improve. Even if maybe there's not much time right now, it can still be something to keep in the back of my (and others') mind(s), in the future.

                  1. Trentend Tricky @tetricky
                      2021-11-11 07:36:46.044Z

                      Some user level things, to meet modern expectations of interacting with a forum/discussion. For example: push notifications of new posts, swipe to refresh on Android, auto-scroll when reaching the bottom of the topic list.

                      1. Ok, thanks :- )   (I'll reply to the other comment above later this week)