Add helm charts #357

Closed · 4 of 8 tasks
zoriya opened this issue Mar 25, 2024 · 19 comments · Fixed by #560

Labels: enhancement (New feature or request), tools (A developer tool change)
Milestone: Backlog

Comments

@zoriya (Owner) commented Mar 25, 2024

Feature description

Steps for scaling (pseudo-related)

zoriya added the enhancement and tools labels on Mar 25, 2024
zoriya added this to the Backlog milestone on Mar 25, 2024
@bo0tzz commented Apr 6, 2024

For the immich helm chart, I used the common-library chart from bjw-s as a base, which helped get a chart together much quicker.
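
For reference, pulling that library in is just a Chart.yaml dependency; a minimal sketch (the version pin is hypothetical, check the upstream repo for current releases):

```yaml
# Chart.yaml — minimal sketch of a chart built on the bjw-s common library
# (version pin hypothetical; check https://bjw-s.github.io/helm-charts)
apiVersion: v2
name: kyoo
version: 0.1.0
dependencies:
  - name: common
    version: 3.x.x
    repository: https://bjw-s.github.io/helm-charts
```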

@joryirving commented

Something to consider with the helm chart would be the ability to spawn transcoders as needed: rather than one container doing all the transcoding, you could create one pod per transcode stream.

@onedr0p (Contributor) commented Apr 6, 2024

> Rather than 1 container doing all the transcoding, you could create 1 pod per transcode stream.

That may require a Kubernetes operator, or for the application to at least be Kubernetes-aware and have proper Kubernetes RBAC set up for the app to create transcoding pods. It would be neat but might be outside the scope of this issue.
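
For context, the RBAC side of that is small; a minimal sketch of what the app's ServiceAccount might need, with all names hypothetical:

```yaml
# Minimal RBAC sketch (hypothetical names) letting Kyoo's ServiceAccount
# create and clean up transcoder pods in its own namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: transcoder-spawner
  namespace: kyoo
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create", "get", "list", "watch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: transcoder-spawner
  namespace: kyoo
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: transcoder-spawner
subjects:
  - kind: ServiceAccount
    name: kyoo-transcoder
    namespace: kyoo
```

The harder part is the operator-style control logic, not the permissions.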

@bo0tzz commented Apr 6, 2024

I'm not particularly familiar with HorizontalPodAutoscalers or KEDA, but maybe if one of those were involved, scaling could be done without many changes on Kyoo's side?
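
A resource-based HPA indeed needs nothing app-side beyond CPU requests being set on the pod; a minimal sketch, with the deployment name hypothetical:

```yaml
# HPA sketch scaling a hypothetical transcoder Deployment on CPU usage
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: kyoo-transcoder
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: kyoo-transcoder
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```

The catch is that transcoding sessions are stateful, so scaling down can kill in-flight streams unless requests are pinned to replicas (see the load-balancing discussion further down).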

@zoriya (Owner, Author) commented Apr 6, 2024

I do want to support transcoder scaling and distributed transcodes. I'm not sure what's needed for that; I just finished reading the Kubernetes getting-started guide x)

@onedr0p (Contributor) commented Apr 6, 2024

Feel free to reach out if you need any help, I joined your discord 😄

@atmosx commented Apr 7, 2024

> > Rather than 1 container doing all the transcoding, you could create 1 pod per transcode stream.
>
> That may require a Kubernetes operator, or for the application to at least be Kubernetes-aware and have proper Kubernetes RBAC set up for the app to create transcoding pods. It would be neat but might be outside the scope of this issue.

For HPA an operator is not required. You can set up auto-scaling based on CPU/RAM metrics, which come out of the box with any modern k8s distribution. Auto-scaling on custom metrics (e.g. a combination of transcoding jobs + queue + <another_random_metric>) is possible, but requires something like Prometheus. People running Kubernetes at home (like myself) are usually familiar with this stack (I do this for a living), but I believe it's overkill. Most home deployments will have a dedicated RPi4/5 or some other low-end box running Kyoo (or any media server) with additional storage attached.
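
For the custom-metrics route, KEDA is usually lighter than a full custom-metrics adapter; a sketch assuming a Prometheus install and a hypothetical kyoo_active_transcodes metric:

```yaml
# KEDA ScaledObject sketch (hypothetical names/metric): scale the transcoder
# on the number of active transcodes reported to Prometheus
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: kyoo-transcoder
spec:
  scaleTargetRef:
    name: kyoo-transcoder
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring.svc:9090
        query: sum(kyoo_active_transcodes)
        threshold: "4"
```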

An operator handles way more than autoscaling (e.g. automated DB backups/restore, version upgrades, storage expansion, maintenance tasks, etc.)

@MegaShinySnivy commented

I'd be willing to help test this chart, and by extension Kyoo itself; I have enough users (10ish) to put it through home/small-event use.

@zoriya (Owner, Author) commented Jun 15, 2024

Thanks! I started working on it on the feat/helm branch, but I was focused on something else, so it kinda stalled. An app-template-based helm chart has been made by the folks in the home-assistant Discord server.

I probably won't continue working on the helm chart soon. If someone wants to upstream a helm chart, I'm open to it!

@acelinkio (Contributor) commented

Hey @bo0tzz @onedr0p @joryirving @MegaShinySnivy, I created a helm chart MR in #560. Take a look.

charts/README.md contains an example values file for deploying with no other resources. Let me know if you have any questions or feedback. Microservice helm charts can be a little challenging; the first pass focused on reducing configuration complexity.

@JeWe37 commented Aug 3, 2024

It would be nice if, as part of this, one could also do something federation-esque (as suggested in #477) and split the actual library storage and transcoding across instances, but have it shown as a single library with just a single frontend.

All that would really be needed is the ability to have different folders that the transcoder and scanner read from, and to dispatch to the right transcoder that actually has access.

@bo0tzz commented Aug 3, 2024

I don't really see how #477 relates to the helm chart.

@JeWe37 commented Aug 3, 2024

> Route users to the same transcoder instance if they are requesting a video already being handled by an instance

From this it's a relatively small architectural step towards, essentially, "route users towards the transcoder instance that has the file to be streamed located close to it". It isn't too big a deal to have a slow distributed mount (think MinIO, s3fs, Gluster, etc., or even something simple like sshfs+mergerfs) for scanning, but it is bad for transcoding/streaming purposes.

@zoriya (Owner, Author) commented Aug 3, 2024

> Route users to the same transcoder instance if they are requesting a video already being handled by an instance

This would be needed to support replicas for the transcoder, since using Redis for segment storage would probably introduce too much complexity & latency to make replication worth it.

I have absolutely no idea how to do it tho

@JeWe37 commented Aug 3, 2024

> > Route users to the same transcoder instance if they are requesting a video already being handled by an instance
>
> This would be needed to support replicas for the transcoder, since using Redis for segment storage would probably introduce too much complexity & latency to make replication worth it.
>
> I have absolutely no idea how to do it tho

I mean, if this is all you need, it's just a load-balancer problem. HAProxy can probably do it using the balance uri config option. I couldn't find an equivalent strategy for Traefik.
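
A minimal sketch of the balance uri idea (server names, addresses, and port all hypothetical); hashing the request URI keeps every segment request for a given stream on the same replica:

```
# Hash on the full request URI so all requests for one stream stick
# to the same transcoder replica (names/ports hypothetical)
backend transcoders
    balance uri
    hash-type consistent
    server transcoder1 10.0.0.11:7666 check
    server transcoder2 10.0.0.12:7666 check
```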

One of those (or some other load balancer; I can't say I have a great overview of the options) could probably also be made to do "route users towards the transcoder instance that has the file to be streamed located close to it" somehow, though I'd have to think about how that might work.

EDIT:

If you tell your clients which server they're talking to and force them to provide that as a URL param, you can use balance url_param, I suppose, and that would solve everything.
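
That variant would look something like this (parameter name hypothetical):

```
# Hash on an explicit query parameter supplied by the client,
# e.g. ...?instance=<id> (names/ports hypothetical)
backend transcoders
    balance url_param instance
    hash-type consistent
    server transcoder1 10.0.0.11:7666 check
    server transcoder2 10.0.0.12:7666 check
```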

Also, for Traefik, what might work is to mirror all requests and just ignore them if a given replica isn't responsible. That would probably require some negotiation between replicas as to which one should respond for new transcodes, but some intelligence there might be desirable anyway.
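
Traefik's file-provider config does support mirroring (sketch below; names/ports hypothetical), though note Traefik discards the mirrors' responses and only returns the main service's answer, so replicas would still need some way to take over a stream:

```yaml
# Traefik dynamic-configuration sketch: duplicate every request to a
# second replica; mirror responses are discarded by Traefik
http:
  services:
    transcoders:
      mirroring:
        service: transcoder-1
        mirrors:
          - name: transcoder-2
            percent: 100
    transcoder-1:
      loadBalancer:
        servers:
          - url: http://10.0.0.11:7666
    transcoder-2:
      loadBalancer:
        servers:
          - url: http://10.0.0.12:7666
```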

@zoriya (Owner, Author) commented Aug 4, 2024

balance uri is exactly what I was hoping to have. Thanks!

Negotiation between replicas would indeed be good; it would be used to know when a user switches to another instance so the previous one can clean up.

@JeWe37 commented Aug 4, 2024

I was thinking more of negotiating which instance should take care of a request in the first place. Load might not map to the number of simultaneous streams very well for a small number of streams per instance; think, for instance, a transcode to 4K HEVC vs 480p H264. It would also have the advantage of not pulling in a whole second proxy just for this.

@acelinkio (Contributor) commented

@zoriya, can this ticket be updated like we talked about in #560 (comment)? Some of the related checkboxes here were not relevant: #450 did not need to happen for a helm chart to exist, since Kubernetes relies on the Ingress/Gateway standards for configuring reverse proxies of all kinds.

As for the other conversation, can that please move into #477? It literally has nothing to do with the helm chart. Personally, I think that issue should be closed, as they are asking for a Kyoo cluster/mesh where one node is in their home and another is in the cloud somewhere. I overwhelmingly disagree with any approach that uses infrastructure to solve this; it would be relying on infrastructure to solve what are bespoke application problems.

@zoriya (Owner, Author) commented Aug 4, 2024

I created an issue for each item; I'll keep this list here for future reference. This issue will be closed (and the README updated) when we merge the helm charts!
