Evaluate eBPF and how it can be applied (or not) #956
Comments
Hi,
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed due to inactivity.
It's sad that it got closed without any response. I'm really interested in this: using TCP/UDP traffic to scale my services to zero when no customer traffic is detected.
@JorTurFer can we reopen this?
That's already possible with the current approach, without eBPF.
Sure!
I can't find any documentation on scaling TCP or UDP workloads with the current approach; I only see HTTP workloads using an ingress.
Sorry, I couldn't find time to improve upon my POC. I will try to share some more results in a few weeks.
True, I misunderstood the requirement :/ sorry
Proposal
Currently we deploy the interceptor instances as containers, which works well, but eBPF could improve performance by working at the kernel level instead of the application level.
Using eBPF we should be able to collect metrics about the traffic, so technically we should be able to gather all the metrics we need for scaling, but we still have to check whether we can "hold" requests during cold starts.
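As a rough illustration of the metrics half (only a sketch, nothing this project has implemented), an XDP program in C could count incoming TCP packets per destination port in a BPF map, and a userspace agent could then turn those counters into scaling metrics. All names here (`pkt_count`, `count_tcp_packets`) are hypothetical:

```c
// Hypothetical sketch: count incoming TCP packets per destination port.
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/in.h>
#include <linux/ip.h>
#include <linux/tcp.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 1024);
    __type(key, __u16);   /* destination port, host byte order */
    __type(value, __u64); /* packet count */
} pkt_count SEC(".maps");

SEC("xdp")
int count_tcp_packets(struct xdp_md *ctx)
{
    void *data     = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    /* Bounds checks are required by the verifier before each access. */
    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end)
        return XDP_PASS;
    if (eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_PASS;

    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end)
        return XDP_PASS;
    if (ip->protocol != IPPROTO_TCP)
        return XDP_PASS;

    struct tcphdr *tcp = (void *)ip + ip->ihl * 4;
    if ((void *)(tcp + 1) > data_end)
        return XDP_PASS;

    __u16 port = bpf_ntohs(tcp->dest);
    __u64 init = 1;
    __u64 *count = bpf_map_lookup_elem(&pkt_count, &port);
    if (count)
        __sync_fetch_and_add(count, 1);
    else
        bpf_map_update_elem(&pkt_count, &port, &init, BPF_NOEXIST);

    return XDP_PASS; /* observe only: never drop or delay traffic here */
}

char LICENSE[] SEC("license") = "GPL";
```

Note that this program only observes and always returns `XDP_PASS`. Holding a request during a cold start is a separate problem: XDP can drop or redirect packets, but it cannot buffer a TCP connection the way the interceptor does at the application level, which is exactly the open question above.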
Actually, this issue is more a place to collect thoughts and plan next steps in this direction than a real issue or feature request. We should define the next steps from here.
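For completeness, a matching userspace sketch (again hypothetical, assuming the program above is compiled to `tcp_count.bpf.o` and linked against libbpf) that attaches the XDP program and periodically polls the per-port counters:

```c
// Hypothetical userspace reader: loads and attaches the XDP object above,
// then polls the per-port counters that a scaler could turn into metrics.
#include <stdio.h>
#include <unistd.h>
#include <bpf/libbpf.h>
#include <bpf/bpf.h>

int main(void)
{
    struct bpf_object *obj = bpf_object__open_file("tcp_count.bpf.o", NULL);
    if (!obj || bpf_object__load(obj)) {
        fprintf(stderr, "failed to open/load BPF object\n");
        return 1;
    }

    struct bpf_program *prog =
        bpf_object__find_program_by_name(obj, "count_tcp_packets");
    /* ifindex 2 is a placeholder; look it up with if_nametoindex(). */
    struct bpf_link *link = bpf_program__attach_xdp(prog, 2);
    if (!link) {
        fprintf(stderr, "failed to attach XDP program\n");
        return 1;
    }

    int map_fd = bpf_object__find_map_fd_by_name(obj, "pkt_count");
    for (;;) {
        /* Walk every observed port and print its packet count. */
        __u16 key, next_key, *prev = NULL;
        while (bpf_map_get_next_key(map_fd, prev, &next_key) == 0) {
            __u64 count = 0;
            bpf_map_lookup_elem(map_fd, &next_key, &count);
            printf("port %u: %llu packets\n", next_key,
                   (unsigned long long)count);
            key = next_key;
            prev = &key;
        }
        sleep(5);
    }
}
```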