This page collects info about events CRIU takes part in.
<startFeed/>
== KubeCon EU 2026 ==

'''24-26 March, 2026, Amsterdam, Netherlands'''

Ctrl-X, Ctrl-V Your Pods: WG Checkpoint Restore in Kubernetes

[https://sched.co/2CW7Z Optimizing Error Recovery for Cost-Efficient Distributed AI Model Training with Kubeflow]
== FOSDEM 2026 ==
[[Image:Fosdem.png|left|100px|link=]]

'''February 1, 2026, Brussels, Belgium'''

[https://fosdem.org/2026/schedule/event/fosdem-2026-8264-investigating-security-incidents-with-forensic-snapshots-in-kubernetes Investigating Security Incidents with Forensic Snapshots in Kubernetes]
== Linux Plumbers Conference 2025 ==

'''December 12, 2025, Tokyo, Japan, Containers and Checkpoint/Restore Microconference'''

Guarded Control Stack on arm64: Challenges in Enabling Shadow Stack Support for CRIU

Optimizing Checkpoints with Built-in Memory Page Compression
== CANOPIE-HPC @ Supercomputing 2025 ==

'''November 17, 2025, St. Louis, MO'''

[https://sc25.conference-program.com/presentation/?id=ws_canopie105&sess=sess199 Engine-Agnostic Model Hot-Swapping for Cost-Effective LLM Inference]
<endFeed/>
== See also ==