Changhyeon Nam
At SKTelecom, I build GPU platforms for LLM training and serving, covering the full stack from infrastructure to application.
I developed LLM pretraining and finetuning pipelines on Kubernetes and Slurm. On the inference side, I focus on disaggregated serving architectures and KV cache tiering, optimizing performance across the system (Kubernetes, NVIDIA Dynamo, etc.).
Previously, I worked as an ML Engineer Intern at NAVER Clova and KIST, building recommender system models with Transformer- and RL-based approaches as well as matrix factorization methods such as MF and ALS.
March 20, 2026 · Hello World