Alternating the GPUs each layer is on didn’t fix it, but it did produce an interesting result! It took longer to OOM. The memory started increasing on GPU 0, then 1, then 2, …, until it eventually wrapped back around and OOMed. This means memory is accumulating as the forward pass goes on: with each layer, more memory is allocated and not freed. This could happen if we’re saving activations or gradients. Let’s try wrapping the forward pass in torch.no_grad and setting requires_grad=False even for the LoRA weights.
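Here's a minimal sketch of what that looks like. The `model` and `batch` names are placeholders for whatever LoRA-patched model and input we're actually running; the point is just that every parameter gets frozen and the whole forward pass runs under torch.no_grad so autograd never builds a graph.

```python
import torch

def run_forward_without_grads(model, batch):
    # Freeze everything, including the LoRA weights, so autograd has no
    # reason to hold onto anything for a backward pass.
    for param in model.parameters():
        param.requires_grad_(False)

    # torch.no_grad() disables graph construction for this block, so each
    # layer's intermediate activations can be freed as soon as it finishes.
    with torch.no_grad():
        output = model(batch)
    return output
```

If memory stops accumulating layer by layer after this, that points at saved activations/gradients as the culprit.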