A speedrun on consumer grade cards? #29
Comments
A suggestion: to reduce memory, you could run with a lower sequence length.
I think so, thanks :) Just wondering whether there will be a speedrun like the current great one but focused on RTX 4090 time, since many more people have consumer-grade cards than H100s.
I think a Google Colab speedrun would also be awesome.
That looks interesting! (But I guess it may be too hard to get it running in acceptable times...)
Realistically, a single-card speedrun would need a smaller model too; otherwise it's too slow to experiment. Thinking:
The sequence length during training has been a variable factor in the last speedrun; for evaluation, is it fine if it's the whole document clamped to a maximum window size of 1024?
It seems the H100 has ~2000 TFLOPS for bf16 tensor cores, while the 4090 is about 330 TFLOPS. Thus 8xH100 for 5 minutes ≈ 1x4090 for 4 hours, which doesn't look bad! The major problem is that the memory is only 24 GB... so we may not be able to do some optimizations.
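The scaling estimate above can be sketched as a quick back-of-envelope calculation. The TFLOPS figures are the approximate peak numbers quoted in this thread, and the result assumes perfect FLOPS-proportional scaling (it ignores memory bandwidth and utilization differences, so treat it as a rough lower bound):

```python
# Back-of-envelope: scale the 8xH100 run time to a single RTX 4090
# purely by bf16 tensor-core peak TFLOPS (approximate figures).
H100_TFLOPS = 2000    # per-card bf16 peak, approximate
RTX4090_TFLOPS = 330  # bf16 peak, approximate

h100_cards = 8
h100_minutes = 5

total_tflop_minutes = h100_cards * H100_TFLOPS * h100_minutes
rtx4090_minutes = total_tflop_minutes / RTX4090_TFLOPS
print(f"Estimated 1x4090 time: {rtx4090_minutes / 60:.1f} hours")
```

This lands at roughly 4 hours, matching the estimate in the comment.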
4h feels quite high for a speedrun. Too hard to test ideas, no? Working on some memory optimizations now, should help a lot...
Surely faster would be great! But if impossible, then 4h is better than nothing :(
The A100 has 40 GB; the cost is about $10 for ~12 hours, with about the same TFLOPS as the 4090.
If we optimize for cost, the 4090 is much cheaper per hour than the A100 while having the same TFLOPS. So as long as we manage to fit in 24 GB, maybe we can scale down the cost further.
A note: the current cost per run on an 8xH100 is about $1.90 (since it's about $3/hr for SXM H100s). Personally, when I don't feel like spending that much, I go back to speedrunning CIFAR-10. But I understand that might not be so interesting to everyone.
Looks like the 4090 is about $0.30/hr, so 4 hr = $1.20, which is a bit cheaper. Moreover, some people already own 4090s at home (e.g. many people in r/LocalLlama, me, etc.), while far fewer people own an A100/H100 at home, and buying one works out much cheaper than the cloud.
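The cost comparison in this exchange can be checked with the hourly rates quoted in the thread. These rates are rough marketplace figures quoted by the commenters, not authoritative pricing:

```python
# Rough per-run cost comparison using the rates quoted in this thread.
# The thread quotes ~$1.90 for the 8xH100 run; at exactly $3/hr the
# back-of-envelope figure comes out to $2.00.
h100_rate = 3.0       # $/hr per SXM H100 (approximate rental rate)
h100_hours = 5 / 60   # 5-minute run
cost_8xh100 = 8 * h100_rate * h100_hours

rtx4090_rate = 0.30   # $/hr (approximate rental rate)
rtx4090_hours = 4.0   # estimated single-card run time
cost_1x4090 = rtx4090_rate * rtx4090_hours

print(f"8xH100: ${cost_8xh100:.2f}, 1x4090: ${cost_1x4090:.2f}")
```

So at these rates a 4-hour 4090 run is somewhat cheaper per run, and a 2-hour run (as achieved later in the thread) about 3x cheaper.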
I'm also interested in this variant. Considering the long runtime, perhaps it makes sense to compete to minimize validation loss within a 1-hour run?
I would guess that halving the sequence length (and going to batch size 16) would allow fitting the run into 24 GB of memory without impacting performance very much. Or quartering it, if that still doesn't fit.
I achieved < 3.28 in a little under two hours with a few tweaks. @KellerJordan are you interested in hosting a 1x4090 variant of the competition in this repo? If so, I'll submit a PR for it.
@lapp0 That looks great: $0.30/hr × 2 hr = $0.60, which is about 3x cheaper than $1.90 (8xH100). Looking forward to your code!
I'd prefer to allow experimentation on 4090s, but still time the final speedrun on 8xH100, as it is now. I'm happy to help with timing for 4090 runs that look promising. That way, the benchmark doesn't encourage techniques that are specific to 1x4090 (e.g., using a much smaller batch size).
Ah, without the smaller batch size I get 2 hours 10 minutes. https://gist.github.com/lapp0/2740a03a637ec926cf0eea90e541a0a6 The only changes necessary for a 130-minute run that is effectively identical to the 8xH100 run are:
You can improve the runtime to 90 minutes, while deviating from the 8xH100 learning dynamics, by setting batch_size to 1 and tweaking the learning rates. I'll submit a PR for this setting as an experimentation tool; however, I'm really interested in a 4090 competition variant. Please let me know if someone is interested in hosting; otherwise I might create a fork myself :)
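One standard way to shrink per-step memory on a single card without changing the effective batch size (and thus without the 1x4090-specific learning-dynamics drift mentioned above) is gradient accumulation. A minimal sketch follows; `model`, `opt`, `get_batch`, and `loss_fn` are placeholder names, not identifiers from this repo:

```python
# Sketch: emulate a large effective batch on one 24 GB card by
# accumulating gradients over several micro-batches before each
# optimizer step. All names here are hypothetical placeholders.
import torch

accum_steps = 8  # micro-batches per optimizer step (illustrative)

def train_step(model, opt, get_batch, loss_fn):
    opt.zero_grad(set_to_none=True)
    for _ in range(accum_steps):
        x, y = get_batch()
        # Divide so the accumulated gradient averages over micro-batches.
        loss = loss_fn(model(x), y) / accum_steps
        loss.backward()  # gradients sum into .grad across micro-batches
    opt.step()
```

The trade-off is wall-clock time: each optimizer step now runs `accum_steps` forward/backward passes sequentially, which is part of why the single-card runs above are hours rather than minutes.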
I am also quite interested in it, and happy to host it and see it being improved :)
@KellerJordan where do you rent your 8xH100s?
Hey, can you please point me to whether two different runs, one with bsz=8 and another with bsz=16, are still comparable in terms of the number of tokens seen during training, everything else being fixed?
This is true if you adjust the sequence length correspondingly, e.g. these runs are all equivalent: H100:
4090:
The only caveat is that for each sequence in the batch, one sample is split. This occurs twice as frequently when the sequence length is halved.
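The equivalence being described is that tokens seen per step is the product batch_size × seq_len, so doubling one while halving the other changes nothing else. A small illustrative check (the concrete numbers are hypothetical; the thread only fixes the product):

```python
# Illustrative check: tokens per step is batch_size * seq_len, so a
# doubled batch with halved sequences sees the same number of tokens.
configs = {
    "bsz8": {"batch_size": 8, "seq_len": 2048},    # hypothetical H100-style
    "bsz16": {"batch_size": 16, "seq_len": 1024},  # hypothetical 4090-style
}
tokens_per_step = {
    name: c["batch_size"] * c["seq_len"] for name, c in configs.items()
}
print(tokens_per_step)
assert len(set(tokens_per_step.values())) == 1  # all configs equivalent
```

The caveat above still applies: with halved sequences, documents get split across sequence boundaries twice as often.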
@KellerJordan Have you thought about doing something for ImageNet-1K, to quickly run the same kind of checks you are doing here, but for vision?
Hi, thanks for the great repo! I would appreciate it if there could be a speedrun on consumer cards, e.g. the RTX 4090. Since the model is 125M params, the RTX 4090's 24 GB of memory should fit it trained the classical way, and thus it is trainable.
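The claim that 125M params fits in 24 GB can be sanity-checked with a back-of-envelope estimate. The sketch below assumes a conventional mixed-precision Adam setup (bf16 weights and gradients, fp32 master weights and two fp32 moment buffers); activation memory is excluded since it depends on batch size and sequence length:

```python
# Back-of-envelope memory estimate for a 125M-parameter model trained
# with Adam in mixed precision. Activations are excluded.
params = 125e6
# bf16 weights + bf16 grads + fp32 master + fp32 Adam m + fp32 Adam v
bytes_per_param = 2 + 2 + 4 + 4 + 4
model_gb = params * bytes_per_param / 1e9
print(f"~{model_gb:.1f} GB for weights/grads/optimizer state")
```

That leaves most of the 24 GB for activations, which is why the discussion above centers on sequence length and batch size rather than the model itself.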