A speedrun on consumer grade cards? #29

Open
fzyzcjy opened this issue Nov 22, 2024 · 24 comments · May be fixed by #38

fzyzcjy commented Nov 22, 2024

Hi, thanks for the great repo! I would appreciate a speedrun on consumer cards, e.g. the RTX 4090. Since the model is only 125M params, it should fit in the RTX 4090's 24GB of memory when trained the classical way, so it should be trainable there.

@KellerJordan (Owner)

A suggestion: to reduce memory, you could run with a lower sequence length.

fzyzcjy commented Nov 25, 2024

I think so, thanks :) Just wondering whether there will be a speedrun like the current great one but focused on the RTX 4090's time, because many more people have consumer-grade cards than H100s.

naoro commented Nov 25, 2024

I think a Google Colab speedrun would also be awesome -
that would greatly commoditize research and experimentation.

fzyzcjy commented Nov 25, 2024

That looks interesting! (But I guess it may be too hard to get it to run in an acceptable time...)

alexjc commented Nov 25, 2024

Realistically, a single-card speedrun would need a smaller model too, otherwise it's too slow to experiment with.

Thinking:

n_layer = 8
n_embd = 512

The sequence length during training has been a variable factor in the last speedrun; for evaluation, is it fine to use the whole document, clamped to a maximum window size of 1024?
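
For concreteness, a minimal sketch of the smaller configuration being floated here, using nanoGPT-style field names (the exact config dataclass in `train_gpt2.py` may differ, and `n_head` below is an assumption, not something specified above):

```python
from dataclasses import dataclass

@dataclass
class SmallGPTConfig:
    vocab_size: int = 50304  # padded GPT-2 vocab, as in nanoGPT-style repos
    n_layer: int = 8         # down from 12 layers in the 124M model
    n_head: int = 8          # assumed head count; not specified above
    n_embd: int = 512        # down from 768

config = SmallGPTConfig()    # roughly 25M non-embedding params instead of ~85M
```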

fzyzcjy commented Nov 25, 2024

It seems the H100 has ~2000 TFLOPS of bf16 tensor-core compute, while the 4090 has about 330 TFLOPS. Thus 8xH100 for 5 minutes ≈ 1x4090 for about 4 hours, which is not bad!

The major problem is that the memory is only 24GB... so we may not be able to use some of the optimizations.
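
A quick back-of-the-envelope check of that estimate (peak bf16 tensor-core figures as quoted above; real-world utilization will of course differ):

```python
# Peak bf16 tensor-core throughput quoted in this thread (TFLOPS).
h100_tflops = 2000
rtx4090_tflops = 330

# Throughput ratio of 8x H100 to a single 4090.
speedup = 8 * h100_tflops / rtx4090_tflops              # ~48x

# A ~5-minute 8xH100 run then maps to roughly:
minutes_on_4090 = 5 * speedup
print(f"~{minutes_on_4090 / 60:.1f} hours on one 4090")  # ~4.0 hours
```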

alexjc commented Nov 25, 2024

4h feels quite high for a speed run. Too hard to test ideas, no?

Working on some memory optimizations now, should help a lot...

fzyzcjy commented Nov 25, 2024

Faster would surely be great! But if that's impossible, then 4h is better than nothing :(

naoro commented Nov 25, 2024

> It seems the H100 has ~2000 TFLOPS of bf16 tensor-core compute, while the 4090 has about 330 TFLOPS. Thus 8xH100 for 5 minutes ≈ 1x4090 for about 4 hours, which is not bad!
>
> The major problem is that the memory is only 24GB... so we may not be able to use some of the optimizations.

An A100 has 40GB, and costs about $10 for ~12 hours, with about the same TFLOPS as the 4090.
So about $3.3 per run. Not bad, I'd think.

fzyzcjy commented Nov 25, 2024

If we optimize for cost, the 4090 is much cheaper per hour than the A100 while having the same TFLOPS. So as long as we manage to fit in 24GB, maybe we can scale the cost down further.

@KellerJordan (Owner)

A note: The current cost per run on an 8xH100 is about $1.90 (since it's about $3/hr for SXM H100s)

Personally, when I don't feel like spending that much, I go back to speedrunning CIFAR-10. But I understand that might not be so interesting to everyone

fzyzcjy commented Nov 26, 2024

Looks like a 4090 is about $0.3/hr, so 4hr = $1.2, which is a bit cheaper. Moreover, many people already own 4090s at home (e.g. many people on r/LocalLlama, me, etc.), while far fewer people buy A100s/H100s for home use, and running on a card you already own is much cheaper than renting from a cloud.
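
Putting the prices quoted so far side by side (all rough, market-dependent figures from this thread, not authoritative pricing):

```python
# Approximate cost per run, using the prices mentioned in this thread.
cost_8xh100 = (5 / 60) * 8 * 3.0    # ~5-minute run at ~$3/hr per H100   -> ~$2.0
cost_a100   = 4 * (10 / 12)         # ~4-hour run at ~$10 per ~12 hours  -> ~$3.3
cost_4090   = 4 * 0.30              # ~4-hour run at ~$0.30/hr           -> ~$1.2
print(cost_8xh100, cost_a100, cost_4090)
```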

lapp0 commented Nov 26, 2024

I'm also interested in this variant.

Considering the long runtime, perhaps it makes sense to compete on minimizing validation loss within a 1-hour run?

KellerJordan commented Nov 26, 2024

I would guess that halving the sequence length (and going to batch size 16) will allow fitting the run into 24GB of memory without impacting performance very much. Or quartering it, if that still doesn't fit.
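
A sketch of what that change looks like, assuming the `Hyperparameters` layout referenced later in this thread (where `batch_size` counts sequences per optimizer step, so tokens per step stay the same):

```python
from dataclasses import dataclass

@dataclass
class Hyperparameters:
    # 8xH100 defaults: batch_size = 8, sequence_length = 64 * 1024.
    batch_size: int = 16               # doubled...
    sequence_length: int = 32 * 1024   # ...while the sequence length is halved

# Tokens per step are unchanged: 8 * 65536 == 16 * 32768 == 524288.
```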

lapp0 commented Nov 26, 2024

I achieved a val loss < 3.28 in a little under two hours with a few tweaks.

@KellerJordan are you interested in hosting a 1x4090 variant of the competition in this repo? If so, I'll submit a PR for `4090/train_gpt2_4090.py` and `4090/run_4090.sh` and update the readme.

fzyzcjy commented Nov 26, 2024

@lapp0 That looks great - $0.3/hr x 2hr = $0.6, which is about 3x cheaper than $1.9 (8xH100). Looking forward to your code!

@KellerJordan (Owner)

I'd prefer to allow experimentation on 4090s, but still time the final speedrun on 8xH100, like it is now. I'm happy to help with timing for 4090 runs that look promising. That way, the benchmark doesn't encourage techniques which are specific to 1x4090 (e.g., using a much smaller batch size).

lapp0 commented Nov 29, 2024

Ah, without the smaller batch size I get 2 hours 10 minutes.

https://gist.github.com/lapp0/2740a03a637ec926cf0eea90e541a0a6

The only changes necessary for a 130-minute run that is effectively identical to the 8xH100 setup are (see the sketch just after this list for the attention change):

  • `batch_size: int = 16`
  • `sequence_length: int = 32 * 1024`
  • `y = flex_attention(..., kernel_options={"BLOCK_M": 64, "BLOCK_N": 64, "BLOCK_M1": 32, "BLOCK_N1": 64, "BLOCK_M2": 64, "BLOCK_N2": 32})`
  • update `run.sh` with `--nproc_per_node=1`
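
A minimal sketch of where that `kernel_options` override lands (the tensor shapes and the `block_mask` are assumed to come from the surrounding attention code in `train_gpt2.py`; the smaller tile sizes just shrink the FlexAttention kernel's workspace so it fits a 4090 rather than an H100):

```python
from torch.nn.attention.flex_attention import flex_attention

# Smaller Triton tile sizes than the defaults tuned for H100-class GPUs,
# so the fused kernel stays within the 4090's shared-memory budget.
CONSUMER_KERNEL_OPTIONS = {
    "BLOCK_M": 64, "BLOCK_N": 64,
    "BLOCK_M1": 32, "BLOCK_N1": 64,
    "BLOCK_M2": 64, "BLOCK_N2": 32,
}

def attention(q, k, v, block_mask):
    # q, k, v: (batch, heads, seq, head_dim) tensors; block_mask comes from
    # the document-masking logic elsewhere in the training script.
    return flex_attention(q, k, v, block_mask=block_mask,
                          kernel_options=CONSUMER_KERNEL_OPTIONS)
```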

You can improve runtime to 90 minutes while deviating from 8xH100 learning dynamics by setting batch_size to 1, and tweaking the learning rates.

I'll submit a PR for this setting as an experimentation tool; however, I'm really interested in a 4090 competition variant. Please let me know if someone is interested in hosting, otherwise I might create a fork myself :)

fzyzcjy commented Nov 29, 2024

> however I'm really interested in a 4090 competition variant. Please let me know if someone is interested in hosting, otherwise I might create a fork myself :)

I am also quite interested in it, and happy to host it and see it improved :)

lapp0 linked a pull request Nov 29, 2024 that will close this issue

banyan-god commented Nov 30, 2024

514e8a53-74e0-4d77-a61e-53a416f3ec3a.txt
I was able to reproduce it on 4x 4090 in ~31 minutes:
`step:1750/1750 val_loss:3.2783 train_time:1889410ms step_avg:1085.87ms`

@banyan-god

> A note: The current cost per run on an 8xH100 is about $1.90 (since it's about $3/hr for SXM H100s)
>
> Personally, when I don't feel like spending that much, I go back to speedrunning CIFAR-10. But I understand that might not be so interesting to everyone

@KellerJordan where do you rent your 8xH100s?

@LakshyAAAgrawal

> Ah, without the smaller batch size I get 2 hours 10 minutes.
>
> https://gist.github.com/lapp0/2740a03a637ec926cf0eea90e541a0a6
>
> The only changes necessary for a 130-minute run that is effectively identical to the 8xH100 setup are:
>
> * `batch_size: int = 16`
> * `sequence_length: int = 32 * 1024`
> * `y = flex_attention(..., kernel_options={"BLOCK_M": 64, "BLOCK_N": 64, "BLOCK_M1": 32, "BLOCK_N1": 64, "BLOCK_M2": 64, "BLOCK_N2": 32})`
> * update `run.sh` with `--nproc_per_node=1`
>
> You can improve runtime to 90 minutes while deviating from 8xH100 learning dynamics by setting batch_size to 1 and tweaking the learning rates.
>
> I'll submit a PR for this setting as an experimentation tool; however, I'm really interested in a 4090 competition variant. Please let me know if someone is interested in hosting, otherwise I might create a fork myself :)

Hey, can you point me to whether two different runs, one with bsz=8 and another with bsz=16, are still comparable in terms of the number of tokens seen during training, everything else being fixed?

lapp0 commented Dec 6, 2024

> Hey, can you point me to whether two different runs, one with bsz=8 and another with bsz=16, are still comparable in terms of the number of tokens seen during training, everything else being fixed?

This is true if you adjust the sequence length correspondingly, e.g. these runs are equivalent:

H100:

torchrun --standalone --nproc_per_node=8 train_gpt2.py \
    --train.batch_size 8 --train.sequence_length 65536  # default values

4090:

torchrun --standalone --nproc_per_node=1 train_gpt2.py --gpt.flex_kernel_consumer True \
    --train.batch_size 16 --train.sequence_length 32768

The only caveat is that, for each sequence in the batch, one sample gets split at the sequence boundary. This occurs twice as frequently when the sequence length is halved.
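
As a quick sanity check of that equivalence (a sketch, assuming, as in the commands above, that `batch_size` counts sequences per optimizer step):

```python
# Tokens consumed per optimizer step in each configuration.
h100_tokens_per_step = 8 * 65536       # batch_size 8,  sequence_length 65536
rtx4090_tokens_per_step = 16 * 32768   # batch_size 16, sequence_length 32768

assert h100_tokens_per_step == rtx4090_tokens_per_step == 524_288
```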

@mosamdabhi

> A note: The current cost per run on an 8xH100 is about $1.90 (since it's about $3/hr for SXM H100s)
>
> Personally, when I don't feel like spending that much, I go back to speedrunning CIFAR-10. But I understand that might not be so interesting to everyone

@KellerJordan Have you thought about doing something similar for ImageNet-1K, to run the same kind of quick checks but for vision?
