setting up the environment by loading the conda environment at Thu Nov 6 20:18:30 CST 2025
running the batched olmo categorization job at Thu Nov 6 20:18:30 CST 2025
[nltk_data] Downloading package punkt_tab to
[nltk_data] /home/nws8519/nltk_data...
[nltk_data] Package punkt_tab is already up-to-date!
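The [nltk_data] lines above come from a tokenizer-data check at startup; a minimal sketch of that step (assuming the script calls nltk directly, which the messages suggest) is:

    import nltk

    # Fetch the Punkt sentence-tokenizer tables; when the package already
    # exists under ~/nltk_data this only prints the "up-to-date" notice.
    nltk.download("punkt_tab")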
cuda
NVIDIA H100 80GB HBM3
_CudaDeviceProperties(name='NVIDIA H100 80GB HBM3', major=9, minor=0, total_memory=81090MB, multi_processor_count=132, uuid=7544a27f-9039-df5d-70c9-e91e278a50f6, L2_cache_size=50MB)
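The three device lines above are the usual output of a PyTorch device probe; a hedged sketch of the calls that would produce them:

    import torch

    # Pick the compute device and, when CUDA is available, print the GPU's
    # name and properties (the _CudaDeviceProperties line above).
    device = "cuda" if torch.cuda.is_available() else "cpu"
    print(device)
    if device == "cuda":
        print(torch.cuda.get_device_name(0))
        print(torch.cuda.get_device_properties(0))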
Loading checkpoint shards: 0%| | 0/6 [00:00<?, ?it/s]
Loading checkpoint shards: 17%|█▋ | 1/6 [00:09<00:46, 9.40s/it]
Loading checkpoint shards: 33%|███▎ | 2/6 [00:18<00:36, 9.17s/it]
Loading checkpoint shards: 50%|█████ | 3/6 [00:27<00:27, 9.16s/it]
Loading checkpoint shards: 67%|██████▋ | 4/6 [00:36<00:18, 9.06s/it]
Loading checkpoint shards: 83%|████████▎ | 5/6 [00:45<00:09, 9.17s/it]
Loading checkpoint shards: 100%|██████████| 6/6 [00:51<00:00, 7.94s/it]
Loading checkpoint shards: 100%|██████████| 6/6 [00:51<00:00, 8.56s/it]
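The "Loading checkpoint shards" progress bars are printed by Hugging Face transformers while it materializes a sharded checkpoint. A minimal loading sketch follows; the exact OLMo checkpoint id is not named in this log, so the one below is an assumption:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_ID = "allenai/OLMo-2-1124-7B"  # assumption: the log does not name the checkpoint

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # from_pretrained streams each weight shard, producing the
    # "Loading checkpoint shards: k/6" bars seen above.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16).to("cuda")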
Asking to truncate to max_length but no maximum length is provided and the model has no predefined maximum length. Default to no truncation.
This is a friendly reminder - the current text generation call will exceed the model's predefined maximum length (4096). Depending on the model, you may observe exceptions, performance degradation, or nothing at all.
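Both warnings above are standard transformers messages: the first fires when the tokenizer is asked to truncate without an explicit max_length (and the model config defines none), the second when a generation call would run past the model's 4096-token context. Continuing the loading sketch above, a hedged example of the usual fix, with prompt as a hypothetical stand-in for one categorization input:

    # tokenizer and model come from the loading sketch above; prompt is hypothetical.
    prompt = "..."

    # An explicit max_length silences the truncation warning; reserving room for
    # max_new_tokens keeps input + output inside the 4096-token window.
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True,
                       max_length=4096 - 256).to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=256)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))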
unsupervised batched olmo categorization pau at Fri Nov 7 14:05:28 CST 2025