
adding batched OLMO results

This commit is contained in:
mgaughan 2025-09-07 11:10:31 -05:00
parent 6de62f2447
commit 99c702fe20
2 changed files with 151694 additions and 64 deletions

File diff suppressed because one or more lines are too long

@@ -1,70 +1,12 @@
setting up the environment by loading in conda environment at Thu Sep 4 18:31:14 CDT 2025
running the batched olmo categorization job at Thu Sep 4 18:31:14 CDT 2025
setting up the environment by loading in conda environment at Fri Sep 5 13:42:49 CDT 2025
running the batched olmo categorization job at Fri Sep 5 13:42:50 CDT 2025
[nltk_data] Downloading package punkt_tab to
[nltk_data] /home/nws8519/nltk_data...
[nltk_data] Package punkt_tab is already up-to-date!
cuda
NVIDIA A100-SXM4-80GB
_CudaDeviceProperties(name='NVIDIA A100-SXM4-80GB', major=8, minor=0, total_memory=81153MB, multi_processor_count=108, uuid=805df503-cf0d-c6cd-33f3-cb3560ee9fea, L2_cache_size=40MB)
Loading checkpoint shards:  75%|███████▌  | 9/12 [00:05<00:01,  1.72it/s]
_CudaDeviceProperties(name='NVIDIA A100-SXM4-80GB', major=8, minor=0, total_memory=81153MB, multi_processor_count=108, uuid=83b2afae-0102-1408-8043-b11b77d85fc8, L2_cache_size=40MB)
Loading checkpoint shards: 100%|██████████| 12/12 [00:06<00:00,  1.99it/s]
Asking to truncate to max_length but no maximum length is provided and the model has no predefined maximum length. Default to no truncation.
Traceback (most recent call last):
  File "/home/nws8519/git/mw-lifecycle-analysis/p2/quest/python_scripts/090425_batched_olmo_cat.py", line 106, in <module>
    outputs = olmo.generate(**inputs, max_new_tokens=256, do_sample=False)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/nws8519/.conda/envs/olmo/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/nws8519/.conda/envs/olmo/lib/python3.11/site-packages/transformers/generation/utils.py", line 2597, in generate
    result = self._sample(
             ^^^^^^^^^^^^^
  File "/home/nws8519/.conda/envs/olmo/lib/python3.11/site-packages/transformers/generation/utils.py", line 3557, in _sample
    outputs = self(**model_inputs, return_dict=True)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/nws8519/.conda/envs/olmo/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/nws8519/.conda/envs/olmo/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/nws8519/.conda/envs/olmo/lib/python3.11/site-packages/transformers/utils/generic.py", line 969, in wrapper
    output = func(self, *args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/nws8519/.conda/envs/olmo/lib/python3.11/site-packages/transformers/models/olmo2/modeling_olmo2.py", line 667, in forward
    outputs: BaseModelOutputWithPast = self.model(
                                       ^^^^^^^^^^^
  File "/home/nws8519/.conda/envs/olmo/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/nws8519/.conda/envs/olmo/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/nws8519/.conda/envs/olmo/lib/python3.11/site-packages/transformers/utils/generic.py", line 969, in wrapper
    output = func(self, *args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/nws8519/.conda/envs/olmo/lib/python3.11/site-packages/transformers/models/olmo2/modeling_olmo2.py", line 432, in forward
    layer_outputs = decoder_layer(
                    ^^^^^^^^^^^^^^
  File "/home/nws8519/.conda/envs/olmo/lib/python3.11/site-packages/transformers/modeling_layers.py", line 48, in __call__
    return super().__call__(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/nws8519/.conda/envs/olmo/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/nws8519/.conda/envs/olmo/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/nws8519/.conda/envs/olmo/lib/python3.11/site-packages/transformers/models/olmo2/modeling_olmo2.py", line 269, in forward
    hidden_states = self.mlp(hidden_states)
                    ^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/nws8519/.conda/envs/olmo/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/nws8519/.conda/envs/olmo/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/nws8519/.conda/envs/olmo/lib/python3.11/site-packages/transformers/models/olmo2/modeling_olmo2.py", line 224, in forward
    down_proj = self.down_proj(self.act_fn(self.gate_proj(x)) * self.up_proj(x))
                               ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 752.00 MiB. GPU 0 has a total capacity of 79.25 GiB of which 343.50 MiB is free. Including non-PyTorch memory, this process has 78.91 GiB memory in use. Of the allocated memory 70.96 GiB is allocated by PyTorch, and 7.45 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
unsupervised batched olmo categorization pau at Fri Sep 5 01:25:00 CDT 2025
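
The first run above died in torch.OutOfMemoryError inside the OLMo-2 MLP during batched generation. Below is a minimal sketch of the mitigations the error message itself suggests (expandable segments, plus smaller per-call batches under inference mode). The names `olmo` and `tokenizer` are taken from the generate call in the traceback; `generate_in_chunks`, `prompts`, and `chunk_size` are illustrative assumptions, not the script's actual code.

    import os

    # The error message suggests expandable segments when reserved-but-unallocated
    # memory is large (7.45 GiB here). Must be set before CUDA is initialized.
    os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

    import torch

    def generate_in_chunks(olmo, tokenizer, prompts, chunk_size=4):
        # Hypothetical helper: smaller per-call batches cap the peak activation
        # memory in the MLP down_proj line where the 752 MiB allocation failed.
        results = []
        for i in range(0, len(prompts), chunk_size):
            inputs = tokenizer(prompts[i:i + chunk_size], return_tensors="pt",
                               padding=True, truncation=True, max_length=4096).to("cuda")
            with torch.inference_mode():  # no autograd state kept during generation
                out = olmo.generate(**inputs, max_new_tokens=256, do_sample=False)
            results.extend(tokenizer.batch_decode(out, skip_special_tokens=True))
            del inputs, out
            torch.cuda.empty_cache()  # release cached blocks between chunks
        return results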
This is a friendly reminder - the current text generation call will exceed the model's predefined maximum length (4096). Depending on the model, you may observe exceptions, performance degradation, or nothing at all.
unsupervised batched olmo categorization pau at Sun Sep 7 05:04:39 CDT 2025
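
Both tokenizer warnings in these runs point at the same missing setting: truncation was requested without a max_length, and prompt length plus max_new_tokens can overrun the model's 4096-token window. A hedged sketch of one way to address both follows; `tokenizer` and `prompts` are assumed names.

    MAX_CONTEXT = 4096      # the model's predefined maximum length, per the warning
    MAX_NEW_TOKENS = 256    # matches the generate() call in the traceback above

    # An explicit max_length avoids the "Default to no truncation" fallback, and
    # reserving room for the generated tokens keeps each sequence within 4096.
    inputs = tokenizer(
        prompts,
        return_tensors="pt",
        padding=True,
        truncation=True,
        max_length=MAX_CONTEXT - MAX_NEW_TOKENS,
    )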