setting up the environment by loading in conda environment at Thu Sep 4 18:31:14 CDT 2025
running the batched olmo categorization job at Thu Sep 4 18:31:14 CDT 2025
[nltk_data] Downloading package punkt_tab to
[nltk_data]     /home/nws8519/nltk_data...
[nltk_data]   Package punkt_tab is already up-to-date!
cuda
NVIDIA A100-SXM4-80GB
_CudaDeviceProperties(name='NVIDIA A100-SXM4-80GB', major=8, minor=0, total_memory=81153MB, multi_processor_count=108, uuid=805df503-cf0d-c6cd-33f3-cb3560ee9fea, L2_cache_size=40MB)
Loading checkpoint shards: 100%|██████████| 12/12 [00:06<00:00, 1.96it/s]
Asking to truncate to max_length but no maximum length is provided and the model has no predefined maximum length. Default to no truncation.
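This warning means the tokenizer was asked to truncate without being given a max_length, so nothing is truncated and prompts can grow unbounded. In transformers the usual fix is an explicit cap, e.g. `tokenizer(texts, truncation=True, max_length=2048)` (the value 2048 is illustrative, not from the script). A minimal sketch of the behavior such a cap enforces:

```python
def truncate_for_generation(token_ids, max_length, side="left"):
    """Cap a token-id sequence at max_length, mirroring what an explicit
    tokenizer(..., truncation=True, max_length=max_length) call would do.
    For decoder-only models the most recent tokens usually matter most,
    so truncating from the left is a common choice for generation."""
    if len(token_ids) <= max_length:
        return list(token_ids)
    if side == "left":
        return list(token_ids[-max_length:])
    return list(token_ids[:max_length])
```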
Traceback (most recent call last):
  File "/home/nws8519/git/mw-lifecycle-analysis/p2/quest/python_scripts/090425_batched_olmo_cat.py", line 106, in <module>
    outputs = olmo.generate(**inputs, max_new_tokens=256, do_sample=False)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/nws8519/.conda/envs/olmo/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/nws8519/.conda/envs/olmo/lib/python3.11/site-packages/transformers/generation/utils.py", line 2597, in generate
    result = self._sample(
             ^^^^^^^^^^^^^
  File "/home/nws8519/.conda/envs/olmo/lib/python3.11/site-packages/transformers/generation/utils.py", line 3557, in _sample
    outputs = self(**model_inputs, return_dict=True)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/nws8519/.conda/envs/olmo/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/nws8519/.conda/envs/olmo/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/nws8519/.conda/envs/olmo/lib/python3.11/site-packages/transformers/utils/generic.py", line 969, in wrapper
    output = func(self, *args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/nws8519/.conda/envs/olmo/lib/python3.11/site-packages/transformers/models/olmo2/modeling_olmo2.py", line 667, in forward
    outputs: BaseModelOutputWithPast = self.model(
                                       ^^^^^^^^^^^
  File "/home/nws8519/.conda/envs/olmo/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/nws8519/.conda/envs/olmo/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/nws8519/.conda/envs/olmo/lib/python3.11/site-packages/transformers/utils/generic.py", line 969, in wrapper
    output = func(self, *args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/nws8519/.conda/envs/olmo/lib/python3.11/site-packages/transformers/models/olmo2/modeling_olmo2.py", line 432, in forward
    layer_outputs = decoder_layer(
                    ^^^^^^^^^^^^^^
  File "/home/nws8519/.conda/envs/olmo/lib/python3.11/site-packages/transformers/modeling_layers.py", line 48, in __call__
    return super().__call__(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/nws8519/.conda/envs/olmo/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/nws8519/.conda/envs/olmo/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/nws8519/.conda/envs/olmo/lib/python3.11/site-packages/transformers/models/olmo2/modeling_olmo2.py", line 269, in forward
    hidden_states = self.mlp(hidden_states)
                    ^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/nws8519/.conda/envs/olmo/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/nws8519/.conda/envs/olmo/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/nws8519/.conda/envs/olmo/lib/python3.11/site-packages/transformers/models/olmo2/modeling_olmo2.py", line 224, in forward
    down_proj = self.down_proj(self.act_fn(self.gate_proj(x)) * self.up_proj(x))
                ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 752.00 MiB. GPU 0 has a total capacity of 79.25 GiB of which 343.50 MiB is free. Including non-PyTorch memory, this process has 78.91 GiB memory in use. Of the allocated memory 70.96 GiB is allocated by PyTorch, and 7.45 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
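The allocation fails inside the OLMo-2 MLP forward during `generate`, with 70.96 GiB already held by PyTorch, so peak activation memory is the problem rather than model weights alone. The usual mitigations are smaller micro-batches, capped input lengths (see the truncation warning above), and, per the error's own hint, setting `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True` before CUDA is initialized. A minimal sketch of micro-batching (the `chunked` helper is illustrative, not from the script):

```python
import os

# Per the error message's hint: must be set before torch initializes CUDA,
# i.e. before the first CUDA allocation, ideally before `import torch`.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "expandable_segments:True")


def chunked(items, batch_size):
    """Yield successive slices of at most batch_size items, so each
    generate() call sees a smaller batch and a smaller peak activation."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]
```

Each micro-batch would then be tokenized and passed to `olmo.generate(...)` in turn, trading wall-clock time for peak memory.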
unsupervised batched olmo categorization pau at Fri Sep 5 01:25:00 CDT 2025