When we encounter errors like:

```
EOFError: Ran out of input
RuntimeError: unexpected EOF, expected 31322553 more bytes. The file might be corrupted.
requests.exceptions.SSLError: HTTPSConnectionPool(host='zenodo.org', port=443): Max retries exceeded with url: /record/3518331/files/best_weights_ep143.npz?download=1 (Caused by SSLError(SSLZeroReturnError(6, 'TLS/SSL connection has been closed (EOF) (ssl.c:1129)')))
```

we usually run `rm -r ~/.tractseg`.
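A less drastic cleanup would delete only the weight files that are actually corrupted, instead of wiping the whole cache. Since `.npz` files are ZIP archives, the standard library can check their integrity. This is only a sketch: the cache path is taken from the log output below, and `remove_corrupted_weights` is our own hypothetical helper, not part of TractSeg.

```python
import zipfile
from pathlib import Path

# Weight cache directory used by TractSeg, as seen in the log output.
CACHE_DIR = Path.home() / ".tractseg"

def remove_corrupted_weights(cache_dir: Path = CACHE_DIR) -> list:
    """Delete only cached .npz weight files that fail a ZIP integrity
    check, leaving complete downloads untouched."""
    removed = []
    for npz in cache_dir.glob("*.npz"):
        try:
            with zipfile.ZipFile(npz) as zf:
                bad = zf.testzip()  # first corrupted member, or None
        except zipfile.BadZipFile:
            bad = npz.name  # truncated / corrupted download
        if bad is not None:
            npz.unlink()
            removed.append(npz.name)
    return removed
```

With this, TractSeg would re-download only the one or two files that were actually broken, rather than all seven.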
**This does make the errors go away, but it forces us to re-download the same pretrained weights (~140 MB per file) over and over for different subjects. Processing a single subject downloads seven different pretrained-weight files (~140 MB each, about 980 MB in total):**
```
Loading weights from: /home/w/.tractseg/pretrained_weights_tract_segmentation_v3.npz
Downloading pretrained weights (~140MB) ...
Loading weights from: /home/w/.tractseg/pretrained_weights_endings_segmentation_v4.npz
Downloading pretrained weights (~140MB) ...
Loading weights from: /home/w/.tractseg/pretrained_weights_peak_regression_part1_v2.npz
Downloading pretrained weights (~140MB) ...
Loading weights from: /home/w/.tractseg/pretrained_weights_peak_regression_part1_v2.npz
Downloading pretrained weights (~140MB) ...
Loading weights from: /home/w/.tractseg/pretrained_weights_peak_regression_part2_v2.npz
Downloading pretrained weights (~140MB) ...
Loading weights from: /home/w/.tractseg/pretrained_weights_peak_regression_part3_v2.npz
Downloading pretrained weights (~140MB) ...
Loading weights from: /home/w/.tractseg/pretrained_weights_peak_regression_part4_v2.npz
Downloading pretrained weights (~140MB) ...
```
In batch TractSeg processing, this consumes significant network traffic, time, and energy.
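As a stopgap for batch runs, the weights could be pre-fetched once with a retry loop, writing to a temporary name so an interrupted download never masquerades as a finished file. This is a sketch using only the standard library; `fetch_with_retries` and the `.part` naming are our own invention, and only the one URL from the SSL error above is known to us (the other six files would have to be looked up on the Zenodo record).

```python
import time
import urllib.request
from pathlib import Path

# Weight cache directory used by TractSeg, per the log output above.
CACHE_DIR = Path.home() / ".tractseg"

# The one download URL visible in the SSL error message above.
EXAMPLE_URL = ("https://zenodo.org/record/3518331/files/"
               "best_weights_ep143.npz?download=1")

def fetch_with_retries(url: str, dest: Path,
                       attempts: int = 5, backoff: float = 5.0) -> None:
    """Download `url` to `dest`, retrying transient network/SSL failures.
    Writes to a `.part` file first, so a partial download can never be
    mistaken for a complete cached file."""
    if dest.exists():
        return  # already fetched once; no traffic for further subjects
    tmp = dest.with_name(dest.name + ".part")
    for attempt in range(1, attempts + 1):
        try:
            urllib.request.urlretrieve(url, tmp)
            tmp.rename(dest)  # publish only a complete file
            return
        except OSError as exc:  # URLError and SSLError are OSErrors
            print(f"attempt {attempt}/{attempts} failed: {exc}")
            time.sleep(backoff * attempt)  # simple linear backoff
    raise RuntimeError(f"could not download {url}")
```

Running such a script once before a batch job would mean each weight file crosses the network a single time, after which every subject reuses the cached copy.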
Could the program be optimized to minimize time and network-traffic consumption, or is there some other, better solution to this problem?
Thank you.