The overall workflow executes successfully, but the output files of each step can only be downloaded from the website "workflow.deepmodeling.com". How can I retain the output files of each step on the remote machine?
All the computing results are cleaned up when the calculation finishes. I'm wondering if there's a way to save the files in a directory of my choosing instead of in the temporary directories with hashed names.
Here is my DispatcherExecutor:
```python
DispatcherExecutor(
    host=host,
    port=port,
    username=username,
    password=password,
    remote_root="/data/home/tmp",
    image_pull_policy="IfNotPresent",
    machine_dict={
        "batch_type": "slurm",
        "local_root": "/data/home/usr",
        "context_type": "SSHContext",
        "clean_asynchronously": False,  # Python False, not JSON-style false
    },
    resources_dict={
        "number_node": 1,
        "cpu_per_node": 10,
        "gpu_per_node": 1,
        "queue_name": "gpu",
        "group_size": 1,
        "custom_flags": [
            "#SBATCH --time=0-1000:00:00",
        ],
        "source_list": ["activate deepmd-kit"],
        "batch_type": "Slurm",
    },
    merge_sliced_step=False,
)
```
The parameter "local_root" doesn't help: nothing appears in the local_root directory.