Which area is used as the testing area for STPLS3D semantic segmentation? #12
Comments
WMSC is the validation data for the experiments reported in Tables 3 and 6 of the paper. To replicate our results, I would suggest starting with KPConv (we have provided simple instructions for running it) and making sure everything works fine there, then moving on to RandLA-Net and SCF-Net.
Hi @meidachen, OK, thanks. I will run KPConv first, and then move to RandLA-Net and SCF-Net. In addition, I'm not clear on one point: the STPLS3D dataset is only split into training and validation, right? Not training, validation, and testing. So during training, WMSC is used for evaluation to select the best model, and then WMSC is also used to test that model?
That is correct. In the paper we tested the trained model on another dataset (FDc), which cannot be released. So on the released dataset, you can either validate and test on WMSC, or you could do cross-validation using all four real-world datasets.
Hi @meidachen, so on the released dataset we can validate and test on WMSC. This operation is similar to S3DIS, where Area 5 is used for testing. But I think it is not reasonable to use the same area for validation and testing at the same time; the validation and testing sets should have no intersection. Alternatively, if WMSC is used only for testing and not for validation, reporting the average over 3 or 5 runs would also be OK. Thanks.
You are right, it is better to have validation and testing sets without intersection, and yes, I was following S3DIS (testing on Area 5) when releasing STPLS3D. One of the main reasons we can't really do a train/validation/test split is the lack of real-world data.
In this case, I think cross-validation would be a better option.
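The cross-validation suggested above (rotating which real-world area is held out for testing, training on the rest) could be sketched as a leave-one-area-out loop. This is a minimal illustration, not code from the repository; the exact area names in the list are assumptions for the example (only WMSC is confirmed in this thread).

```python
# Leave-one-area-out cross-validation over the real-world STPLS3D areas.
# NOTE: the area names below are assumed for illustration; check the
# released dataset for the actual names (WMSC is one of them).
AREAS = ["RA", "USC", "WMSC", "OCCC"]

def leave_one_out_folds(areas):
    """Yield (train_areas, test_area) pairs, one fold per held-out area."""
    for test_area in areas:
        train_areas = [a for a in areas if a != test_area]
        yield train_areas, test_area

if __name__ == "__main__":
    for train_areas, test_area in leave_one_out_folds(AREAS):
        print(f"train on {train_areas}, test on {test_area}")
```

Final metrics would then be averaged over the four folds, which avoids the validation/testing overlap discussed above.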
An error occurs when the two sets of data have inconsistent category labels during model fine-tuning with RandLA-Net. Can you post the fine-tuning code? Thanks!
Hi @volare1996, which two sets of data are you using?
Hi @meidachen,
In the code of RandLA-Net, SCF-Net, and KPConv, following Tables 3 and 6 of the paper, I want to know clearly which area of the RealWorldData is used as the testing area. WMSC_split? Or which one? I want to have a fair comparison.
The STPLS3D dataset is only split into training and validation, right? Not training, validation, and testing.
In addition, following data_preparation_STPLS3D.py in RandLA-Net and SCF-Net, even when using the .txt files, the script uses RealWorldData for both training and testing instead of the Synthetic dataset. To train on both RealWorldData and Synthetic data and test on one area of RealWorldData, the dataset preparation could follow data_preparation_STPLS3D.py in KPConv.
Thanks.
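The split setup discussed in this thread (synthetic tiles plus the non-WMSC real-world areas for training, WMSC held out for evaluation) could be sketched roughly as below. This is a hypothetical illustration, not the repository's actual data_preparation_STPLS3D.py; the directory layout, file extension, and `assign_splits` helper are all assumptions.

```python
# Hypothetical sketch of assigning point-cloud files to splits:
# synthetic tiles and all real-world areas except WMSC go to training,
# while WMSC is held out for evaluation. File layout is assumed.
from pathlib import Path

def assign_splits(synthetic_dir, real_dir, holdout="WMSC"):
    """Return (train_files, test_files) lists of Path objects."""
    train, test = [], []
    # All synthetic tiles are used for training.
    train.extend(sorted(Path(synthetic_dir).glob("*.ply")))
    # Real-world areas: hold out the named area, train on the rest.
    for f in sorted(Path(real_dir).glob("*.ply")):
        if holdout in f.stem:
            test.append(f)
        else:
            train.append(f)
    return train, test
```

A setup like this keeps the held-out area out of training entirely, matching the S3DIS-style protocol (test on one fixed area) described earlier in the thread.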