Stability dataset clustering data loss #10
Comments
I rewrote the algorithm to fix this, which is where I got my numbers from.
Hi @franzigeiger, thanks for bringing this up! It took me some time to get into the code again, and indeed it could be that instead of grabbing all sequences from a cluster selected for training, we only pick the cluster representatives. In other words, it's cluster representatives for train vs. cluster representatives for test, while in the paper we discussed whole clusters for train vs. representatives for test. Before moving on to changing the data, it might be useful to run the baselines again and see whether we need to update our interpretations and results. I don't expect this change to affect performance significantly, but I may be wrong :)
P.S.: feel free to open a PR for the proposed changes!
Hi FLIP authors,
I have been working with the data split routine you applied to the meltome atlas data and found some irregularities. You create the train and test splits based on clusters from mmseqs2, but the notebook routine seems off (in collect_flip/2_meltome_atlas.ipynb).
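For context, here is a minimal sketch of loading such a clustering, assuming the two-column (representative, member) TSV that mmseqs2's easy-cluster workflow writes; the file name and variable names are illustrative and not taken from the notebook:

```python
from collections import defaultdict

import pandas as pd

# mmseqs easy-cluster writes a TSV with one (representative, member) pair per line.
# The path below is illustrative; the notebook may use a different prefix.
pairs = pd.read_csv("clusterRes_cluster.tsv", sep="\t", header=None,
                    names=["representative", "member"])

# Map each cluster representative to all sequences in its cluster
# (the representative appears as one of its own members).
clusters = defaultdict(list)
for rep, member in zip(pairs["representative"], pairs["member"]):
    clusters[rep].append(member)

print(f"{len(clusters)} clusters covering {len(pairs)} sequences")
```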
When creating the mixed dataset based on the clusters, you remove the cluster center datapoints from the set once they are encountered in the full protein list, which I think makes the output datasets incorrect:
Cell 30, last 20 LOC
While removing the sequences is fine for the test set (only the cluster center points are used there anyway), for the training set it holds out all sequences of that cluster that are processed in the loop after the cluster center.
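To make the failure mode concrete, here is a toy reconstruction (not the notebook's actual code) of the pattern described above: once a cluster's center has been seen, the remaining members of that cluster are skipped, so a training cluster contributes little more than its representative.

```python
# Illustration only: toy data, not the meltome notebook's actual logic.
sequences = ["A1", "A2", "A3", "B1", "B2"]          # A* and B* are two clusters
cluster_of = {"A1": "A", "A2": "A", "A3": "A", "B1": "B", "B2": "B"}
centers = {"A": "A1", "B": "B1"}                     # cluster representatives
train_clusters = {"A", "B"}                          # clusters selected for training

seen_centers = set()
train = []
for seq in sequences:
    c = cluster_of[seq]
    if c not in train_clusters:
        continue
    # Buggy pattern: once the center has been encountered, the remaining
    # members of the same cluster are skipped instead of being kept.
    if c in seen_centers:
        continue
    if seq == centers[c]:
        seen_centers.add(c)
    train.append(seq)

print(train)  # ['A1', 'B1'] -- only the representatives survive
```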
After fixing this, I get a training set of 67361 datapoints plus 3134 test datapoints (compared to the 24817 training datapoints reported in the paper).
Am I misunderstanding something here? 67361 is also 80% of the full clustered dataset (84030 entries), which makes more sense given the setting: the mixed set should end up with 80% of all data in train plus only the cluster centers in test, which are obviously far fewer than 20% of all data.
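A minimal sketch of the split semantics argued for here, assuming a `clusters` mapping from each representative to all of its members (as in the earlier sketch): put every member of the training clusters into train, and only the representatives of the held-out clusters into test. Whether the 80% is taken over clusters or over sequences is left aside; function and variable names are illustrative.

```python
import random

def mixed_split(clusters, train_fraction=0.8, seed=0):
    """clusters: dict mapping a cluster representative to all member sequence ids
    (e.g. built from an mmseqs2 cluster TSV)."""
    reps = sorted(clusters)
    random.Random(seed).shuffle(reps)
    n_train = int(round(train_fraction * len(reps)))
    train_reps, test_reps = reps[:n_train], reps[n_train:]

    # Train: every member sequence of every training cluster,
    # not just the representatives.
    train = [seq for rep in train_reps for seq in clusters[rep]]
    # Test: only the representatives of the held-out clusters.
    test = list(test_reps)
    return train, test

# Usage (with the `clusters` mapping from the sketch above):
# train_ids, test_ids = mixed_split(clusters)
```

With whole clusters in train, the training split lands near 80% of the 84030 clustered entries, while the representatives-only test split stays comparatively small, consistent with the 67361/3134 counts above.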
I haven't checked whether the same error occurred in the other datasets, but I would recommend doing so.