2020-CIKM-Learning to Form Skill-based Teams of Experts #256

thangk opened this issue Sep 6, 2024 · 0 comments

Link: https://dl.acm.org/doi/10.1145/3340531.3412140

Main problem

Previous work overlooked the sparsity of collaboration networks, in which the majority of experts have participated in only a few teams. These methods accounted for only a small portion of the collaboration history and were computationally expensive. These shortcomings result in poor prediction of future teams and high training costs.

Proposed method

The authors propose a variational Bayesian neural network (vBnn), which leverages Bayes' theorem to place uncertainty over the network's parameters: each weight is learned as a probability distribution rather than a point estimate.
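
Below is a minimal sketch (my own illustration, not the paper's code) of what a variational Bayesian linear layer could look like in PyTorch, using the standard reparameterization trick. It also shows concretely why the parameter count doubles: every weight is backed by a learned mean and a learned scale.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesianLinear(nn.Module):
    """A linear layer whose weights are distributions, not point estimates."""

    def __init__(self, in_features, out_features):
        super().__init__()
        # Two parameter tensors per weight matrix (mean + pre-softplus scale),
        # hence double the parameters of an ordinary nn.Linear.
        self.weight_mu = nn.Parameter(torch.zeros(out_features, in_features))
        self.weight_rho = nn.Parameter(torch.full((out_features, in_features), -3.0))
        self.bias_mu = nn.Parameter(torch.zeros(out_features))
        self.bias_rho = nn.Parameter(torch.full((out_features,), -3.0))

    def forward(self, x):
        # Reparameterization: sample w = mu + sigma * eps with eps ~ N(0, 1),
        # so gradients flow through mu and rho during training.
        weight_sigma = F.softplus(self.weight_rho)
        bias_sigma = F.softplus(self.bias_rho)
        weight = self.weight_mu + weight_sigma * torch.randn_like(weight_sigma)
        bias = self.bias_mu + bias_sigma * torch.randn_like(bias_sigma)
        return F.linear(x, weight, bias)
```

At inference, predictions would typically be averaged over several stochastic forward passes, which is also where the uncertainty estimates come from.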

My Summary

The proposed method delivers better performance than the existing state-of-the-art neural-network-based model by Sapienza et al. (one of the runners-up in this paper's test results, alongside RRN by Wu et al.). However, I find that fully understanding the workings of the architecture requires deeper mathematical knowledge, since it uses Bayes' theorem to compute the uncertainty of the parameters. The paper also mentions that the number of parameters must double for this computation to work (each weight is represented by a mean and a variance rather than a single value). So it's definitely a complex architecture compared to something like the vanilla Transformer model, in my opinion. However, the paper only evaluated on one dataset, DBLP, with about 33,000 teams. I'd like to see how it performs on other datasets (e.g., a dataset with 150,000 teams or more).

Datasets

DBLP with 33,002 teams, 2,000 skills, and 2,470 experts
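
For context, here is a minimal sketch of how I'd expect each DBLP team to be encoded for this kind of model: a multi-hot skill vector as input and a multi-hot expert vector as target. This is my assumption about the setup, not code from the paper; the dimensions match the dataset statistics above, and the function name and index lists are hypothetical.

```python
import numpy as np

NUM_SKILLS, NUM_EXPERTS = 2000, 2470  # per the DBLP statistics above

def encode_team(skill_ids, expert_ids):
    """Turn a team's skill/expert index lists into multi-hot vectors."""
    x = np.zeros(NUM_SKILLS, dtype=np.float32)
    y = np.zeros(NUM_EXPERTS, dtype=np.float32)
    x[skill_ids] = 1.0
    y[expert_ids] = 1.0
    return x, y

# e.g., a team requiring skills 4 and 17, formed by experts 3, 58, and 412
x, y = encode_team([4, 17], [3, 58, 412])
```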