diff --git a/_search-plugins/ltr/core-concepts.md b/_search-plugins/ltr/core-concepts.md
index cdfc8e3d5b..df80ff3b5c 100644
--- a/_search-plugins/ltr/core-concepts.md
+++ b/_search-plugins/ltr/core-concepts.md
@@ -48,7 +48,7 @@ Rank?](http://opensourceconnections.com/blog/2017/02/24/what-is-learning-to-rank
 Judgement lists, sometimes referred to as "golden sets", grade
 individual search results for a keyword search. For example, our
-[demo](http://github.com/o19s/elasticsearch-learning-to-rank/tree/master/demo/)
+[demo](https://github.com/opensearch-project/opensearch-learning-to-rank-base/tree/main/demo/)
 uses [TheMovieDB](http://themoviedb.org). When users search for "Rambo"
 we can indicate which movies ought to come back for "Rambo" based on
 our user's expectations of search.
diff --git a/_search-plugins/ltr/logging-features.md b/_search-plugins/ltr/logging-features.md
index 9beab7010e..ab29d243fa 100644
--- a/_search-plugins/ltr/logging-features.md
+++ b/_search-plugins/ltr/logging-features.md
@@ -22,7 +22,7 @@ every feature-query to retrieve the scores of features.
 For the sake of discussing logging, let's say we created a feature set
 like so that works with the TMDB data set from the
-[demo](https://github.com/o19s/OpenSearch-learning-to-rank/tree/master/demo):
+[demo](https://github.com/opensearch-project/opensearch-learning-to-rank-base/tree/main/demo/):
 
 ```json
 PUT _ltr/_featureset/more_movie_features
diff --git a/_search-plugins/ltr/training-models.md b/_search-plugins/ltr/training-models.md
index 0a0bcd07c7..57a553a9a5 100644
--- a/_search-plugins/ltr/training-models.md
+++ b/_search-plugins/ltr/training-models.md
@@ -18,12 +18,12 @@ an extensive overview) and then dig into uploading a model.
 ## RankLib training
 
 We provide two demos for training a model. A fully-fledged [RankLib
-Demo](http://github.com/o19s/elasticsearch-learning-to-rank/tree/master/demo)
+Demo](https://github.com/opensearch-project/opensearch-learning-to-rank-base/tree/main/demo/)
 uses RankLib to train a model from OpenSearch queries. You can see how
 features are
-[logged](http://github.com/o19s/elasticsearch-learning-to-rank-learning-to-rank/tree/master/demo/collectFeatures.py)
+[logged](https://github.com/opensearch-project/opensearch-learning-to-rank-base/blob/main/demo/collectFeatures.py)
 and how models are
-[trained](http://github.com/o19s/elasticsearch-learning-to-rank-learning-to-rank/tree/master/demo/train.py)
+[trained](https://github.com/opensearch-project/opensearch-learning-to-rank-base/blob/main/demo/train.py)
 . In particular, you\'ll note that logging create a RankLib consumable
 judgment file that looks like:
@@ -35,7 +35,7 @@ judgment file that looks like:
 Here for query id 1 (Rambo) we've logged features 1 (a title `TF*IDF`
 score) and feature 2 (a description `TF*IDF` score) for a set of
 documents. In
-[train.py](http://github.com/o19s/elasticsearch-learning-to-rank/demo/train.py)
+[train.py](https://github.com/opensearch-project/opensearch-learning-to-rank-base/blob/main/demo/train.py)
 you'll see how we call RankLib to train one of it's supported models on
 this line:
@@ -71,7 +71,7 @@ set). RankLib does not use feature names when training.
 ## XGBoost example
 
 There's also an example of how to train a model [using
-XGBoost](http://github.com/o19s/elasticsearch-learning-to-rank/tree/master/demo/xgboost-demo).
+XGBoost](https://github.com/opensearch-project/opensearch-learning-to-rank-base/tree/main/demo/xgboost-demo).
 Examining this demo, you'll see the difference in how RankLib is
 executed compared to XGBoost. XGBoost will output a serialization
 format for gradient boosted decision tree that looks like:
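
The hunk above ends just before the serialized-tree example in the docs. For reviewers unfamiliar with that format: XGBoost's `get_dump(dump_format="json")` emits one JSON object per boosted tree, with `nodeid`, `split`, `split_condition`, and `children` fields. The following is a minimal sketch of how such a dump can be produced, not code from the demo; the feature values, relevance grades, group sizes, and training parameters are made up for illustration:

```python
# Illustrative sketch only -- not part of the demo scripts. All data
# values below are hypothetical.
import json

import numpy as np
import xgboost as xgb

# Two features per document (e.g. title and overview TF*IDF scores).
X = np.array([[9.8, 10.7], [12.2, 9.2], [7.0, 11.2], [10.1, 17.0]])
y = np.array([0, 1, 0, 1])  # relevance grades per document

dtrain = xgb.DMatrix(X, label=y)
dtrain.set_group([2, 2])  # two queries with two judged documents each

booster = xgb.train({"objective": "rank:pairwise"}, dtrain, num_boost_round=2)

# get_dump(dump_format="json") returns one JSON string per tree; each node
# carries "nodeid", "split", "split_condition", "yes"/"no"/"missing"
# branches, and "leaf" scores on terminal nodes.
trees = [json.loads(tree) for tree in booster.get_dump(dump_format="json")]
print(json.dumps(trees, indent=2))
```

Note that unless feature names are supplied to the `DMatrix`, the `split` fields in the dump appear as ordinal names (`f0`, `f1`, ...) rather than the named features of an LTR feature set.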